I recently replaced my favourite OmniOS VM in my HomeLab with TrueNAS Core. The main reason was that I wanted to play around with the very cool VAAI features available in TrueNAS Core. I used VAAI ages ago with a NetApp system, and I really liked that it not only eases the pain of releasing freed-up space from your LUNs, but also dramatically reduces the time it takes to deploy masses of virtual machines. With VAAI, the whole copy operation is carried out on the storage itself instead of the data flowing from the storage to the ESXi host and back again. But that is another topic, which I plan to blog about pretty soon.

Back to this topic. Another reason I replaced my OmniOS VM was that for some reason I never managed to push the NIC throughput anywhere close to 10Gbps; it maxed out at around 6Gbps. I never thought this would be an issue on TrueNAS Core, but my tests revealed just the opposite.

Start iperf3 server on ESXi

ssh esxi
cd /usr/lib/vmware/vsan/bin
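# ESXi ships iperf3 as part of vSAN, but it refuses to execute the original
# binary directly, so we make a copy and run that instead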
cp iperf3 iperf3.copy
./iperf3.copy -s
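
One thing to watch out for: if the client cannot connect, the ESXi firewall is most likely blocking the iperf3 port (5201). For a quick lab test you can temporarily disable the firewall and re-enable it once you are done:

esxcli network firewall set --enabled false
# ... run the tests ...
esxcli network firewall set --enabled true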

Start iperf3 client on TrueNAS

I was able to send (TrueNAS → ESXi) with a very high throughput:

root@truenas[~]# iperf3 -c 192.168.2.10
Connecting to host 192.168.2.10, port 5201
[  5] local 192.168.2.94 port 17515 connected to 192.168.2.10 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  2.52 GBytes  21.6 Gbits/sec    0   1.63 MBytes
[  5]   1.00-2.00   sec  2.54 GBytes  21.8 Gbits/sec    0   2.00 MBytes
[  5]   2.00-3.00   sec  2.59 GBytes  22.3 Gbits/sec    0   2.00 MBytes
[  5]   3.00-4.00   sec  2.46 GBytes  21.1 Gbits/sec    0   2.00 MBytes
[  5]   4.00-5.00   sec  2.61 GBytes  22.4 Gbits/sec    0   2.00 MBytes
[  5]   5.00-6.00   sec  2.59 GBytes  22.3 Gbits/sec    0   2.00 MBytes
[  5]   6.00-7.00   sec  2.63 GBytes  22.6 Gbits/sec    0   2.00 MBytes
[  5]   7.00-8.00   sec  2.62 GBytes  22.5 Gbits/sec    0   2.00 MBytes
[  5]   8.00-9.00   sec  2.63 GBytes  22.6 Gbits/sec    0   2.00 MBytes
[  5]   9.00-10.00  sec  2.63 GBytes  22.6 Gbits/sec    0   2.00 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  25.8 GBytes  22.2 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  25.8 GBytes  22.2 Gbits/sec                  receiver

iperf Done.

But receiving (tested with iperf3's reverse mode, -R) was even worse than on OmniOS:

root@truenas[~]# iperf3 -c 192.168.2.10 -R
Connecting to host 192.168.2.10, port 5201
Reverse mode, remote host 192.168.2.10 is sending
[  5] local 192.168.2.94 port 41045 connected to 192.168.2.10 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   299 MBytes  2.51 Gbits/sec
[  5]   1.00-2.00   sec   236 MBytes  1.98 Gbits/sec
[  5]   2.00-3.00   sec   228 MBytes  1.92 Gbits/sec
[  5]   3.00-4.00   sec   250 MBytes  2.10 Gbits/sec
[  5]   4.00-5.00   sec   217 MBytes  1.81 Gbits/sec
[  5]   5.00-6.00   sec   245 MBytes  2.06 Gbits/sec
[  5]   6.00-7.00   sec   228 MBytes  1.91 Gbits/sec
[  5]   7.00-8.00   sec   228 MBytes  1.91 Gbits/sec
[  5]   8.00-9.00   sec   234 MBytes  1.97 Gbits/sec
[  5]   9.00-10.00  sec   241 MBytes  2.02 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  2.35 GBytes  2.02 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  2.35 GBytes  2.02 Gbits/sec                  receiver

Very disappointing. So, as usual, I googled and stumbled across this forum post:

https://www.truenas.com/community/threads/10gbe-esxi-6-5-vmxnet-3-performance-is-poor-with-iperf3-tests.63173/post-452479

which mentions that adding the lro and tso options to the NIC greatly increases throughput. You can simply add these two options in the TrueNAS WebUI under Interface Settings.
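
If you want to verify the effect first without touching the persistent configuration, the same change can be made on the fly from the TrueNAS shell (assuming vmx0 is your interface; this will not survive a reboot):

root@truenas[~]# ifconfig vmx0 lro tso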

After adding the two options, I rebooted my TrueNAS VM, and voilà, we can now also receive at a very high throughput:

root@truenas[~]# iperf3 -c 192.168.2.10 -R
Connecting to host 192.168.2.10, port 5201
Reverse mode, remote host 192.168.2.10 is sending
[  5] local 192.168.2.94 port 36979 connected to 192.168.2.10 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  2.43 GBytes  20.9 Gbits/sec
[  5]   1.00-2.00   sec  2.51 GBytes  21.6 Gbits/sec
[  5]   2.00-3.00   sec  2.77 GBytes  23.8 Gbits/sec
[  5]   3.00-4.00   sec  2.77 GBytes  23.8 Gbits/sec
[  5]   4.00-5.00   sec  2.77 GBytes  23.8 Gbits/sec
[  5]   5.00-6.00   sec  2.77 GBytes  23.8 Gbits/sec
[  5]   6.00-7.00   sec  2.78 GBytes  23.8 Gbits/sec
[  5]   7.00-8.00   sec  2.78 GBytes  23.9 Gbits/sec
[  5]   8.00-9.00   sec  2.80 GBytes  24.0 Gbits/sec
[  5]   9.00-10.00  sec  2.77 GBytes  23.8 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  27.1 GBytes  23.3 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  27.1 GBytes  23.3 Gbits/sec                  receiver

iperf Done.

You can look up the applied options using ifconfig:

vmx0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=e407bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 00:0c:29:c0:51:76
        inet 192.168.2.94 netmask 0xffffff00 broadcast 192.168.2.255
        media: Ethernet autoselect
        status: active
        nd6 options=9<PERFORMNUD,IFDISABLED>

Of course, you might be perfectly fine with 2Gbps if your TrueNAS VM only serves spinning rust, but if you have SSDs or even NVMe drives, even 6Gbps becomes a limiting factor.

6Gbps ~ 750MByte/s
2Gbps ~ 250MByte/s

I, for example, use an NVMe drive, which can deliver more than 2500MByte/s. It is a very simple and small change, but it can have an enormous effect on your overall storage performance.
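
For reference: 2500MByte/s is roughly 20Gbps, so only now, with the vmxnet3 adapter pushing 22-23Gbits/sec, is the network no longer the bottleneck for a single NVMe drive.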