Hello guys,
if you are interested in getting full speed out of your sync and iSCSI NICs, this thread is for you.
On a fresh 2-node Hyper-V cluster we ran some performance tests and network tuning. The hardware consists of 2x Dell R450 servers with 2x Mellanox ConnectX-5 (10/25GbE) cards for iSCSI and sync traffic. All 4 ports are directly connected to each other.
RAID Controller: Dell H755 with 8x vSAS SSD Drives in RAID-10
Operating System: Microsoft Windows Server 2022
Systems were firmware-updated via the Lifecycle Controller and all drivers are up to date.
In initial iperf tests we found that the Mellanox cards were maxing out at about 4 Gbit/s with the following client command: iperf.exe -c 172.16.10.12 -t 150 -i 15 -P 12 -w 16M
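For reference, a minimal sketch of how such a test is run between two nodes (the IP address is the one from our test, the server side is implied by iperf's client/server model):

```shell
# On the receiving node (server side), start iperf in listen mode:
iperf.exe -s

# On the sending node (client side):
#   -t 150  run the test for 150 seconds
#   -i 15   print an interim report every 15 seconds
#   -P 12   use 12 parallel TCP streams
#   -w 16M  request a 16 MB TCP window per stream
iperf.exe -c 172.16.10.12 -t 150 -i 15 -P 12 -w 16M
```

A single stream often cannot saturate a 25GbE link, which is why multiple parallel streams (-P) and a large window (-w) are used here.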
After disabling the TCP autotuning level of Windows Server 2022, the speed jumped to 11 Gbit/s (netsh int tcp set global autotuninglevel=disabled).
After that we deactivated Receive Side Scaling (RSS) in the Mellanox cards' driver settings, and then we got the full speed of 24-25 Gbit/s per link.
So if you want to get full speed on your Windows boxes, try it out:
1. netsh int tcp show global -> check whether autotuninglevel is still enabled (the default)
2. netsh int tcp set global autotuninglevel=disabled -> disable it
3. deactivate RSS in the Mellanox cards' advanced driver settings
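The three steps above can also be done from an elevated prompt; a minimal sketch (the adapter names in step 3 are examples, adjust them to your environment):

```shell
:: 1. Check the current TCP global settings
::    (look for "Receive Window Auto-Tuning Level")
netsh int tcp show global

:: 2. Disable the receive window autotuning level
netsh int tcp set global autotuninglevel=disabled

:: 3. Disable RSS on the Mellanox ports only, via PowerShell
::    (adapter names here are assumptions, check Get-NetAdapter first)
powershell -Command "Disable-NetAdapterRss -Name 'iSCSI-1','iSCSI-2','Sync-1','Sync-2'"

:: Verify that RSS is now off on those adapters
powershell -Command "Get-NetAdapterRss -Name 'iSCSI-*','Sync-*' | Select-Object Name, Enabled"
```

Disabling autotuning is a global setting, so test the impact on your other workloads too; the RSS change can be scoped to just the iSCSI/sync adapters as shown.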
After these settings, the full synchronization speed of StarWind VSAN jumped from ~500 MB/s to ~2000 MB/s as well.
best regards