Dear Yaroslav,
I've run many tests in various combinations,
and below are my final results:
I used a three-node HA configuration of StarWind vSAN Free with a 1 TB volume (a volume of 3 TB cannot be created via PowerShell) on Microsoft Hyper-V Server 2019.
Each node has:
- one dual-port Mellanox ConnectX-6 100 Gbit/s adapter for synchronization;
- one dual-port Mellanox ConnectX-4 25 Gbit/s adapter for iSCSI/heartbeat;
- one dual-port Mellanox ConnectX-4 25 Gbit/s adapter for cluster and management traffic in an LACP team;
- one dual-port Marvell 4xxx 10 Gbit/s adapter for the VM external network.
On each server, a Simple volume in Storage Spaces containing 8x 480 GB Micron 5200 MAX disks was used as the storage for the StarWind image files.
Each server has 512 GB of RAM and 2x AMD EPYC 7543 processors (2.8 GHz, 32 cores), with HT (SMT) disabled and the BIOS configured for maximum performance.
1. If I use the 127.0.0.1 address, I am limited to about 40 000 IOPS of 4K random write.
2. If I use the 10.1.X.Y IP addresses, the diskspd results are much better:
~90 000 IOPS 4K random write, ~150 000 IOPS 4K random read, and ~1 800 MB/s 128K sequential write. CPU consumption peaks at about 25% on the 2x AMD EPYC 7543 (32-core, 2.8 GHz) processors.
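For anyone who wants to repeat the test, a diskspd invocation of roughly this shape measures the 4K random write case. The queue depth, thread count, duration, file size, and path below are illustrative assumptions, not necessarily the exact values I used:

```shell
# 4K random write, 60 s, 8 threads, 32 outstanding I/Os per thread,
# software and hardware caching disabled (-Sh), latency stats (-L),
# against a 10 GiB test file created on the StarWind device.
diskspd.exe -b4K -d60 -t8 -o32 -r -w100 -Sh -L -c10G D:\testfile.dat
```

For the 128K sequential write case, replace `-b4K -r` with `-b128K -si`.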
At the same time, the underlying storage (the Simple volume on 8x 480 GB Micron 5200 MAX disks) delivers ~450 000 IOPS 4K random write, ~300 000 IOPS 4K random read, and ~3 500 MB/s 128K random write.
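To put the IOPS figures side by side as bandwidth: at a 4 KiB block size the conversion is simply IOPS x block size. A quick sanity check (decimal MB):

```shell
# Convert 4K random write IOPS to MB/s (decimal) and compare
# the StarWind HA device against the raw Storage Spaces volume.
awk 'BEGIN {
  sw  = 90000  * 4096 / 1000000   # StarWind device
  raw = 450000 * 4096 / 1000000   # underlying storage
  printf "StarWind: %.1f MB/s, raw: %.1f MB/s, gap: %.1fx\n", sw, raw, raw/sw
}'
# prints "StarWind: 368.6 MB/s, raw: 1843.2 MB/s, gap: 5.0x"
```

In other words, the raw volume sustains roughly five times the 4K write rate that the StarWind device exposes.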
Below are the disk queue results inside the VM and for the underlying storage:
- VM.png (disk queue inside the VM)
- X.png (disk queue on the underlying storage)
As you can see, the disk queue inside the VM is about 197, while at the same time the disk queue on the underlying storage is about 9. So I think StarWind is the bottleneck.
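The queue-length values are the standard PhysicalDisk performance counters; if anyone wants to sample them from the command line rather than from Performance Monitor, something like this works (the instance name is an example and depends on the disk layout):

```shell
# Sample "Current Disk Queue Length" once per second, 10 samples.
# Replace "_Total" with a specific instance, e.g. "0 C:", as needed.
typeperf "\PhysicalDisk(_Total)\Current Disk Queue Length" -si 1 -sc 10
```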
When I try to copy a file inside my VM, I get the following results:
- copy_file.png (file copy throughput inside the VM)
If I use the 127.0.0.1 address, the copy results are better, with an average throughput of about 150 MB/s.
I hope my results will be helpful for anyone looking for information about StarWind performance, and that they help you make your product better.
Best regards,
Yury