Poor disk performance (iSCSI?)

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

cdcooper
Posts: 3
Joined: Wed Dec 15, 2021 3:02 pm

Wed Dec 15, 2021 4:02 pm

So I'm testing this using the CentOS appliances on vSphere, and right away I'm hitting a significant disk penalty from either the extra VM layer or the iSCSI connection.
Using a Windows VM and diskspd, a disk on the underlying VMFS datastore averages 240 MiB/s for both reads and writes. A second disk on the same VM sits on the StarWind iSCSI datastore, and there I only get 82 MiB/s for each of read and write. Just for kicks I set up a third disk on our NFS NAS with spinning rust, and that gives 150 MiB/s over a 10Gb link. I was fully expecting some performance degradation on the StarWind datastore due to the additional layers, but not this much. This is my first time using iSCSI, and I appear to be maxing out at under 1 Gbps on the appliance interface, so I'm looking for a configuration mistake first.
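(For anyone wanting to reproduce the comparison: a large-block sequential diskspd run generally looks something like the lines below, run from PowerShell. The file path, file size, and thread counts are placeholders, not my exact parameters.)

# 60-second sequential read: 64 KiB blocks, 4 threads, 8 outstanding I/Os per thread,
# software and hardware write caching disabled; E: is a placeholder for the disk under test
diskspd.exe -c20G -b64K -d60 -t4 -o8 -Sh -L E:\diskspd-test.dat
# same run as 100% sequential writes
diskspd.exe -c20G -b64K -d60 -t4 -o8 -w100 -Sh -L E:\diskspd-test.dat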

All of this is being done on a single host, I haven't set up any sync or replication with the second host/appliance.
The StarWind appliance is on a RAID1 datastore; its storage drive is a 500GB eager-zeroed XFS disk on a RAID5 datastore backed by SATA SSDs. The image is 499GB with 512b sectors. The VMXNET3 interfaces are set to 9000 MTU.
The vSphere iSCSI target is connected to the correct 10Gb VMXNET3 interface. No physical adapters are attached to the vSwitch, so all communication should remain local. MTU is set to 9000 on both the vSwitch and the VMkernel port.
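(Side note: jumbo frames can be verified end to end on that path with a don't-fragment vmkping from the ESXi shell; vmk1 and the target address below are placeholders for the iSCSI VMkernel port and the appliance's data IP.)

# 8972 bytes = 9000 MTU minus 28 bytes of IP/ICMP headers; -d sets don't-fragment
vmkping -I vmk1 -d -s 8972 <appliance-iSCSI-IP>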

Any help or pointers would be appreciated.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Dec 16, 2021 9:07 pm

Please answer a couple of questions.
1. What are the test VM settings: amount of RAM, number of vCPUs, disk types? A single VM does not saturate reads, so please run multiple test VMs.
2. What is the performance like on XFS? We have FIO built into the VM (see the sketch at the end of this post).

See the performance benchmarking methodology (for Windows though): https://www.starwindsoftware.com/best-p ... practices/
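As a rough sketch of the kind of FIO run that would answer question 2, a sequential read against the XFS mount inside the CVM could look like the following. The mount path, block size, and job counts are only example values, not a prescribed configuration.

# 64k sequential reads, 4 jobs at queue depth 8, direct I/O, 60-second time-based run
fio --name=xfs-seqread --filename=/mnt/xfs-test/fio.dat --rw=read --bs=64k \
    --iodepth=8 --numjobs=4 --direct=1 --ioengine=libaio --size=4G \
    --runtime=60 --time_based --group_reporting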
cdcooper
Posts: 3
Joined: Wed Dec 15, 2021 3:02 pm

Wed Jan 05, 2022 10:57 pm

The VMs I'm using to test are 4 vCPU, 8GB RAM, LSI SAS, 100GB Thick Eager Zeroed, NTFS

I've now tested with 2 and 3 VMs, and while aggregate throughput improves as VMs are added, the results are consistent with the original tests. Read and write are pretty much the same, so I'm only showing read speeds.
1 VM (local) = 240 MiB/s
2 VMs (local) = 202 MiB/s each
1 VM (iSCSI) = 82 MiB/s
2 VMs (iSCSI) = 54 MiB/s each
3 VMs (iSCSI) = 33 MiB/s each

I tested the XFS storage with FIO, and my read was 316 MiB/s using the following parameters I found online: fio -name zfs -rw=randread -runtime=500 -iodepth 1 -ioengine libaio -direct=1 -bs=4k -size=100000000 -numjobs=500

So it looks like the XFS performance is OK and I am indeed losing speed across the iSCSI link. This host is empty save for the VSAN VM and my testing VMs.

Any ideas where to look?
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Jan 06, 2022 4:26 am

Did you try tweaking the schedulers according to https://knowledgebase.starwindsoftware. ... rformance/? (See the sketch at the end of this post.)
Please also consider increasing the number of vCPUs to 8 (in 2 sockets).
It might also help to enable write-back cache on the underlying RAID controller (if you use one).
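To check and change the scheduler on the appliance's storage device, the generic Linux approach is shown below; sdb is just an example device name, and the article above is the authoritative source for which scheduler to pick.

cat /sys/block/sdb/queue/scheduler            # current scheduler is shown in brackets
echo noop > /sys/block/sdb/queue/scheduler    # switch it (noop/deadline/cfq on older kernels)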
cdcooper
Posts: 3
Joined: Wed Dec 15, 2021 3:02 pm

Thu Jan 06, 2022 6:26 am

Upping the test VM to 8 vCPUs didn't make any difference. The StarWind appliance's scheduler for sdb was set to cfq; I tried both noop and deadline, with no significant difference in either case.
Just for kicks I also ran an iperf test against the iSCSI NIC on the StarWind server to verify it is indeed capable of 10G traffic - it is.
The host is running a PERC H710 and write-back is already enabled.
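(For anyone following along, the iperf check was the standard server-on-one-end, client-on-the-other pattern; the iperf3 flags below are an illustration rather than my exact invocation.)

iperf3 -s                                    # on the StarWind appliance
iperf3 -c <appliance-iSCSI-IP> -P 4 -t 30    # from the test VM: 4 parallel streams, 30 seconds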
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Jan 06, 2022 2:19 pm

Try connecting the test LUN directly to a test VM over the management link, format the disk as NTFS, and run the test inside the VM.
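A minimal sketch of doing that with the built-in Windows initiator from inside the guest, using an elevated command prompt (the portal address and IQN are placeholders for your appliance's management IP and target name):

iscsicli QAddTargetPortal <appliance-management-IP>
iscsicli ListTargets
iscsicli QLoginTarget <target-IQN>

The new disk then appears in Disk Management, where it can be brought online, initialized, and formatted as NTFS.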