Poor StarWind performance - what am I doing wrong?



softmaster
Posts: 24
Joined: Sat Jan 07, 2017 10:30 pm

Sat Sep 23, 2017 12:46 pm

Hi,
I've spent a lot of time running performance tests of a StarWind-based iSCSI target in a virtual ESXi environment (ESXi 5.5 and 6.0) with different configurations.
I tested both the Microsoft and VMware iSCSI initiators, Linux- and Windows-based StarWind storage, and both a physical 20 Gb/s InfiniBand network and a virtual VMXNET3-based connection inside the same vSphere host.
Unfortunately, I have to conclude that StarWind iSCSI target performance is far from optimal. Despite very fast physical storage (I tested NVMe PCIe disks; RAID0, RAID10 and RAID60 arrays built from 20 SAS 12 Gb/s HDDs; even a RAM disk), the StarWind iSCSI target delivered less than 30%-50% of the physically possible throughput to the initiator. By that I mean the maximum possible sequential transfer rate, which I believe is an important metric because it shows the efficiency of the iSCSI layer. In real life this speed matters for backup/recovery and bulk read/write operations. A 10%-20% shortfall in random read/write is unpleasant but not fatal, whereas a 48-hour backup/restore window instead of a 6-hour one means two days of downtime for the whole business instead of an inconspicuous overnight job.

Just a few numbers (these are results for a single StarWind node; based on my real tests, a dual-node StarWind system reads at close to 200% and writes at about 80%-90% of single-node speed):

Direct speed of NVMe PCIe disks:

Sequential Read (Q= 32,T= 1) : 3034 MB/s
Sequential Write (Q= 32,T= 1) : 2535 MB/s

StarWind iSCSI target on the same NVMe PCIe disks:

Sequential Read (Q= 32,T= 1) : 610 MB/s
Sequential Write (Q= 32,T= 1) : 730 MB/s


Direct speed of RAID0 built from 20 SAS 12 Gb/s HDDs:

Sequential Read (Q= 32,T= 1) : 3340 MB/s
Sequential Write (Q= 32,T= 1) : 3750 MB/s

StarWind iSCSI target on the same RAID0:

Sequential Read (Q= 32,T= 1) : 615 MB/s
Sequential Write (Q= 32,T= 1) : 590 MB/s



Direct speed of RAM drive:

Sequential Read (Q= 32,T= 1) : 7240 MB/s
Sequential Write (Q= 32,T= 1) : 5315 MB/s

StarWind iSCSI target on the same RAM drive:

Sequential Read (Q= 32,T= 1) : 940 MB/s
Sequential Write (Q= 32,T= 1) : 815 MB/s
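For reference, the figures above look like single-thread, queue-depth-32 sequential results. On a Linux StarWind node, a roughly equivalent test could be sketched with fio; the device path is a placeholder and the exact benchmark tool used in the original tests is not stated, so treat the parameters as assumptions:

```shell
# Approximate a QD=32, single-thread sequential read test with fio.
# /dev/sdX is a placeholder for the iSCSI-attached device; requires
# fio with the libaio engine. Swap --rw=read for --rw=write to test writes
# (destructive on a raw device!).
fio --name=seqread --filename=/dev/sdX --rw=read --bs=1M \
    --iodepth=32 --numjobs=1 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```

Running the same job once against the raw backing store and once against the iSCSI-attached device isolates how much throughput the iSCSI layer itself costs.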

I should also note that I tested other iSCSI SDS products in the same environment. I don't think it's appropriate to name competing products on this forum, but the results I got with them were much better.

I have tried many times to discuss this problem with StarWind support specialists, but unfortunately met no understanding of, or concern about, the issue.
Sergey (staff)
Staff
Posts: 86
Joined: Mon Jul 17, 2017 4:12 pm

Tue Sep 26, 2017 10:10 am

Hello, softmaster! Thank you for your question.
As I understand it, you are using one standalone server, and StarWind is installed in a virtual machine inside your ESXi environment. Let me give you a few pieces of advice that should boost performance of the iSCSI-connected device.
1) Please make sure that the virtual disk attached to the VM with StarWind installed is Thick Provisioned, Eager Zeroed. We strongly recommend converting the .vmdk to eager-zeroed if required. You can do it according to this guide:
https://kb.vmware.com/selfservice/micro ... Id=1035823
2) Yes, you have very fast physical storage, but the virtual disk is connected over a network interface, which might be the bottleneck. It would be a good idea to measure the performance of the network interface used for the iSCSI connection; you can use iPerf for this. ~700 MB/s sequential write equals about ~5.5 Gb/s of network bandwidth.
3) Try re-running the test from 2 VMs (located on the iSCSI-connected virtual disk) simultaneously.
4) Change the Disk.DiskMaxIOSize value to 512 (the value is in KB), according to this guide: https://kb.vmware.com/selfservice/micro ... Id=1003469
5) When creating a StarWind device in the "Add Device" wizard, you can choose a block size of 512 or 4096 bytes. We recommend a 512-byte block size in an ESXi environment. You can check the "Virtual Disk Sector Size" value for existing StarWind virtual disks in the StarWind Management Console.
6) Please note that StarWind adds an additional virtualization layer, so a 10-15% performance drawback is considered acceptable.
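For reference, steps 1, 2 and 4 above can be sketched as commands. The datastore path and IP address are placeholders, vmkfstools and esxcli run in the ESXi shell, and iperf3 is one common choice of bandwidth tester (the original post says only "iPerf"):

```shell
# 1) Convert the StarWind VM's lazy-zeroed disk to eager-zeroed thick
#    (run on the ESXi host; the .vmdk path is hypothetical)
vmkfstools --eagerzero /vmfs/volumes/datastore1/starwind-vm/starwind-vm.vmdk

# 2) Measure raw bandwidth of the iSCSI network path with iPerf
iperf3 -s                                # on the StarWind VM
iperf3 -c <starwind-vm-ip> -P 4 -t 30    # on the initiator side

# 4) Lower the maximum I/O size the ESXi host issues (value is in KB)
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 512
```

Comparing the iperf3 result against the iSCSI throughput (multiply MB/s by 8 to get Mb/s) shows whether the network path, rather than StarWind, is the limiting factor.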