VMware Linux VSA

Software-based VM-centric and flash-friendly VM storage + free version


link855
Posts: 11
Joined: Thu Dec 10, 2015 1:08 pm

Fri Nov 16, 2018 5:40 pm

That's perfect, many thanks Oleg! 8)
Below the commands with fixed paths for my VSA version:
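# expose the host's /mnt and /media inside the StarWind storage directory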
ln -s /mnt /opt/StarWindVSA/drive_c/StarWind/storage/mnt
ln -s /media /opt/StarWindVSA/drive_c/StarWind/storage/media


Thanks for your support, regards!
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Sun Nov 18, 2018 6:52 am

Great to know your issue has been resolved.
KPS
Posts: 51
Joined: Thu Apr 13, 2017 7:37 am

Wed Nov 28, 2018 7:49 am

Hi!

Did you do any benchmarks (perhaps on the same hardware) to compare the performance of:

- Linux VSA
- StarWind installed on Windows

Regards
KPS
KPS
Posts: 51
Joined: Thu Apr 13, 2017 7:37 am

Thu Nov 29, 2018 9:30 am

Hi!

I did some benchmarks:

Server 1:
Client system, Windows Server 2016

Server 2:
StarWind as iSCSI server
6x Intel 4510 SSD, RAID 0

Connection: 2x 10 GbE with jumbo frames
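
As a side note for anyone reproducing this: jumbo frames have to be enabled end to end. For the ESXi-based tests below, a rough check from the ESXi shell and the Windows client could look like this (the vSwitch/vmkernel names and the target IP are placeholders, not taken from this setup):

# set MTU 9000 on the standard vSwitch and the vmkernel interface
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000
# verify from Windows: 8972 bytes payload + 28 bytes of ICMP/IP headers = 9000
ping -f -l 8972 <target-ip>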


Test 1:
Installed Hyper-V on Server 2.
Benchmark DiskSpd - 8k, 50% write --> 207,000 IOPS
Benchmark DiskSpd - 8k, 0% write --> 192,000 IOPS
Benchmark DiskSpd - 8k, 100% write --> 183,000 IOPS
Benchmark DiskSpd - 64k, 50% write --> 52,000 IOPS
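
For reference, the 8k / 50% write pattern above could be generated with a DiskSpd line like the one below; the thread count, queue depth, duration and target file are my assumptions, not necessarily the exact parameters used here:

# random 8 KiB I/O, 50% writes, 8 threads with 32 outstanding I/Os each,
# 60 s run against a 20 GiB test file, host caching disabled, latency stats on
diskspd.exe -b8K -r -w50 -t8 -o32 -d60 -c20G -Sh -L D:\testfile.dat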

Test 2:
Installed VMware ESXi 6.7 on Server 2 and the Linux VSA on it
Benchmark DiskSpd - 8k, 50% write --> 48,000 IOPS
Benchmark DiskSpd - 8k, 0% write --> 62,000 IOPS
Benchmark DiskSpd - 8k, 100% write --> 27,000 IOPS
Benchmark DiskSpd - 64k, 50% write --> 6,000 IOPS


Same settings in StarWind (no cache, as the array is faster than the cache).


The Linux VSA seems very nice, but VERY slow compared to the Windows version of StarWind.
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Thu Nov 29, 2018 10:01 am

Could you please give more details?
Did you connect the drives to the VSA as eager-zeroed VMDKs or as RDM/passthrough?
Please try RDM/passthrough.
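
(For reference, an eager-zeroed data VMDK can be created from the ESXi shell roughly like this; the size and paths are only examples, not taken from this setup:)

# create an eager-zeroed thick VMDK on an existing datastore
vmkfstools -c 900G -d eagerzeroedthick /vmfs/volumes/datastore1/vsa/data.vmdk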
KPS
Posts: 51
Joined: Thu Apr 13, 2017 7:37 am

Thu Nov 29, 2018 11:27 am

Hi!

I am using "eager zeroed".
Is it possible to use passthrough if the same RAID controller hosts both the boot drive and the data drive?
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Thu Nov 29, 2018 11:44 am

Yes, it is possible.
You can keep the VSA VM on the boot drive and attach the data drive to that VM via RDM.
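
In case it helps, a physical-mode RDM mapping can be created from the ESXi shell roughly as follows (the naa identifier and the paths are placeholders); the resulting mapping .vmdk is then attached to the VSA VM as an existing disk:

# list the raw devices to find the data drive's identifier
ls /vmfs/devices/disks/
# create a physical-mode RDM mapping file pointing at the raw device
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/vsa/data-rdm.vmdk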
KPS
Posts: 51
Joined: Thu Apr 13, 2017 7:37 am

Thu Nov 29, 2018 1:24 pm

Shame on me... The benchmarked vdisk was located on the boot RAID.

So here are the corrected benchmarks:

Test 1:
Installed Hyper-V on Server 2.
Benchmark DiskSpd - 8k, 50% write --> 207,000 IOPS
Benchmark DiskSpd - 8k, 0% write --> 192,000 IOPS
Benchmark DiskSpd - 8k, 100% write --> 183,000 IOPS
Benchmark DiskSpd - 64k, 50% write --> 52,000 IOPS

Test 2a:
Installed VMware ESXi 6.7 on Server 2 and the Linux VSA on it, via VMDK, eager zeroed
Benchmark DiskSpd - 8k, 50% write --> 94,000 IOPS
Benchmark DiskSpd - 8k, 0% write --> 141,000 IOPS
Benchmark DiskSpd - 8k, 100% write --> 59,000 IOPS
Benchmark DiskSpd - 64k, 50% write --> 29,000 IOPS


Test 2b:
Installed VMware ESXi 6.7 on Server 2 and the Linux VSA on it, via VMDK, lazy zeroed
Benchmark DiskSpd - 8k, 50% write --> 80,000 IOPS
Benchmark DiskSpd - 8k, 0% write --> 134,000 IOPS
Benchmark DiskSpd - 8k, 100% write --> 60,000 IOPS
Benchmark DiskSpd - 64k, 50% write --> 30,000 IOPS


Test 3:
Installed VMware ESXi 6.7 on Server 2 and the Linux VSA on it, via RDM
Benchmark DiskSpd - 8k, 50% write --> 87,000 IOPS
Benchmark DiskSpd - 8k, 0% write --> 153,000 IOPS
Benchmark DiskSpd - 8k, 100% write --> 62,000 IOPS
Benchmark DiskSpd - 64k, 50% write --> 44,000 IOPS

--> Write performance is much better with the Windows version than with the Linux VSA
KPS
Posts: 51
Joined: Thu Apr 13, 2017 7:37 am

Fri Nov 30, 2018 8:10 am

Hi!

I think the performance is sufficient for most cases, but with all-flash, the Windows version of StarWind seems to outperform the Linux VSA. Or did you get higher performance with the Linux VSA in your tests?
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Fri Nov 30, 2018 9:08 am

Actually, according to our tests, the performance should be the same.
Since you are running synthetic tests, you can try adding more CPU resources to the StarWind Linux VSA.
Also, as you are using an all-flash array, you can try the recommendations from this blog:
https://www.starwindsoftware.com/blog/s ... figuration
KPS
Posts: 51
Joined: Thu Apr 13, 2017 7:37 am

Fri Nov 30, 2018 9:25 am

Hi Oleg!

I just assigned more CPU cores to the VSA and disabled CPU power saving completely (a quick in-guest check is sketched below the numbers). Performance is better, but still not as good as with the Windows VSA:

Test 4:
Installed VMware ESXi 6.7 on Server 2 and the Linux VSA on it, via RDM, 8 CPU cores, no power saving
Benchmark DiskSpd - 8k, 50% write --> 99,000 IOPS
Benchmark DiskSpd - 8k, 0% write --> 144,000 IOPS
Benchmark DiskSpd - 8k, 100% write --> 76,000 IOPS
Benchmark DiskSpd - 64k, 50% write --> 36,000 IOPS
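
(A quick way to check whether the guest still sees frequency scaling, assuming the appliance exposes the standard sysfs files; in many VMs cpufreq is not exposed at all and ESXi controls the P-states itself:)

# "performance" (or no cpufreq directory) means no in-guest scaling
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor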

Perhaps I can do another test with PCI passthrough, but at the moment I can't, because the ESXi boot drive is on the same RAID controller as the data drive.


About the blog entry:
Do you really recommend not using a hardware RAID controller? My next step would be to pass the hardware RAID controller through to the VM...
KPS
Posts: 51
Joined: Thu Apr 13, 2017 7:37 am

Fri Nov 30, 2018 1:08 pm

Test 5:
Installed VMware ESXi 6.7 on Server 2 and the Linux VSA on it, with PCI passthrough of the RAID controller (LSI 3108)
Benchmark DiskSpd - 8k, 50% write --> 100,000 IOPS
Benchmark DiskSpd - 8k, 0% write --> 189,000 IOPS
Benchmark DiskSpd - 8k, 100% write --> 75,000 IOPS
Benchmark DiskSpd - 64k, 50% write --> 47,000 IOPS

--> Great read performance (as on Windows), but poor write performance (about -60%)
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Fri Nov 30, 2018 4:08 pm

Did you add the noatime mount option?
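
(For reference, that would be an fstab entry along these lines, or a live remount; the device and mount point are placeholders:)

# /etc/fstab entry for the storage filesystem, no access-time updates:
#   /dev/sdb1  /mnt/disk1  ext4  defaults,noatime  0  2
# or remount an already mounted filesystem in place:
mount -o remount,noatime /mnt/disk1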
KPS
Posts: 51
Joined: Thu Apr 13, 2017 7:37 am

Sat Dec 01, 2018 8:40 am

Hi!

Yes, it is mounted with the "noatime" option. The only difference from your manual is that I am using a RAID controller, not an HBA --> no Linux software RAID.

One other difference:
Although the cache is deactivated, the Linux VSA seems to need a warmup: if I start the benchmark multiple times, the first run is always slower than the rest. This is not the case with the Windows version.
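
If the warmup effect skews the numbers, DiskSpd can keep it out of the measured interval with -W; the other parameters are the same assumptions as in the sketch further up:

# 30 s of unmeasured warmup before the 60 s measured run
diskspd.exe -b8K -r -w50 -t8 -o32 -W30 -d60 -Sh -L D:\testfile.dat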
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Wed Dec 05, 2018 11:00 am

Hi!
Could you please submit a support case using this form?