Thoughts about LSFS

Software-based VM-centric and flash-friendly VM storage + free version


KPS
Posts: 51
Joined: Thu Apr 13, 2017 7:37 am

Sun Sep 17, 2017 7:34 am

Hi!

I am already using StarWind with thick-provisioned volumes, but I could not bring myself to use LSFS.
My reasons (compared to ZFS as my current alternative):

- How can I calculate how much storage I need to buy?
In ZFS, things are easy. Since deduplication is mostly a problem there, I leave it off and buy: StorageNeededForVMs * realisticCompressionFactor + space for snapshots
--> 3TB of VMs means buying 5TB of raw storage: the 3TB becomes about 2TB with compression (3TB at most), plus 2TB for snapshots and about 1TB reserved.

In LSFS there seems to be no upper limit for the 3TB of VMs. In your whitepapers, I found "about 2-3 times the space of the volume", depending on the change rate, but what happens if there is a lot of change, e.g. during a migration?
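To make the sizing question concrete, here is a minimal back-of-the-envelope sketch of the two estimates. The inputs (a ~1.5:1 compression ratio, the snapshot and reserve sizes, and the 2-3x LSFS multiplier) are assumptions taken from this thread, not vendor-published numbers:

```python
# Hypothetical sizing estimates, using only the figures discussed in this thread.

def zfs_raw_needed_tb(vm_tb, compression_ratio=1.5, snapshot_tb=2.0, reserve_tb=1.0):
    """Raw capacity for ZFS: compressed data is bounded above by its uncompressed size."""
    data_tb = min(vm_tb / compression_ratio, vm_tb)
    return data_tb + snapshot_tb + reserve_tb

def lsfs_raw_needed_tb(usable_tb, multiplier=3.0):
    """Raw capacity for LSFS, per the "about 2-3 times the volume" rule of thumb."""
    return usable_tb * multiplier

print(zfs_raw_needed_tb(3.0))   # 3TB of VMs -> 5TB raw, as in the example above
print(lsfs_raw_needed_tb(3.0))  # worst case with the 3x multiplier -> 9TB raw
```

The key difference the sketch highlights: the ZFS estimate has a hard upper bound (compression can never make data larger), while the LSFS estimate scales with a multiplier that depends on the change rate.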


Is there anything I did not completely understand? How do you handle these points?

Best wishes and thank you
KPS
Sergey (staff)
Staff
Posts: 86
Joined: Mon Jul 17, 2017 4:12 pm

Mon Sep 18, 2017 3:26 pm

Hello KPS! Thank you for your question. LSFS was developed specifically to get maximum performance out of cheap SATA spindles. If your environment has a lot of random writes, that is where you want to implement StarWind LSFS: it converts small random writes into big sequential ones. It is true that you need about 2-3 times the space of the volume, i.e. about 3TB of physical storage for a 1TB usable LSFS volume. Please check this FAQ page about LSFS:
https://knowledgebase.starwindsoftware. ... -lsfs-faq/
KPS
Posts: 51
Joined: Thu Apr 13, 2017 7:37 am

Tue Sep 19, 2017 8:22 pm

Hi!

I am aware of the FAQs and have read them many times, but I still do not really know how you size systems. Do you really always buy 3 times the needed space? Isn't ZFS better in that case? I can fill it up to 80% without a performance impact, and there is an upper limit.

I am just trying to understand the real use case. In my tests, LSFS performance was quite good, but if I need 3 times the space, I need to compare the performance of a small (SSD) ZFS system against that of a SATA system with LSFS.

Regards,
KPS
Sergey (staff)
Staff
Posts: 86
Joined: Mon Jul 17, 2017 4:12 pm

Wed Sep 20, 2017 4:40 pm

Yes, we recommend allocating 3TB (for example) if you wish to use 1TB of usable capacity on highly available storage. This ratio varies. Built-in inline deduplication will help you save space on the LSFS device: if you store, let's say, 10 identical files on HA storage, 9-10 times less space will be consumed with deduplication enabled. LSFS is very useful when you store many VMs on HA storage.
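As a toy illustration of that deduplication claim (assuming ideal block-level dedup of identical files; the function name and figures are illustrative, not StarWind's):

```python
# Toy model of inline deduplication: identical blocks are stored only once.

def space_consumed_gb(file_size_gb, copies, dedup_enabled):
    """Space used by `copies` identical files of `file_size_gb` each."""
    if dedup_enabled:
        # All copies share the same unique blocks, so only one instance
        # (ignoring metadata overhead) actually lands on disk.
        return file_size_gb
    return file_size_gb * copies

without_dedup = space_consumed_gb(100, copies=10, dedup_enabled=False)  # 1000 GB
with_dedup = space_consumed_gb(100, copies=10, dedup_enabled=True)      # 100 GB
print(without_dedup / with_dedup)  # ~10x savings, matching the "9-10 times" figure
```

In practice the savings are slightly below the ideal 10x because dedup metadata consumes some space, which is why the post says "9-10 times".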
LSFS is thin-provisioned. You have probably noticed that after you create an LSFS device, .spspx files appear. These are snapshot files, and new .spspx files will be created as you write to the HA storage.
Please note that if you run out of space for the new snapshots created by LSFS, your HA storage will operate in read-only mode.
Let me point you to this blog post, where the idea behind LSFS is described in detail: https://www.starwindsoftware.com/blog/l ... a-nutshell
Give it a test run for some time and I hope you will like the performance.