LSFS Production ready?

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

camealy
Posts: 77
Joined: Fri Sep 10, 2010 5:54 am

Tue Apr 28, 2015 11:51 am

L1 cache working great is news to me. It slows down our LSFS targets by a factor of 5-10 in the current build.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Fri May 01, 2015 11:23 am

I know that you are working with our engineer Oles on this. We will definitely address it, and if there is something wrong in our software, we will fix it ASAP.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
frufru
Posts: 19
Joined: Thu Aug 06, 2015 12:53 pm

Thu Aug 06, 2015 1:18 pm

Hi guys,

Does anyone still have problems with LSFS or L2 cache? I am considering deploying into a new production environment with approximately 200 Hyper-V VMs and 99.99% availability.

I have read about a lot of issues here on the forum, and I have experienced some of them in my lab.

Is there anybody running a 2-node 10G iSCSI setup with 20TB of LSFS, L2 cache, and 200 VMs in production?

Which way should I go, purely from a stability perspective: StarWind, or the "old way" (2x iSCSI storage, VSA, EqualLogic, etc.)?

Thank you for your thoughts!

FruFru
MichelZ
Posts: 34
Joined: Sun Mar 16, 2014 10:38 am

Thu Aug 06, 2015 1:19 pm

Old way...
frufru
Posts: 19
Joined: Thu Aug 06, 2015 12:53 pm

Thu Aug 06, 2015 1:54 pm

MichelZ wrote: Old way...
Would replacing LSFS with flat images change your opinion?
MichelZ
Posts: 34
Joined: Sun Mar 16, 2014 10:38 am

Thu Aug 06, 2015 2:01 pm

No

It's definitely more reliable than LSFS, but it has its quirks as well.
For example, I'm still having issues shutting down the service without killing it (it sits for a few minutes with no activity on disk or network), and things like that.

If you can choose between EqualLogic and StarWind, go EqualLogic, in my opinion.
Oles (staff)
Staff
Posts: 91
Joined: Fri Mar 20, 2015 10:58 am

Mon Aug 10, 2015 1:57 pm

Hello MichelZ,

I am not sure which issues you are talking about, since our flat *.img devices work perfectly. Please submit a support ticket if you have any issues, and we will gladly assist you.
mdcollins
Posts: 11
Joined: Fri Sep 11, 2015 3:00 am

Fri Sep 11, 2015 4:05 am

I just had something happen that has shaken my confidence in using LSFS in production. I rebooted our test server because our L2 cache wasn't performing as expected and we wanted to try different segment sizes, so we re-initialized and reformatted the SSD logical drive, which of course removed the cache files.

I wondered whether this would break any LSFS devices that used the cache in write-back mode (that was part of our test), in case not everything had been flushed from the cache. But I did not expect it to also break LSFS devices that used the L2 cache in write-through mode. As I understood it, an LSFS device using its L2 cache as write-through should be untouched by the cache disappearing, apart from a performance drop. Is that assumption correct? And is this an LSFS issue or an L2 cache issue?

I tried to re-attach the LSFS devices, and not a single one would load, regardless of whether its L2 cache was set to write-back or write-through; they all show a device type of "unknown" and a device state of "non-active". Am I looking at this wrong? If so, please explain how.
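To make sure I am not misusing the terms, here is the textbook behavior I am assuming, as a rough Python model. This is the generic write-through/write-back semantics only, not anything StarWind-specific:

# Minimal model of write-through vs. write-back caching. Textbook
# semantics only -- an assumption for illustration, not StarWind's
# actual implementation.
class Cache:
    def __init__(self, backing, write_back):
        self.backing = backing      # the underlying device, conceptually
        self.write_back = write_back
        self.lines = {}             # cached blocks
        self.dirty = set()          # blocks not yet flushed to backing

    def write(self, block, data):
        self.lines[block] = data
        if self.write_back:
            self.dirty.add(block)       # deferred: backing store is stale
        else:
            self.backing[block] = data  # write-through: backing updated now

    def lose_cache(self):
        """Simulate formatting the SSD that held the cache files."""
        lost = {b: self.lines[b] for b in self.dirty}
        self.lines.clear()
        self.dirty.clear()
        return lost                 # non-empty only for write-back

disk = {}
wt = Cache(disk, write_back=False)
wt.write(0, b"data")
assert wt.lose_cache() == {}            # write-through: nothing lost
wb = Cache(disk, write_back=True)
wb.write(1, b"data")
assert wb.lose_cache() == {1: b"data"}  # write-back: unflushed block gone

Under those semantics, losing a write-through cache should cost performance only, which is why the devices refusing to mount surprised me.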

Thank You
mdcollins
Posts: 11
Joined: Fri Sep 11, 2015 3:00 am

Fri Sep 11, 2015 5:13 am

OK, I am feeling a bit more confident about using LSFS. I was able to edit the SWDSK image header file for each device, remove the references to the L2 cache, and restart the StarWind service, and the LSFS device for each came up and passed the mounting process. So if you lose the L2 cache, whether intentionally, by accident, or through failure, you can recover your devices and data with some editing. It would be a nice feature upgrade for the management utility to read this file and let you edit it from the GUI.
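A rough sketch of that edit as a script, in case it helps anyone. It assumes the .swdsk header is plain XML and that the L2 cache shows up as an element literally named "cache"; the folder path and element name are guesses for illustration, so verify against your actual files and keep backups first:

# Hedged sketch: strip assumed L2 cache references from .swdsk headers
# before restarting the StarWind service. The path and the "cache"
# element name are hypothetical -- inspect your own headers first.
import glob
import shutil
import xml.etree.ElementTree as ET

for path in glob.glob(r"D:\StarWind\*.swdsk"):   # hypothetical device folder
    shutil.copy2(path, path + ".bak")            # always keep a backup
    tree = ET.parse(path)
    root = tree.getroot()
    for parent in list(root.iter()):             # snapshot before mutating
        for cache in parent.findall("cache"):    # hypothetical element name
            parent.remove(cache)
    tree.write(path, encoding="utf-8", xml_declaration=True)

In my case I made the changes by hand in a text editor; the script above is just the same idea automated.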

Thank You
camealy
Posts: 77
Joined: Fri Sep 10, 2010 5:54 am

Fri Sep 11, 2015 10:40 am

I have been waiting for caching fixes related to LSFS since April. Supposedly the fixes are coming in a build soon.
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Wed Sep 16, 2015 8:53 am

yeah, we are all waiting and hoping :D
Vladislav (Staff)
Staff
Posts: 180
Joined: Fri Feb 27, 2015 4:31 pm

Wed Sep 16, 2015 10:10 pm

We are really sorry about the delay in releasing the new build. Right now it is in the final stage of testing.
ulaaser
Posts: 4
Joined: Fri Sep 25, 2015 12:45 pm

Sun Sep 27, 2015 2:49 am

I'm in the HA design phase (hardware and software selection) for an important customer who wants to run SAP and Active Directory domain-related services on an HA system. The customer has only 25 users but is a subsidiary of a worldwide company.

While the hyperconverged solution seems very attractive and should give me advantages over "traditional" competitors, I'm very concerned about the stability of your solution. As far as I can see, there are customers who have been waiting months to see whether LSFS becomes stable.

I can test and trial the solution for about a month before putting it into production at the beginning of December 2015. Do you expect to have LSFS production ready by then, with the currently reported issues eliminated?
What about the overprovisioning issue? Can I expect a reduction to 100% or less in the next release before December?

How fast will the LSFS device grow? I am setting up a 2TB CSV with a 200GB NVMe SSD cache. I expect to use about 500GB of "flat" virtual disks in Hyper-V virtual machines initially, and then write only about 250MB per day (5-8GB per month). So after one year I would have written about 600GB to the CSV volume. Should I expect my CSV to be full after a year?
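To spell out my math (the 2-3x multipliers below are only an assumption for a log-structured layout's worst case before space is reclaimed, not a measured LSFS figure):

# Back-of-the-envelope footprint estimate for the scenario above.
# The overprovisioning multipliers are assumptions, not measured values.
initial_gb = 500            # initial flat virtual disks
daily_write_mb = 250        # steady-state daily writes
days = 365
csv_size_gb = 2000          # the 2TB CSV

written_gb = initial_gb + (daily_write_mb * days) / 1024
print(f"Logical data after 1 year: ~{written_gb:.0f} GB")

for multiplier in (1.0, 2.0, 3.0):   # assumed space amplification
    physical_gb = written_gb * multiplier
    verdict = "fits" if physical_gb <= csv_size_gb else "overflows"
    print(f"  x{multiplier:.0f}: ~{physical_gb:.0f} GB ({verdict} the 2TB CSV)")

Even at an assumed 3x amplification that comes to roughly 1.8TB, which still fits, but only barely, hence my next question about purging.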
Is there a way to purge or compact an LSFS device?
Can I extend/resize the CSV volume online or offline?

What about snapshots and backups? If I plan to back up 500GB daily, should I go with incremental or full backups? And in the case of a disaster restore from backup, is there a difference between the two types with regard to LSFS, i.e. which one will use less LSFS disk space?
nohope
Posts: 18
Joined: Tue Sep 29, 2015 8:26 am

Tue Oct 13, 2015 4:08 pm

Looking through your questions, I've found some that bother me as well. But is an LSFS-oriented approach really so crucial for you? Are you fully prepared to live with the overprovisioning restrictions? I'm not denying LSFS's advantages, and it obviously has excellent prospects; however, regular image files are still doing their job flawlessly for me. Their performance is pretty good as well, even without any caching enabled (RAID 10, 10k SAS). By the way, what kind of drives do you use? That will shed more light on whether LSFS makes sense in your scenario.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Fri Nov 13, 2015 3:54 pm

Hey guys!
Just want to let you know that we have updated the StarWind build on the website, and it should address the issues mentioned in this thread.
Please install it and give it a try.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com