Software-based VM-centric and flash-friendly VM storage + free version
Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)
-
camealy
- Posts: 77
- Joined: Fri Sep 10, 2010 5:54 am
Sun Mar 01, 2015 10:38 pm
We are redeploying our 4TB HA license as v8 from v6 on different hardware. In our testing, LSFS with deduplication had much greater performance than the existing installation. I have noticed some concerns on the forums about LSFS taking up too much disk space, taking forever to start a node after reboot (creating?), getting stuck on resync, etc. Having used StarWind since v4, I am wary of using new features early in their release cycle.
Do the benefits of LSFS come at the cost of reliability? Should we use dedup?
Thanks!
Kurt
-
fbifido
- Posts: 125
- Joined: Thu Sep 05, 2013 7:33 am
Mon Mar 02, 2015 4:33 pm
Can you give a little detail on the testing you did, including your hardware spec and setup?
Thanks.
-
camealy
- Posts: 77
- Joined: Fri Sep 10, 2010 5:54 am
Mon Mar 02, 2015 4:38 pm
Testing was two HP servers running 2012 R2 with multiple clustered LSFS dedup targets connected to Windows Hyper-V hosts, running a myriad of low-cluster-size random and large-cluster-size sequential IOMeter and ATTO tests.
-16 cores per host
-20GB of RAM per host
-RAID5 and RAID0 SSD (just to compare the two in performance)
-Two 10GbE crossover sync and heartbeat paths
-Two 10GBASE-T iSCSI paths through Nexus 5K and 2K fabric
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
-
Contact:
Fri Mar 06, 2015 4:13 pm
Quick double-check: you are running all-flash, correct?
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
camealy
- Posts: 77
- Joined: Fri Sep 10, 2010 5:54 am
Fri Mar 06, 2015 4:37 pm
RAID5 is spinning disk, RAID0 is flash. We were going to put the LSFS targets on the spinning disk and standard thick images on the SSDs.
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
-
Contact:
Tue Mar 10, 2015 5:57 pm
That sounds like a good plan!
You may actually want to consider using part of the flash as L2 cache.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
camealy
- Posts: 77
- Joined: Fri Sep 10, 2010 5:54 am
Tue Mar 10, 2015 6:40 pm
Anatoly (staff) wrote:That sounds like a good plan!
You may consider using part of Flash as L2 cache actually.
As part of the spinning disk LSFS targets? If so, how much?
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
-
Contact:
Tue Mar 10, 2015 7:19 pm
Usually we recommend that the L2 flash cache size be ~10% of the HA storage size (i.e. 100 GB per 1 TB).
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
camealy
- Posts: 77
- Joined: Fri Sep 10, 2010 5:54 am
Tue Mar 10, 2015 9:18 pm
Do you recommend Write-Back or Write-Through for the L2 SSD cache? The SSDs are Samsung 845DC EVOs, rated at 30% of total drive size in writes per day.
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
-
Contact:
Thu Mar 12, 2015 4:22 pm
We recommend Write-Through mode for the L2 cache.
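For anyone wondering why Write-Through is the safer choice here: in that mode every write is committed to the backing store before (or as) it populates the cache, so the cache never holds dirty data and a failed or replaced SSD cannot lose writes. A minimal generic sketch of the idea (not StarWind's actual implementation; the dict-backed store is purely illustrative):

```python
class WriteThroughCache:
    """Toy write-through cache: every write hits the backing store
    immediately, so the cache never holds data the store lacks."""

    def __init__(self, backing_store):
        self.store = backing_store  # stands in for the underlying disk
        self.cache = {}             # stands in for the fast SSD tier

    def write(self, key, value):
        self.store[key] = value  # persist to the backing store first
        self.cache[key] = value  # then populate the cache copy

    def read(self, key):
        if key in self.cache:        # cache hit: served from the fast tier
            return self.cache[key]
        value = self.store[key]      # cache miss: fetch from backing store
        self.cache[key] = value      # keep a copy for future reads
        return value


disk = {}
l2 = WriteThroughCache(disk)
l2.write("block42", b"data")
# The write is already on the backing store, so losing the
# cache at this point loses nothing:
print(disk["block42"])  # b'data'
print(l2.read("block42"))  # b'data' (served from cache)
```

A write-back cache would instead acknowledge the write after updating only the cache and flush to disk later, which is faster for writes but means unflushed data lives solely on the SSD until destaged.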
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
-
Contact:
Tue Mar 17, 2015 7:03 pm
Well, it works using the LRU (Least Recently Used) algorithm, which basically discards the least recently used items first.
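The LRU policy described above can be sketched in a few lines (a generic illustration, not StarWind's code): entries are kept in recency order, a hit moves the entry to the "most recent" end, and on overflow the entry at the "least recent" end is discarded.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: on overflow, evict the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # ordered oldest (least recent) first

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # discard least recently used


cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touching "a" makes it the most recently used
cache.put("c", 3)      # over capacity: "b" (least recently used) is evicted
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```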
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
camealy
- Posts: 77
- Joined: Fri Sep 10, 2010 5:54 am
Tue Mar 31, 2015 8:31 pm
The build posted today mentions issues with L2. Should I stay away from it for now? Also, is dedup safe now?
-
camealy
- Posts: 77
- Joined: Fri Sep 10, 2010 5:54 am
Wed Apr 01, 2015 2:45 pm
Upgraded to the latest version and created a new LSFS dedup HA target. Write performance went down to 200-300 IOPS for a 4K random/sequential write mix, and 64K pure sequential write performance dropped to 10 MB/s. Those are unusable numbers.
Read performance is still 10,000 IOPS for the 4K mix and 1700 MB/s for 64K sequential reads.
Is this a result of LSFS changes or is something wrong with my deployment?