Setup Questions / Best Practices - All flash, LSFS + Dedup

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Anatoly (staff), Max (staff)

AndrewBucklin
Posts: 4
Joined: Thu Jan 05, 2017 4:16 am

Thu Jan 05, 2017 4:54 am

As they say, "long time listener, first time caller". I stumbled upon StarWind Virtual SAN some time ago and had the pleasure of meeting you guys at the Microsoft Ignite event, but now I'm getting serious and starting to put something together and want to make sure I understand everything before I go out and start recommending the software and selling/supporting it in production environments.

Quick summary: I have a two-node hyperconverged cluster (just two HP servers with local storage running Microsoft Windows Server 2016). All of the local storage is SSDs. Here's a summary of my setup notes:
1) RAID setup: 2-drive RAID1 for the OS. 6-drive RAID5 and 8-drive RAID5. All drives are SSDs. 64KiB stripe, 32 sectors per track, caching enabled.
2) Servers have a total of 8 gigabit Ethernet ports (4 onboard and 4 on an add-in card). I'm using them as follows by connecting a short Ethernet cable directly between each server (static IPs; not going through switch):
- Onboard #3: iSCSI and Heartbeat
- Onboard #4: Sync and Heartbeat
- Addin card #7: iSCSI and Heartbeat
- Addin card #8: Sync and Heartbeat
3) StarWind vSAN installed and SMI-S as well as both hardware and software VSS modules installed.
4) StarWind.cfg file modified to allow Loopback accelerator driver usage
5) The two RAID5 arrays formatted as GPT NTFS with 4096 sector size via Disk Management utility.
6) Multi-path for iSCSI enabled
7) Creating volumes in StarWind:
- Create a new virtual disk on one of the RAID5 arrays mentioned earlier.
- Select LSFS with deduplication and 4096 bytes sector size.
- Select "write-back" and enter 1GB cache size (I have 96GB of RAM in each server, so I could even increase this number if you think it could help?).
- Select "No Flash Cache" (my environment will not be extremely write intensive, so I don't think I need this, especially with all-flash RAID5).
- Name it "CSV1" and allow multiple concurrent iSCSI connections. Add two-way replica to other server.
- Don't use LSFS RAM Disk option (can I use this? what will it benefit? maybe it will help extend life of SSDs? I have a lot of RAM to spare.)
- Configure network settings checkboxes as mentioned in step 2 above and also enable additional heartbeat on the NIC that is connected to the physical network switch/router.
- Follow same steps to create "Witness" disk that is 1GB in size, Thick Provisioned, 4096 block size, Write-back cache at 128MB, and no flash cache. Also allow multiple iSCSI connections then create two-way replica to other server.
8) Configure iSCSI on both servers (I am pretty sure I did this correctly, so I won't detail it here).
9) In Disk Management utility on first server, bring CSV1 and Witness "online" and initialize using GPT. On CSV1, format as ReFS with 4096 allocation unit size. This format takes about 14 minutes to complete even with 'quick format' checked. On Witness, format as NTFS with 512 allocation unit size.
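For reference, step 9 can also be scripted instead of using the Disk Management GUI. Here is a minimal PowerShell sketch of the same sequence (the disk numbers 2 and 3 are assumptions — check the Get-Disk output on your own hosts first):

```powershell
# List disks to find the StarWind-presented LUNs (numbers below are assumptions)
Get-Disk | Format-Table Number, FriendlyName, Size, PartitionStyle

# CSV1: bring online, initialize as GPT, format as ReFS with 4K allocation unit size
Set-Disk -Number 2 -IsOffline $false
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize |
    Format-Volume -FileSystem ReFS -AllocationUnitSize 4096 -NewFileSystemLabel "CSV1"

# Witness: same steps, NTFS
Set-Disk -Number 3 -IsOffline $false
Initialize-Disk -Number 3 -PartitionStyle GPT
New-Partition -DiskNumber 3 -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Witness"
```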

That's about as far as I've gotten so far and I appreciate you taking the time to read through it. I just want to make sure I'm on the right path but more specifically, I want to make sure I'm using the best settings and parameters for the environment. Additionally, I'm open to ideas for improvement or pointers about things I might be missing or misunderstanding. Basically, I'm ready to learn! Haha. Thanks again; you guys rock!

P.S. When creating a HyperV VM, should I use Dynamically Expanding disk or Fixed size disk? Does it matter since LSFS is doing deduplication?
AndrewBucklin
Posts: 4
Joined: Thu Jan 05, 2017 4:16 am

Mon Jan 09, 2017 9:42 pm

No comment? Maybe either I am doing everything 100% correct (not likely) or I'm a lost cause because I'm completely wrong in every aspect. :cry:
Michael (staff)
Staff
Posts: 319
Joined: Thu Jul 21, 2016 10:16 am

Tue Jan 10, 2017 11:09 am

Hello Andrew,
Thank you for your interest in the StarWind solution.
You can follow the step-by-step guide here: https://www.starwindsoftware.com/starwi ... -v-cluster to configure StarWind in a hyperconverged 2-node scenario.
In your current configuration, it looks like the NICs will be the bottleneck for whole-system performance, because the SSDs should be faster than 2 Gbps. Thus I would recommend dedicating three or four NICs on each server to Sync and one or two to iSCSI.
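To put rough numbers on that bottleneck, here is a back-of-the-envelope sketch in Python (the ~500 MB/s per-SSD sequential figure is an assumption for illustration, not a measurement of Andrew's drives):

```python
# Back-of-the-envelope: aggregate sync NIC bandwidth vs. raw SSD array bandwidth.
GBIT_MB = 1000 / 8                     # one gigabit per second, expressed in MB/s

sync_links = 2                         # two 1 GbE NICs currently assigned to Sync
sync_bw_mb = sync_links * GBIT_MB      # 250 MB/s aggregate sync bandwidth

ssd_count = 6                          # the smaller RAID5 array
ssd_mb_each = 500                      # assumed sequential MB/s per SATA SSD
array_bw_mb = ssd_count * ssd_mb_each  # 3000 MB/s raw array bandwidth

print(sync_bw_mb, array_bw_mb)         # the sync links are ~12x slower than the array
```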
Also, I would recommend reading this post https://www.starwindsoftware.com/blog/l ... a-nutshell and a couple of KB articles about LSFS to think through the LSFS pros and cons:
https://knowledgebase.starwindsoftware. ... scription/
https://knowledgebase.starwindsoftware. ... -lsfs-faq/
In most cases, even for all-flash environments, we use Thick Provisioned StarWind devices, connect them in the iSCSI Initiator, and format them with NTFS (allocation unit size depends on your needs), so they can then be used by the cluster.
As for the LSFS RAM disk option: it is an experimental feature where the LSFS device is located entirely in RAM. It can be used for temporary/non-critical data.
As for the Hyper-V VM, it depends on your needs, but Microsoft recommends using fixed-size disks in production.
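A fixed-size disk can be created ahead of time with the Hyper-V PowerShell module; a minimal sketch (the path, file name, and size are assumptions):

```powershell
# Create a fixed-size VHDX on the CSV (path and size are placeholders for illustration)
New-VHD -Path "C:\ClusterStorage\CSV1\vm01.vhdx" -SizeBytes 60GB -Fixed
```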
RobertFranz
Posts: 1
Joined: Sun Jul 16, 2017 1:21 am

Sun Jul 16, 2017 1:56 am

I would strongly advise 10G NICs, especially if you are going peer to peer and avoiding the expense of a switch.

Used 10G NICs are available in great abundance through the secondary channel.
My first 10G NICs were freebies included with some cheap used servers I picked up.

10G isn't strictly required by StarWind, especially if you follow the standard model of each server accessing only local storage in normal operation.

But given how cheap they are now, you get an awful lot of flexibility for little cash outlay.

And they make recovery from failover a much less pressing need.

One thing I rarely see mentioned is processor speed.

Most virtualized systems do have plenty of leftover CPU cycles, which, combined with the recommendations of many pundits, has led people to believe it's all about the core count.

I will happily trade cores for a frequency optimized processor.

Where this is relevant here is that the acceleration you'll see from RAM cache is very dependent on processor speed, and eleventy-billion cores do not level the playing field.

My lowly 8-year-old i5 utterly crushes my 2.4 GHz dual-socket E5 servers. With the efficient use StarWind makes of RAM, an i7-2700K cranked to 5 GHz will yield acceleration that will make you weep.

And it's one of the few use cases I know of where spending a little extra for fast ram actually delivers returns that justify the expense.

My only problem is that I really don't have a business case for the performance levels I'm getting.

That doesn't mean I'm not going to build some insanely performant arrays.

Just don't try this with DL380 G7s.

The PCIe bus can't handle the throughput, though it's perversely satisfying to make them cry.

And there is the danger of being consumed with the desire for ever increasing benchmarks.

With Great IOPS comes Great Responsibility
Sergey (staff)
Staff
Posts: 86
Joined: Mon Jul 17, 2017 4:12 pm

Mon Jul 17, 2017 4:33 pm

Andrew, thank you for the detailed post. It will definitely help all StarWind Virtual SAN users.
Just a quick note about CPU. According to StarWind Best Practices, a multi-core CPU with lower-frequency cores is preferred over one with higher core frequency but fewer cores/sockets. E.g., having a six-core Intel Xeon 5660 at 2.8 GHz per core is better than having a four-core Intel Xeon 5667 at 3.02 GHz per core.

If implementing StarWind High-Speed caching, an appropriate amount of additional RAM should be installed. The cache-dedicated RAM amount should be equal to or higher than the amount reserved for iSCSI target caching. 4.6 GB of RAM is required per 1 TB of initial LSFS size (with deduplication disabled), and 7.6 GB of RAM per 1 TB of initial LSFS size (with deduplication enabled). Important notice: always reserve at least 4 GB of RAM for Windows internal processes and the StarWind Virtual SAN engine.
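The sizing rule above is simple arithmetic; a minimal Python sketch using the per-TB figures quoted in this post (the 10 TB example size and 1 GB cache are my own assumptions for illustration):

```python
def lsfs_ram_gb(lsfs_size_tb: float, dedup: bool, cache_gb: float = 0.0) -> float:
    """Estimate RAM needed for an LSFS device, per the figures in this thread:
    4.6 GB per TB without deduplication, 7.6 GB per TB with it, plus a 4 GB
    reserve for Windows and the StarWind engine, plus any write-back cache."""
    per_tb = 7.6 if dedup else 4.6
    return lsfs_size_tb * per_tb + 4.0 + cache_gb

# Example: a 10 TB LSFS device (assumed size), dedup enabled, 1 GB write-back cache.
print(lsfs_ram_gb(10, dedup=True, cache_gb=1.0))  # 7.6*10 + 4 + 1 = 81.0
```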
Ivan (staff)
Staff
Posts: 172
Joined: Thu Mar 09, 2017 6:30 pm

Wed Jul 19, 2017 8:06 pm

Hello RobertFranz,
Thanks for your interest in the StarWind solution.
I am pretty sure your piece of advice will be more than helpful for all StarWind users. Thank you!

Dear SergeySanduliak,
Thanks for your input.
You are absolutely correct, thank you!