Storage design considerations

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Anatoly (staff), Max (staff)

xy67
Posts: 27
Joined: Tue Jan 05, 2016 9:21 pm

Sat Jul 01, 2017 8:28 pm

Hi All

Been using StarWind Virtual SAN for 9 months in my lab and am really happy, so thank you :D

I'm going to be doing some changes to my storage shortly and wanted to ask a few design questions before proceeding. Currently my StarWind Virtual SAN server has 32GB of RAM and an LSI 2308 controller with 4 Samsung SM863 480GB SATA Enterprise SSD drives connected to the SAS ports on the motherboard. The drives are formatted with NTFS and I currently use no RAID. The LSI 2308 is running in IT mode. Current RAM usage is about 22GB for the StarWind process with dedupe and LSFS.

Moving forward I would like to do the following (possibly!):

1) Purchase 4 more Samsung SM863a 480GB SATA Enterprise SSD drives (so I'll have 8 SSD drives for storage)
2) Flash my LSI 2308 with IR firmware (to give me hardware RAID again)
3) Create one RAID10 volume on the LSI 2308 with the 8 SSD drives
4) Format the volume with ReFS (the server has Windows Server 2016)
5) Use LSFS in StarWind Virtual SAN with write-back cache enabled (though I'm not sure how big the cache should be?)
6) Use dedupe in StarWind

So is this a good approach? Will 32GB of memory be enough for write cache, dedupe and LSFS? Is RAID 10 a good way to go? This volume will be used to store my vSphere 6.5 VMs, and I have 30 VMs so far. Currently each drive is its own datastore (ie: no RAID), so I'm assuming that with 8 drives in a RAID 10 setup the IOPS/speeds will be better than running a few VMs on a single drive?

Thanks for reading and any input.
Ivan (staff)
Staff
Posts: 172
Joined: Thu Mar 09, 2017 6:30 pm

Mon Jul 03, 2017 7:33 pm

Hello xy67,
In your case I would recommend using RAID-5 instead of RAID-10, but if capacity is not important to you, then yes, RAID-10 is the best choice.
If you want to use LSFS with deduplication, you need to calculate the total RAM required for the LSFS device. For example:
Let's imagine you have a 10 TB StarWind LSFS device with deduplication enabled. RAM needed: 7.6 GB for each 1 TB of LSFS, so 76 GB of free RAM in total (for dedupe alone). The recommended L1 cache size is 1 GB per 1 TB, which means we need 10 GB of RAM for the L1 cache of a 10 TB LSFS device. So, in total, we need 86 GB of free RAM for a 10 TB LSFS device.
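As a quick sketch of that rule of thumb in Python (just the arithmetic above, not an official sizing tool):

    # Rule-of-thumb RAM sizing for an LSFS device with dedupe enabled:
    # 7.6 GB of RAM per 1 TB of LSFS for dedupe, plus 1 GB per 1 TB for L1 cache.
    DEDUPE_GB_PER_TB = 7.6
    L1_CACHE_GB_PER_TB = 1.0

    def lsfs_ram_gb(device_size_tb: float) -> float:
        """Approximate free RAM (GB) needed for an LSFS device of the given size."""
        return device_size_tb * (DEDUPE_GB_PER_TB + L1_CACHE_GB_PER_TB)

    print(lsfs_ram_gb(10.0))  # 10 TB device -> 86.0 GB of free RAM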
You can read more about LSFS at the links below:
https://knowledgebase.starwindsoftware. ... scription/
https://knowledgebase.starwindsoftware. ... -lsfs-faq/
The IOPS and speed will be better with RAID, of course.
The rest of your plan looks good.
xy67
Posts: 27
Joined: Tue Jan 05, 2016 9:21 pm

Mon Jul 03, 2017 7:48 pm

Hi Ivan

I'd like to avoid RAID5, so I'll be using RAID10 for the best performance. Capacity isn't a concern (especially using dedupe).

So if I have 8 x 480GB drives (447GB usable space each), then in RAID10 I would have 1788GB (8*447/2). So in StarWind, using LSFS and dedupe, I'd need about 14GB of RAM, plus about 2GB of RAM for the L1 cache. Is this correct?
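To sanity-check my numbers, here's the rough arithmetic in Python (the drive and RAID figures are just from my own setup):

    drives = 8
    usable_gb = 447                      # per 480 GB SSD after formatting
    raid10_gb = drives * usable_gb / 2   # mirrored pairs -> 1788 GB
    size_tb = raid10_gb / 1024           # ~1.75 TB
    print(size_tb * 7.6)                 # dedupe RAM: ~13.3 GB (~14 GB)
    print(size_tb * 1.0)                 # L1 cache:   ~1.75 GB (~2 GB)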

Can you comment on ReFS as well, please? Should (or can) I use ReFS with my LSFS (dedupe-enabled) storage in StarWind? Or should I stick with NTFS?

Also, if I use over-provisioning, do I work out the RAM requirements using the over-provisioned size of the volume or the actual size of the disk? ie: if I have a 500GB drive but have over-provisioned it to be 1TB, do I calculate the RAM requirement using the 500GB or the 1TB size?

Thank you!
Ivan (staff)
Staff
Posts: 172
Joined: Thu Mar 09, 2017 6:30 pm

Wed Jul 05, 2017 4:55 pm

Hello xy67,
Regarding ReFS:
Of course, you can use ReFS instead of NTFS. However, ReFS is a new file system and can be "unstable". In your case, I would recommend sticking with NTFS.
If the total size of your underlying storage is 1788 GB, we can calculate the maximum recommended size of the LSFS device: 1788 / 3 = 596 GB (because of 3x over-provisioning). Since 596 GB is roughly 60% of 1 TB, we need RAM for dedupe of about 60% of 7.6 GB = 4.56 GB, and for the L1 cache 1 GB (or, strictly, 596 MB) of RAM. In total we need 4.56 GB + 1 GB = 5.56 GB of RAM for your 596 GB device (or 4.56 GB + 596 MB = 5.15 GB).
We need to calculate everything from the actual device size shown in the StarWind Management Console.
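Here is that calculation as a rough sketch, using the same per-TB figures as above:

    underlying_gb = 1788
    lsfs_gb = underlying_gb / 3         # max recommended LSFS size: 596 GB
    lsfs_tb = lsfs_gb / 1024            # ~0.58 TB
    dedupe_ram_gb = lsfs_tb * 7.6       # ~4.4 GB (~4.56 GB with the rounded 60% figure)
    l1_cache_gb = lsfs_tb * 1.0         # ~0.58 GB (596 MB)
    print(dedupe_ram_gb + l1_cache_gb)  # ~5 GB of free RAM in total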
xy67
Posts: 27
Joined: Tue Jan 05, 2016 9:21 pm

Wed Jul 05, 2017 7:12 pm

Ivan (staff) wrote:Hello xy67,
Regarding ReFS:
Of course, you can use ReFS instead of NTFS. However, ReFS is a new file system and can be "unstable". In your case, I would recommend sticking with NTFS.
If the total size of your underlying storage is 1788 GB, we can calculate the maximum recommended size of the LSFS device: 1788 / 3 = 596 GB (because of 3x over-provisioning). Since 596 GB is roughly 60% of 1 TB, we need RAM for dedupe of about 60% of 7.6 GB = 4.56 GB, and for the L1 cache 1 GB (or, strictly, 596 MB) of RAM. In total we need 4.56 GB + 1 GB = 5.56 GB of RAM for your 596 GB device (or 4.56 GB + 596 MB = 5.15 GB).
We need to calculate everything from the actual device size shown in the StarWind Management Console.
Thanks for the comment on ReFS.

So after creating my RAID10 volume I'll have 1788GB of usable disk space. When it comes to creating the LSFS device, do I create it with a size of 596GB or 1788GB? I'm a bit confused by the over-provisioning. I actually want vSphere to "think" it has a 4TB volume presented to it.
Ivan (staff)
Staff
Posts: 172
Joined: Thu Mar 09, 2017 6:30 pm

Wed Jul 05, 2017 7:48 pm

Dear xy67,
Thanks for your reply.
Over-provisioning is 200% (LSFS files can occupy 3 times more space than the initial LSFS device size), so to avoid any trouble in the future we need to create a 596 GB LSFS device.
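In other words, as a quick sketch:

    underlying_gb = 1788
    growth_factor = 3                     # LSFS files can grow to ~3x the nominal device size
    print(underlying_gb / growth_factor)  # max safe nominal size: 596 GB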
I am not sure I understand you properly. How do you want to present 1.7 TB of storage (if you use flat .img files) as 4 TB of storage to vSphere?
xy67
Posts: 27
Joined: Tue Jan 05, 2016 9:21 pm

Thu Jul 06, 2017 6:45 pm

Ivan (staff) wrote:Dear xy67,
Thanks for your reply.
Over-provisioning is 200% (LSFS files can occupy 3 times more space than the initial LSFS device size), so to avoid any trouble in the future we need to create a 596 GB LSFS device.
I am not sure I understand you properly. How do you want to present 1.7 TB of storage (if you use flat .img files) as 4 TB of storage to vSphere?
Thanks again for the reply.

Maybe I'm not explaining myself correctly and/or am not understanding something, so let me use an example.

Currently in my StarWind Virtual SAN setup I have 4 x 480GB SSD drives used as shared storage for vSphere, but let's keep things simple and focus on a single SSD drive (I use no RAID currently).

On the StarWind server I have 447GB of usable/formatted disk space on the SSD. In the StarWind console, when I create an LSFS device with dedupe enabled, I create it with a size of 1TB (1024GB to be exact) and then set the cache size to 1GB. Then when I create a datastore in vSphere, it sees the datastore on the StarWind server as being 1TB in size. Do you see what I mean? That way I can over-provision my VMs. ie: I can create 25 x 40GB thick-provisioned VMs in vSphere, but on the StarWind side they may only use 150GB of REAL disk space on the SSD due to dedupe.

Currently I have presented each of my 4 x 480GB SSD drives as a 1TB drive in vSphere, for a total size of 4TB, and am using 2TB according to vCenter. On the StarWind server, if I look in File Explorer, the 4 SSD drives are using only 600GB of real disk space, so StarWind is managing this really well with dedupe.
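In rough numbers (these figures are just what vCenter and File Explorer report on my setup):

    presented_gb = 4 * 1024                  # four SSDs, each presented as 1 TB
    provisioned_gb = 2048                    # ~2 TB used according to vCenter
    real_on_disk_gb = 600                    # actual space used on the SSDs
    print(provisioned_gb / real_on_disk_gb)  # ~3.4x effective dedupe savings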

Hope I'm making sense here. So I'm hoping you can explain how I should do this when I have a RAID10 setup? Should I be doing it differently to the above?
Ivan (staff)
Staff
Posts: 172
Joined: Thu Mar 09, 2017 6:30 pm

Fri Jul 07, 2017 7:15 pm

Dear xy67,
Thanks for your reply and for the explanation.
Yes, now I understand :) In this case you can do it, but I would not recommend this scenario, because of the risk of the LSFS device going into "read-only" mode when the underlying storage fills up.
If the available disk space on the physical storage is low, LSFS uses a more aggressive defragmentation policy and access speed slows down for the end user. If there is no space left on the physical drive, the LSFS device becomes 'read-only'.
So, in this case, we need to recalculate the LSFS RAM requirements according to the device's NOMINAL size: a 4 TB LSFS device will require 30.4 GB of free RAM for deduplication and 4 GB for the L1 cache, 34.4 GB of free RAM in total.
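Using the same per-TB figures from earlier in the thread, as a sketch:

    nominal_tb = 4
    dedupe_ram_gb = nominal_tb * 7.6    # 30.4 GB
    l1_cache_gb = nominal_tb * 1.0      # 4.0 GB
    print(dedupe_ram_gb + l1_cache_gb)  # 34.4 GB of free RAM in total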
Do not hesitate to ask me any other questions.
Thanks!