
VirtualSAN Raid Controller best practices

Posted: Tue Apr 21, 2015 9:58 pm
by eblend
Hello,

I am setting up a small lab environment using Virtual SAN v8 on an older Dell 2950 with a Perc 5i, to be used as my iSCSI target. I will be using the new LSFS file system with L2 cache on an SSD, with deduplication, thin provisioned.

The server will present iSCSI to an ESXi host with various VM workloads.

My 2950 has a Perc 5i controller and 6 x 450GB SAS drives. I have read about the benefits of LSFS and am struggling to understand which RAID controller settings would give the best performance.

I am thinking of running RAID5 on the controller. Write-back cache appears to be a good option for the write cache, but I am not sure about the read cache. Does LSFS take advantage of read-ahead cache or not? My options are read ahead, adaptive read ahead, and disabled.

Also, there is mention of stripe sector size, with options from 8k up to 128k. Reading online, it seems the LSFS dedupe block size is 4k, so I am not sure what I should set this to. Any recommendations?

Thanks for the help.

Re: VirtualSAN Raid Controller best practices

Posted: Thu Apr 23, 2015 1:04 pm
by Oles (staff)
Hello Eblend!

Speaking of stripe size: the performance of a RAID array depends directly on the stripe size used. There is no exact recommendation on which value to pick; it is a test-based choice. As a best practice, first set the value recommended by the vendor and run tests. Then set a bigger value and run the tests again. In the third step, set a smaller value and test once more. These three results should point you to the optimal stripe size. In some configurations a smaller stripe size such as 4k or 8k gives better performance, while in other cases 64k, 128k or even 256k will perform better.
Performance of the HA device will depend on the performance of the underlying RAID array, so it is up to the user to determine the optimal stripe size. A rough sketch of how such a comparison could be scripted is shown below.
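For example, just to illustrate the "change one setting, rerun the same workload, compare" idea (not a replacement for a proper benchmark tool such as diskspd or Iometer; the path, file size and block size below are placeholders):

import os
import random
import time

TEST_FILE = r"D:\stripe_test.bin"   # placeholder path on a volume backed by the array under test
FILE_SIZE = 256 * 1024 * 1024       # 256 MB test file
BLOCK_SIZE = 64 * 1024              # block size of the workload you care about

def random_write_pass() -> float:
    """Run one pass of block-aligned random writes and return writes per second."""
    blocks = FILE_SIZE // BLOCK_SIZE
    data = os.urandom(BLOCK_SIZE)
    with open(TEST_FILE, "r+b", buffering=0) as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.seek(random.randrange(blocks) * BLOCK_SIZE)
            f.write(data)
        os.fsync(f.fileno())            # flush OS buffers so the array actually sees the writes
        elapsed = time.perf_counter() - start
    return blocks / elapsed

if __name__ == "__main__":
    # Create (or reset) the test file once, then run a few passes.
    with open(TEST_FILE, "wb") as f:
        f.truncate(FILE_SIZE)
    results = [random_write_pass() for _ in range(3)]
    print(f"average: {sum(results) / len(results):.0f} writes/s over {len(results)} passes")

Note that the operating system cache still buffers these writes, so treat the numbers as relative (same script, different stripe size) rather than absolute.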
Every device takes advantage of cache. Our company does not have an official recommendation on read-ahead cache in the case of LSFS, but my personal recommendation would be to set it to "read ahead".

Please note that (a quick sizing example for your setup follows the list):
1) LSFS requires 3 times more space. For example, a 1TB device requires 3TB of underlying space.
2) An LSFS device with dedup disabled requires 4GB of RAM per 1TB.
3) An LSFS device with dedup enabled requires 7GB of RAM per 1TB.
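To put those numbers against the hardware described above (6 x 450 GB SAS behind a Perc 5i, RAID5), a quick back-of-the-envelope calculation might look like this. Treat it as illustrative only, since it ignores formatting overhead and simply applies the rules of thumb from this post:

DRIVES = 6
DRIVE_TB = 0.45                        # 450 GB per drive
usable_tb = (DRIVES - 1) * DRIVE_TB    # RAID5 loses one drive to parity: ~2.25 TB usable
lsfs_device_tb = usable_tb / 3         # rule 1: LSFS wants ~3x underlying space per provisioned TB
ram_dedup_gb = lsfs_device_tb * 7      # rule 3: 7 GB RAM per TB with dedup enabled
ram_nodedup_gb = lsfs_device_tb * 4    # rule 2: 4 GB RAM per TB with dedup disabled

print(f"usable RAID5 capacity:          {usable_tb:.2f} TB")
print(f"largest LSFS device that fits:  {lsfs_device_tb:.2f} TB")
print(f"RAM needed with dedup enabled:  {ram_dedup_gb:.1f} GB")
print(f"RAM needed with dedup disabled: {ram_nodedup_gb:.1f} GB")

So a roughly 0.75 TB LSFS device would fit on that array and would want about 5 GB of RAM with deduplication enabled.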