Software-based VM-centric and flash-friendly VM storage + free version
-
gorb
- Posts: 9
- Joined: Thu May 07, 2015 11:16 pm
Thu May 07, 2015 11:23 pm
I am in a tight situation. I need to provide 60TB of storage and I only have a chassis with 36x 3TB disks. I would prefer to do RAID 10, but I won't have enough space for the data. We will be using Windows Server 2012 R2 and hope to use SMB 3.0. Should I go with Storage Spaces or RAID 6, or even wait until I have money available to buy JBODs and do it right? I need about 6k IOPS. Additionally, I have 192GB of RAM available in each host. What is the best way to use that RAM? Can I create a RAM disk for L2 cache? Should I buy some SSDs instead? Thanks for your help.
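For reference, a quick sketch of the capacity math behind this squeeze (a single RAID group spanning all 36 disks is assumed for simplicity; a real layout would likely use smaller groups, which costs RAID 6 a couple more parity disks):

Code:
# Usable capacity of 36x 3TB spindles under the two candidate layouts.
DISKS, SIZE_TB = 36, 3

raid10_tb = DISKS * SIZE_TB / 2    # mirroring halves capacity
raid6_tb = (DISKS - 2) * SIZE_TB   # two parity disks per group

print(f"RAID 10: {raid10_tb:.0f} TB usable")  # 54 TB -- short of the 60 TB target
print(f"RAID 6:  {raid6_tb:.0f} TB usable")   # 102 TB -- clears it with room to spare

So RAID 10 tops out at 54TB, just under the 60TB requirement, while RAID 6 clears it easily; that is the whole dilemma in two lines.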
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
Fri May 08, 2015 2:22 pm
What's StarWind Virtual SAN doing in this equation? Do you happen to have an interconnect diagram?
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software

-
gorb
- Posts: 9
- Joined: Thu May 07, 2015 11:16 pm
Fri May 08, 2015 3:12 pm
We have 14 Hyper-V hypervisors connected to an IB switch, and then 2x of these SMC storage chassis on IB as well, which will most likely be running as SoFS w/ StarWind v8 and SMB 3.0 in an HA configuration.
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
Fri May 08, 2015 4:45 pm
So you use StarWind to do a back-end for SoFS, right? If so, you'd better stick with hardware RAID 6 on every SoFS node. Storage Spaces are OK, but a $200 RAID controller would give you better performance (especially with NVRAM cache). Dell PERC, for example.
gorb wrote: We have 14 Hyper-V hypervisors connected to an IB switch, and then 2x of these SMC storage chassis on IB as well, which will most likely be running as SoFS w/ StarWind v8 and SMB 3.0 in an HA configuration.
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software

-
gorb
- Posts: 9
- Joined: Thu May 07, 2015 11:16 pm
Fri May 08, 2015 9:05 pm
The SMC chassis have LSI 2108 add-on cards with 512MB cache and BBU. Why did you say it's better to do RAID 6 than RAID 10 with a SoFS?
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
Sat May 09, 2015 1:48 am
I did not say that! RAID 10 is better in most cases. But expensive.
gorb wrote: The SMC chassis have LSI 2108 add-on cards with 512MB cache and BBU. Why did you say it's better to do RAID 6 than RAID 10 with a SoFS?
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software
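A sketch of the write-penalty arithmetic behind that trade-off (the per-disk IOPS figure is an assumption, since the thread never names the drive model; ~80 IOPS is typical for a 7.2k RPM nearline disk):

Code:
# Effective IOPS of 36 spindles at the 75% read / 25% write mix
# mentioned later in the thread, using the classic write-penalty model.
DISKS, IOPS_PER_DISK = 36, 80   # ~80 IOPS assumed per 7.2k RPM drive
READ = 0.75                     # fraction of the I/O mix that is reads

raw = DISKS * IOPS_PER_DISK     # ~2880 IOPS raw

# RAID 10: each write costs 2 backend I/Os (one per mirror copy).
# RAID 6: each small write costs 6 (read data + two parities, write data + two parities).
for level, penalty in (("RAID 10", 2), ("RAID 6", 6)):
    eff = raw / (READ + (1 - READ) * penalty)
    print(f"{level}: ~{eff:.0f} effective IOPS")  # RAID 10 ~2304, RAID 6 ~1280

RAID 10 roughly doubles the effective IOPS of RAID 6 at this mix, but at half the usable capacity; either way the bare spindles fall well short of the 6k IOPS target, which is where controller write-back cache and StarWind's RAM/SSD caching come in.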

-
gorb
- Posts: 9
- Joined: Thu May 07, 2015 11:16 pm
Sat May 09, 2015 10:14 am
Okay, that makes more sense. So doing RAID 6 is not the end of the world, but RAID 10 would still most likely be the best way to go. We're at about 75% read and 25% write. Also, how much of the RAM would be ideal for use as L1 cache? And what SSDs would you suggest, and how many are optimal? Thank you!
-
gorb
- Posts: 9
- Joined: Thu May 07, 2015 11:16 pm
Mon May 11, 2015 1:54 am
Okay, I've done some reading over the weekend. It looks like for RAM you suggest 1GB of RAM per 1TB of storage. We have around 60TB of data, so 60GB of RAM. However, the storage servers have 128GB in them. If we give 16GB to the OS, can we just give the rest to StarWind's L1 cache? As for the SSDs, it would seem that for 60TB of storage we want 8-10TB of SSD? Given that this is going to be primarily for Hyper-V VM storage, is that investment worth it? We're at about 75% read and 25% write. (The arithmetic is sketched at the end of this post.)
Also, I noticed we have licensing for 3 units with unlimited storage. Since we have 2 units in replication, can we run our backup software on top of the 3rd VSAN server and just run DPM and Amanda backups to it?
Lastly, do you have deployment documentation for all of this, and anything on SMB 3 and SoFS?
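A small sketch of the sizing arithmetic from the post above (all figures are gorb's own; whether the full remainder beyond the rule of thumb should go to L1 cache is exactly the open question):

Code:
# RAM split per storage server, using the 1GB-of-RAM-per-1TB-of-storage
# rule of thumb referenced above.
TOTAL_RAM_GB, OS_GB, DATA_TB = 128, 16, 60

l1_rule_gb = DATA_TB * 1                 # 1GB per TB -> 60GB of L1 cache
available_gb = TOTAL_RAM_GB - OS_GB      # 112GB left once the OS takes its 16GB
headroom_gb = available_gb - l1_rule_gb  # 52GB beyond the rule of thumb

print(f"L1 per rule of thumb: {l1_rule_gb} GB")
print(f"Available after OS:   {available_gb} GB")
print(f"Headroom:             {headroom_gb} GB")

# SSD (L2) sizing from the post: 8-10TB against 60TB of data.
for ssd_tb in (8, 10):
    print(f"{ssd_tb} TB SSD covers {ssd_tb / DATA_TB:.0%} of the data set")

So the rule of thumb asks for 60GB of L1 and the servers have 112GB free after the OS, while the proposed 8-10TB of SSD works out to roughly 13-17% of the data set as an L2 tier.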