Q1: Will I have enough "power" (in terms of disks and CPUs) to handle both virtualization (roughly 20 virtual machines with low to medium load, plus one with medium to high load, running SQL Server) and the storage layer?
A: Just looking through the specs, the server looks nice. However, as an engineer, I can't in all honesty confirm the spec without more detail. Assuming a low-to-medium-load VM has 2 cores and 4 GB RAM, these VMs will need 40 virtual cores and 80 GB RAM in total. The server you described has 12 physical cores (24 threads), roughly 48 vcores. So after the VMs you have 16 GB RAM and 8 vcores left, which should be enough for SQL Server (either as a role or as a VM). But that's just theorycrafting.
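The capacity arithmetic in that answer can be sanity-checked with a few lines of Python. This is only a sketch: the 2:1 vCPU-to-thread oversubscription ratio and the 96 GB of total RAM are assumptions inferred from the figures quoted above, not stated outright in the thread.

```python
# Back-of-the-envelope host sizing check (figures from the thread).
vm_count = 20             # low/medium-load VMs
vcpu_per_vm = 2
ram_per_vm_gb = 4

physical_cores = 12
threads = physical_cores * 2               # Hyper-Threading: 24 threads
oversubscription = 2                       # ASSUMED 2:1 vCPU-to-thread ratio
total_vcores = threads * oversubscription  # roughly 48 vcores

total_ram_gb = 96                          # ASSUMED total RAM, consistent with "16 GB left"

spare_vcores = total_vcores - vm_count * vcpu_per_vm    # 48 - 40 = 8 for SQL Server
spare_ram_gb = total_ram_gb - vm_count * ram_per_vm_gb  # 96 - 80 = 16 GB for SQL Server
print(spare_vcores, spare_ram_gb)
```

Whether 8 vcores and 16 GB are "enough" for the SQL Server workload still depends on its actual load profile, which is why the answer hedges.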
Q2: I was thinking about building one huge RAID-10 volume on each server using 10 disks (I want at least one ready spare per server), so 24 × 600 GB disks will give me a usable volume of only 3 TB (3 TB per server, mirrored). Is there anything else I could do to save some space?
A: For spindle drives, we recommend RAID-10 with Write-Back as the write policy and Read-Ahead as the read policy on the RAID controller. As an option, you can test RAID-50/60 (2 spans), but in that scenario the performance will not match RAID-10.
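The usable-capacity figure follows directly from RAID-10 geometry: mirroring halves the raw capacity of whatever disks go into the array. A quick check (a sketch; the 12-disks-per-server split with two spares is an assumption consistent with "24 disks total, 10 in the array, at least one spare per server"):

```python
disk_size_gb = 600
disks_per_server = 12     # ASSUMED: 24 disks split evenly across two servers
array_disks = 10          # 10 disks in the RAID-10 array, the rest as ready spares

raw_gb = array_disks * disk_size_gb   # 6000 GB raw per server
usable_gb = raw_gb // 2               # RAID-10 mirroring keeps half: 3000 GB, i.e. ~3 TB
print(usable_gb)
```

Moving to RAID-50/60 trades some of that mirroring overhead for parity, which is why it saves space but costs performance.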
Q3: Are there any best practices for sizing the HA devices that I'll create inside the 3 TB volume?
A: You can create a full-size StarWind device on this volume and present it to the Failover Cluster. Do not forget about the Quorum/Witness device.
Q4: Is it OK to use only a single 10 Gbit port per server with a direct connection, leaving the second one for future use (e.g. another node)?
A: The best practice here is to use both 10 GbE NICs, for Sync and iSCSI, to achieve the best performance.
Q5: Is it better to use LSFS or image-based volumes? Is it possible to mix them (e.g. LSFS for the high-load VM and image files for the low-to-medium ones)?
A: I would recommend using image files instead of LSFS devices. Yes, it is possible to mix them, but you need to be aware of over-provisioning on the LSFS device and the other requirements described in our KB article here: https://knowledgebase.starwindsoftware. ... scription/
Q: One clarification: did I understand correctly that Write-Back as the write policy must be set on StarWind, while Read-Ahead as the read policy must be set on the RAID controller?
A: I mean the Write-Back policy should be set on the RAID controller. As far as I can see, your RAID controller (based on the LSI SAS2108) supports Write-Back mode.
Q: What about the write policy on the RAID controller?
Q: Moreover, could you please clarify the "power availability/reliability" requirements? I mean: how badly will a StarWind cluster suffer from a power outage?
A: I would recommend avoiding non-graceful shutdowns if you have L1 (RAM) cache in Write-Back mode on StarWind devices, because you will get a full sync after each non-graceful shutdown.
Q: Is it BETTER to avoid a non-graceful shutdown, or is it a MUST?
Q: What about expected performance? 10 × 10k disks in RAID 10 should reach more or less:
- 1450 read IOPS
- 725 write IOPS
A: It depends on the I/O patterns: spindle drives are good at sequential reads/writes but not good enough at random operations.
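Those numbers match the standard RAID-10 estimate: reads are spread across all spindles, while each logical write costs two physical writes (a write penalty of 2). Assuming roughly 145 random IOPS per 10k rpm drive (a common rule-of-thumb figure, not stated in the thread):

```python
disks = 10
iops_per_disk = 145       # ASSUMED typical random IOPS for a 10k rpm SAS drive
raid10_write_penalty = 2  # each write lands on both halves of a mirror pair

read_iops = disks * iops_per_disk                           # 10 * 145 = 1450
write_iops = disks * iops_per_disk // raid10_write_penalty  # 1450 / 2 = 725
print(read_iops, write_iops)
```

As the answer notes, these are random-I/O figures; sequential throughput on spindles will be considerably better.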
Q: And last but not least: with 2.7 TB of usable space (10 × 600 GB disks), how much L1 (RAM) cache should I configure on StarWind?
A: We recommend 1 GB of L1 Write-Back cache per 1 TB of storage, so if the StarWind device is 2.7 TB, I would recommend specifying 2.7 GB of RAM cache.
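The recommended ratio maps directly to the numbers in the question (a one-line calculation using the 1 GB-per-TB rule quoted above):

```python
usable_tb = 2.7
l1_gb_per_tb = 1.0                       # StarWind rule of thumb: 1 GB L1 WB cache per 1 TB
l1_cache_gb = usable_tb * l1_gb_per_tb   # 2.7 GB of RAM cache
print(l1_cache_gb)
```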
Q: And how will this performance change when I add the second node (which will require a sync of every write)?
A: I would recommend using Least Queue Depth or Round-Robin as the MPIO policy; with those policies you will get better performance.
Q: iSCSI1 and iSCSI2 will be the two 10 Gbit ports, and they will host StarWind replication traffic too.
A: I would recommend using a dedicated 10 GbE port per server for replication (StarWind Sync), and a direct (port-to-port) connection is highly recommended for StarWind Synchronization. If you put iSCSI and Sync on one channel, you could run into performance issues.
Q: The virtual switch will use all the other 1 Gbit ports, and I'll add two more virtual NICs for the StarWind heartbeat.
A: Basically, you can team those 1 GbE NICs and create the virtual switch for your VMs on top of Microsoft teaming.
Q: In this way I'll have two different 10 Gbit channels, but I can't understand what you are advising me to do with them. Do you suggest using one 10 Gbit channel for Sync and the other for iSCSI?
A: Yes, that's correct. To avoid any issues with the sync channel and performance, we always recommend connecting the 10 GbE channels directly and using separate ports for Sync and iSCSI.
Q: I read that the L2 cache algorithm wasn't well optimized in the past. What's the situation now? Is it worth using?
A: If your environment is read-intensive, you can use L2 caching. StarWind L2 cache currently works only in Write-Through mode.
Q: What about using a single 240 GB eMLC SSD per server (Seagate Nytro XF1230)? Am I wrong, or will StarWind mirror them, so "local" RAID could be avoided?
A: Feel free to use this scenario. And yes, it will be "mirrored via network".
Q: What will happen to the HA devices and/or the L2 cache when one SSD breaks?
A: The HA device will show as "Non-active" on the site that has the failed drive.