New vSAN cluster / design considerations

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

Post Reply
Masamune
Posts: 1
Joined: Fri Sep 20, 2019 12:15 pm

Fri Sep 20, 2019 12:20 pm

Hi Starwind Forum,

I'm looking for a new server solution for a small customer (~15 VMs; 1.5 TB of data; a normal Windows environment, nothing special).
The customer wants some sort of high availability, and I would like to be able to update the servers during normal business hours, so using two servers and mirroring everything with StarWind vSAN seems like a good idea.

Design considerations:
- I would like to use Microsoft Storage Spaces so I don't need an extra RAID controller card
- Storage should be flash-only, as we don't need much space and flash is relatively cheap these days
- I thought about using Intel Optane SSDs as a write-back cache for the virtual disks and "normal" SSDs configured in parity mode as storage. Not that I need that kind of performance, but I like using Optane as a write buffer with its 10 DWPD endurance. It should be slightly more expensive than just using two 4 TB SSDs in mirror mode, but it pays off as soon as I need to expand the storage.
- The iSCSI synchronization NICs could be connected directly from host to host, so no switch is involved
- The physical backup server could be used as the witness service
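Since the parity-vs-mirror trade-off drives the drive count, here is a quick sketch of the capacity math (my own illustration, not StarWind or Microsoft guidance; the single-parity (n-1)/n efficiency and the idea that mirrors grow in pairs are assumptions about the planned layouts):

```python
# Rough usable-capacity comparison for the two Storage Spaces layouts
# discussed above. Single parity keeps roughly (n-1)/n of raw capacity;
# a two-way mirror keeps 1/2. Drive sizes in TB are illustrative.

def parity_usable(drive_tb: float, n_drives: int) -> float:
    """Approximate usable TB for single parity across n identical drives (n >= 3)."""
    return drive_tb * (n_drives - 1)

def mirror_usable(drive_tb: float, n_drives: int) -> float:
    """Approximate usable TB for a two-way mirror across n identical drives."""
    return drive_tb * n_drives / 2

# 3x 2 TB in parity vs 2x 4 TB in mirror: both land at ~4 TB usable.
print(parity_usable(2, 3))   # 4
print(mirror_usable(4, 2))   # 4.0

# Expansion: a single fourth 2 TB drive adds 2 TB usable under parity,
# while the mirror would need another full 4 TB pair.
print(parity_usable(2, 4))   # 6
```

This is the "pays off when expanding" argument in numbers: parity grows one drive at a time, the mirror only in pairs.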

I came up with the following hardware for my needs:
- 2 server nodes with Windows Server 2019 as Hyper-V hosts
- Single 16-core CPU
- 256 GB of RAM (the VMs need about 125 GB today)
- 2x Intel Optane (the cheapest available)
- 3x 2 TB Intel or Micron SSDs (around 4 TB usable capacity in a parity configuration)
- Dual 10G Ethernet adapter (one port for StarWind synchronization, the other for frontend traffic)
- Dual Gigabit onboard (for iSCSI heartbeat)
- StarWind vSAN for Hyper-V Standard Edition

I should be under 10k for one server, which would be well within budget.
Another physical host with Veeam Backup, 10G switches, and two UPSes are already in place.

A few questions:
- Can I connect all three servers (the 2 nodes and the backup server with the witness) directly via Gigabit for the heartbeat? That is, two ports of each server connected directly to the other servers without any switches?
- With Intel Optane as the write cache, would you still enable the software cache?
- Can I mount the iSCSI volume(s) on the backup server to back up the VMs using Veeam Backup?
- Is this whole concept a good idea, or would you change something fundamentally?

Kind regards
Michael (staff)
Staff
Posts: 317
Joined: Thu Jul 21, 2016 10:16 am

Wed Sep 25, 2019 1:16 pm

Hello Masamune,

Design considerations:
I would like to use Microsoft Storage Spaces
Please remember that simple Storage Spaces layouts have no redundancy in case of a drive failure.
- The iSCSI synchronization NICs could be connected directly from host to host, so no switch is involved
- The physical backup server could be used as the witness service
It is not clear which StarWind failover strategy you are going to use. Here is a description: https://www.starwindsoftware.com/help/S ... egies.html
I believe the best option would be to assign the dual 10G interfaces to Synchronization and partner iSCSI traffic (connected directly) and use the Heartbeat failover strategy. No witness node is required.

Regarding your questions:
- Can I connect all three servers (the 2 nodes and the backup server with the witness) directly via Gigabit for the heartbeat? That is, two ports of each server connected directly to the other servers without any switches? - Technically it is possible, but I would recommend going with the Heartbeat failover strategy instead.
- With Intel Optane as the write cache, would you still enable the software cache? - I believe it's not necessary.
- Can I mount the iSCSI volume(s) on the backup server to back up the VMs using Veeam Backup? - I see no issues here.