Cache and network questions / 3-Node Hyper-V Cluster

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

g0drealm
Posts: 1
Joined: Tue Oct 30, 2018 3:25 pm

Tue Nov 06, 2018 12:38 pm

Hello everybody,

I'm currently testing StarWind VSAN in a 3-node Hyper-V cluster for our new ERP system (4 VMs, 20 users). I'm actually a trained Linux administrator and am more familiar with Ceph and Proxmox than with Windows Server :D

I have successfully set up the cluster based on the StarWind technical paper "Hyperconverged 3-Node Scenario with Hyper-V Cluster" and have some questions about the best possible configuration.

The 3 servers look like this:

- 2x Intel Xeon E5-2620 v4
- 64 GB DDR4 RAM
- LSI/Avago MegaRAID 9361 with CacheVault module
- 2x Intel 240 GB SSD for Windows Server 2016 (RAID 1)
- 4x 2 TB SAS III HGST 3.5" 7.2K in RAID 10 / 4 TB cluster storage
- 2x 2 TB SAS III HGST 3.5" 7.2K as spares
- 1x Intel X550-T2 10 Gbit RJ45 dual-port
- 2x onboard 1 Gbit
- 1x Intel I350 dual-port 1 Gbit network card

Questions:

1) I currently have the four 1 Gbit ports configured as an LACP team connected to our stacked core switch. The LACP team serves as the vSwitch and at the same time carries heartbeat and iSCSI traffic.
Would it be better to separate these, or can it stay that way?

2) I don't know exactly which cache settings I should choose or which ones give the best performance. Hardware or software cache? Would it be an advantage to sacrifice a spare drive and use, for example, a Samsung EVO 860 Pro as an L2 cache for the 4 TB storage? Would that increase performance?

Thanks in advance for your help

Best regards,

g0drealm
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Tue Nov 06, 2018 2:00 pm

1) I currently have the four 1 Gbit ports configured as an LACP team connected to our stacked core switch. The LACP team serves as the vSwitch and at the same time carries heartbeat and iSCSI traffic.
Would it be better to separate these, or can it stay that way?
No. We do not recommend any form of NIC teaming for iSCSI traffic. You can configure an LACP team from the 2x 1 Gbit NICs for management, and use two separate vSwitches for iSCSI traffic.
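As a rough sketch of that separation (adapter, team, and switch names below are placeholders, and the IP scheme is an assumption — adjust to your environment), each node could be configured like this in PowerShell:

```powershell
# Check actual adapter names first with Get-NetAdapter.
# LACP team from the two onboard 1 Gbit ports -- management/VM traffic only
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "Onboard1","Onboard2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "vSwitch-Mgmt" -NetAdapterName "MgmtTeam" -AllowManagementOS $true

# Dedicated, un-teamed 10 Gbit ports for iSCSI/sync -- one subnet per port
New-NetIPAddress -InterfaceAlias "10G-Port1" -IPAddress 172.16.10.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "10G-Port2" -IPAddress 172.16.20.1 -PrefixLength 24

# Path redundancy for iSCSI comes from MPIO, not from teaming
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
```

The point of the last block is that iSCSI gets its redundancy and load balancing from MPIO across the two independent paths, which is why teaming the iSCSI ports is unnecessary and discouraged.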
2) I don't know exactly which cache settings I should choose or which ones give the best performance. Hardware or software cache? Would it be an advantage to sacrifice a spare drive and use, for example, a Samsung EVO 860 Pro as an L2 cache for the 4 TB storage? Would that increase performance?
You can find the recommended caching settings for the RAID array by following this link. You can also add more drives to the RAID 10 array to get better array performance.
In StarWind VSAN, the L2 cache operates in write-through mode only, which means it accelerates read operations only.