Sanity Check please...

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

BenM
Posts: 35
Joined: Wed Oct 24, 2018 7:17 am

Wed Oct 24, 2018 11:08 am

Hi,
Before I launch into configuring my two-node Hyper-V failover cluster using StarWind Free to provide the Witness and CSV... could someone please sanity check my proposed setup? Better to get it right before installing everything than to have to go back and redo things.

Two Dell R620 nodes, 5 TB of disk space - two Intel 4-port 1 GbE cards plus 4 built-in 1 GbE NIC ports. I would love a 10 GbE card but, unfortunately, that is not possible.

I am migrating VMs from 4 existing hosts (no redundancy, failover or HA at all), which are configured with virtual switches for VLAN-A and VLAN-B and a Live Migration network.

So on each new host I am planning the following (a rough PowerShell sketch follows the list):

Card 1:
if0,1 - Windows Team VLAN-A, Hyper-V Virtual Switch A, IP 10.35.5.x/16
if2,3 - Live Migration, Windows Team LiveMig, IP 192.168.0.x/24
Card 2:
if0,1 - Windows Team VLAN-B, Hyper-V Virtual Switch B, IP 10.34.5.x/16
if2,3 - Windows Team Cluster, cluster-only comms, IP range 192.168.250.x/24
Built-in:
if0 - StarWind heartbeat/iSCSI - no teaming, not attached to the Hyper-V networking at all, node 1 192.168.200.x/24, node 2 192.168.200.y/24
if1 - StarWind heartbeat/iSCSI - no teaming, not attached to the Hyper-V networking at all, node 1 192.168.201.x/24, node 2 192.168.201.y/24
if2 - StarWind Sync - no teaming, not attached to the Hyper-V networking at all, node 1 192.168.202.x/24, node 2 192.168.202.y/24
if3 - StarWind Sync - no teaming, not attached to the Hyper-V networking at all, node 1 192.168.203.x/24, node 2 192.168.203.y/24
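Roughly what that looks like in PowerShell for Card 1 and the first built-in port (just a sketch - the adapter and team names are placeholders for whatever the ports are actually called on the R620s, and the addresses would be filled in per node):

# Card 1, ports 0+1: team for VLAN-A plus its Hyper-V virtual switch
New-NetLbfoTeam -Name "Team-VLAN-A" -TeamMembers "Card1-Port0","Card1-Port1" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
New-VMSwitch -Name "vSwitch-A" -NetAdapterName "Team-VLAN-A" -AllowManagementOS $true

# Card 1, ports 2+3: team for Live Migration with a static address
New-NetLbfoTeam -Name "Team-LiveMig" -TeamMembers "Card1-Port2","Card1-Port3" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
New-NetIPAddress -InterfaceAlias "Team-LiveMig" -IPAddress 192.168.0.1 -PrefixLength 24

# Built-in port 0: StarWind heartbeat/iSCSI - no team, no virtual switch, just a static IP
New-NetIPAddress -InterfaceAlias "Builtin-Port0" -IPAddress 192.168.200.1 -PrefixLength 24

Card 2 and the remaining built-in ports would follow the same pattern.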

Witness and CSV are provided by StarWind as virtual disks.

I did read in another (similar) post that the heartbeat/iSCSI channel could also be used as a Live Migration channel - if that is the case, I can repurpose the proposed Live Migration interface team for other things.
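If that holds, I assume something along these lines would point Live Migration at the heartbeat/iSCSI subnets instead (Hyper-V cmdlets, run on each node; in a cluster the preferred live migration networks can also be ordered in Failover Cluster Manager):

# Allow live migrations and restrict them to the heartbeat/iSCSI subnets
Enable-VMMigration
Add-VMMigrationNetwork 192.168.200.0/24
Add-VMMigrationNetwork 192.168.201.0/24
# Optional: cap concurrent migrations so the 1 GbE links are not swamped
Set-VMHost -MaximumVirtualMachineMigrations 2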

Please do not ask why there are two VLAN Hyper-V virtual switches - that is a legacy of the network provider (who manages the network infrastructure) and is constrained by existing systems. That configuration is unfortunately immutable.

Will the StarWind config work? Having studied the configuration document, I think it will; the question here is whether it could be done better with the existing hardware.

Will it be faster than a fast thing? Probably not, but that isn't the goal here - I just need something that is fault tolerant with reasonable performance.

Thanks for your time & consideration.

B.
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Wed Oct 24, 2018 2:05 pm

I don't see any reason why this should not work.
The only important thing to change in your proposed setup is to put the iSCSI and Sync links on different physical network cards.
For instance, use Card 1 if0 plus Built-in if0 for iSCSI, and Card 2 if0 plus Built-in if1 for Sync. This way you will be protected against a failure of the built-in card, which in the configuration you proposed would break everything, i.e. both iSCSI and both Sync links.
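Purely as an illustration of the idea (the interface names below are placeholders - use whichever ports you end up freeing on each card, and your own subnets):

# iSCSI/heartbeat: one port on Card 1, one on the built-in card
New-NetIPAddress -InterfaceAlias "Card1-Port0" -IPAddress 192.168.200.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Builtin-Port0" -IPAddress 192.168.201.1 -PrefixLength 24
# Sync: one port on Card 2, one on the built-in card
New-NetIPAddress -InterfaceAlias "Card2-Port0" -IPAddress 192.168.202.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Builtin-Port1" -IPAddress 192.168.203.1 -PrefixLength 24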
Feel free to let me know if you need any additional clarification.
BenM
Posts: 35
Joined: Wed Oct 24, 2018 7:17 am

Fri Oct 26, 2018 7:19 am

Great stuff - thanks for the input.

Ben
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Sat Oct 27, 2018 3:56 pm

You are welcome!
BenM
Posts: 35
Joined: Wed Oct 24, 2018 7:17 am

Thu Nov 01, 2018 11:08 am

OK - so cluster day arrives...

Running through the instructions, I find myself unclear on how to connect the Witness disk.

In step 56 of the instructions it says:

"NOTE: It is recommended to connect the Witness device only by loopback (127.0.0.1) address. Do not connect the target to the Witness device from the partner StarWind node."

However, later on the instructions say to connect the Witness disk with Failover MPIO - any MPIO would be hard if you are only connected through 127.0.0.1, as there is only one path :)

If I only connect through 127.0.0.1, then when validating the cluster I (understandably) get a warning that only one path to the Witness disk exists (the CSV validates as expected). There is no mention of this warning in the 'Creating Cluster' steps... is it a real problem, or a warning that doesn't matter?
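For reference, this is roughly how I connected the Witness target on each node - loopback only, via the iSCSI initiator cmdlets (the IQN filter is just a placeholder for the actual Witness target name):

# Register the loopback portal and connect only the local Witness target, persistently
New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1
Get-IscsiTarget | Where-Object { $_.NodeAddress -like "*witness*" } | Connect-IscsiTarget -IsPersistent $true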
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Thu Nov 01, 2018 8:20 pm

With only one connection to the Witness target, the Failover Only policy will effectively be there in any case. Basically, with a single path it does not really matter whether the policy is Failover Only, Round Robin or Least Queue Depth. You can keep whichever you like.
When validating the cluster with only one path to the Witness target from each node, you will see a red message in the list about MPIO (which is rather obvious in the case of a single path). Based on our experience, the setup we suggest is more stable than the traditional one with connections to both nodes' Witness targets. You can ignore that warning.
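If you want to make the default MSDSM policy explicit anyway, something like this should do it (MPIO PowerShell module, assuming the Multipath I/O feature is installed; FOO stands for Failover Only):

# Set the global default load balance policy to Failover Only and verify it
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO
Get-MSDSMGlobalDefaultLoadBalancePolicy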
BenM
Posts: 35
Joined: Wed Oct 24, 2018 7:17 am

Wed Nov 07, 2018 1:13 pm

Hi Boris

Great stuff - I will ignore the warning message.

Everything seems to be working really well - rebooting either node keeps the Hyper-V guests running as expected; a great improvement on our previous configuration.

Thanks for your support during the setup - it's a great product and we really appreciate being able to use the free version in production (albeit with PowerShell management only).

Ben
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Wed Nov 07, 2018 2:41 pm

You are welcome. Feel free to post any further questions if you have any.