Sanity Check please...
Posted: Wed Oct 24, 2018 11:08 am
Hi,
Before I launch into configuring my Hyper-V failover cluster with two nodes, using Starwind Free to provide the Witness and CSV... could someone please sanity-check my proposed setup? It's better to get it right before installing everything than to have to go back and redo things.
Two Dell R620 nodes, 5 TB of disk space each, with two Intel 4-port 1 GbE cards plus 4 built-in 1 GbE NIC ports. I would love a 10 GbE card but, unfortunately, that is not possible.
I am migrating VMs from 4 existing hosts (no redundancy, failover, or HA at all) which are configured with virtual switches for VLAN-A and VLAN-B and a Live Migration network.
So on each new host I am planning:
Card 1:
if0,1 - Windows Team VLAN-A, Hyper-V Virtual Switch A, IP 10.35.5.x/16
if2,3 - Live Migration, Windows Team LiveMig, IP 192.168.0.x/24
Card 2:
if0,1 - Windows Team VLAN-B, Hyper-V Virtual Switch B, IP 10.34.5.x/16
if2,3 - Windows Team Cluster, cluster-only comms, IP 192.168.250.x/24
Built-in:
if0 - Starwind heartbeat/iSCSI - no teaming, not attached to the Hyper-V networking at all, node 1 192.168.200.x/24, node 2 192.168.200.y/24
if1 - Starwind heartbeat/iSCSI - no teaming, not attached to the Hyper-V networking at all, node 1 192.168.201.x/24, node 2 192.168.201.y/24
if2 - Starwind sync - no teaming, not attached to the Hyper-V networking at all, node 1 192.168.202.x/24, node 2 192.168.202.y/24
if3 - Starwind sync - no teaming, not attached to the Hyper-V networking at all, node 1 192.168.203.x/24, node 2 192.168.203.y/24
Witness and CSV provided by Starwind as virtual disks.
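For concreteness, here is a rough PowerShell sketch of what I have in mind on each node. Adapter names, team names, and the example IPs are all placeholders (the real names come from Get-NetAdapter, and the x/y octets above are deliberately unspecified) - this is just the shape of the config, not a finished script:

```powershell
# SKETCH ONLY - adapter names ("SLOT3-Port1", "NIC1", etc.) and IPs are
# placeholders; substitute the actual names from Get-NetAdapter.

# Card 1: team for the VLAN-A virtual switch, plus the Live Migration team
New-NetLbfoTeam -Name "Team-VLAN-A"  -TeamMembers "SLOT3-Port1","SLOT3-Port2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-NetLbfoTeam -Name "Team-LiveMig" -TeamMembers "SLOT3-Port3","SLOT3-Port4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Card 2: team for the VLAN-B virtual switch, plus the cluster-comms team
New-NetLbfoTeam -Name "Team-VLAN-B"  -TeamMembers "SLOT4-Port1","SLOT4-Port2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-NetLbfoTeam -Name "Team-Cluster" -TeamMembers "SLOT4-Port3","SLOT4-Port4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Hyper-V virtual switches bound to the two VLAN teams
New-VMSwitch -Name "vSwitch-A" -NetAdapterName "Team-VLAN-A" -AllowManagementOS $true
New-VMSwitch -Name "vSwitch-B" -NetAdapterName "Team-VLAN-B" -AllowManagementOS $true

# Built-in ports stay un-teamed for Starwind - static IPs only, e.g. on node 1
# (example host addresses; each subnet carries one link, node-to-node):
New-NetIPAddress -InterfaceAlias "NIC1" -IPAddress 192.168.200.1 -PrefixLength 24  # heartbeat/iSCSI 1
New-NetIPAddress -InterfaceAlias "NIC2" -IPAddress 192.168.201.1 -PrefixLength 24  # heartbeat/iSCSI 2
New-NetIPAddress -InterfaceAlias "NIC3" -IPAddress 192.168.202.1 -PrefixLength 24  # sync 1
New-NetIPAddress -InterfaceAlias "NIC4" -IPAddress 192.168.203.1 -PrefixLength 24  # sync 2
```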
I did read in another (similar) post that the heartbeat/iSCSI channel could also be used as a Live Migration channel - if that is the case, I can repurpose the proposed Live Migration team for other things.
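If I do go that route, my understanding is that which networks carry Live Migration can be steered per-cluster via the "Virtual Machine" resource type's MigrationExcludeNetworks parameter (or the Live Migration settings in Failover Cluster Manager). A sketch of what I think that would look like - network names here are placeholders for whatever the cluster auto-names them:

```powershell
# SKETCH - network names are placeholders; check Get-ClusterNetwork first.
# Role: 0 = not used by cluster, 1 = cluster only, 3 = cluster + client.
(Get-ClusterNetwork -Name "Starwind-Sync-1").Role = 0   # keep sync links private

# Exclude every cluster network except the intended Live Migration one(s)
$exclude = (Get-ClusterNetwork | Where-Object Name -ne "Heartbeat-iSCSI-1").ID
Get-ClusterResourceType -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks `
        -Value ([string]::Join(";", $exclude))
```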
Please do not ask why there are two VLAN Hyper-V virtual switches - that is a legacy of the network provider (who manages the network infrastructure) and is constrained by existing systems. That configuration is unfortunately immutable.
Will the Starwind config work? Having studied the configuration document, I think it will; the question here is whether it could be done better with the existing hardware.
Will it be faster than a fast thing? Probably not, but that isn't the goal here - I just need something that is fault tolerant with reasonable performance.
Thanks for your time & consideration.
B.