3 nodes hyperconverged, limited network capabilities



sly
Posts: 2
Joined: Mon Nov 25, 2019 1:18 pm

Tue Nov 26, 2019 12:32 pm

We are planning to upgrade our small office virtualization infrastructure to a hyperconverged cluster. Currently, failover is manual, using the Hyper-V replication feature.

I would like a second opinion on whether we have understood everything correctly.

There are 3 nodes running free Hyper-V 2016 Core (hardware is HP ML310e Gen8, HP ML10 Gen9, and Fujitsu RX1330 M4).
We are planning to install Intel X520-DA2 NICs (two SFP+ ports) or, better yet, Intel XL710 quad-port NICs (4 SFP+ ports) in all hosts, and connect them using SFP+ direct-attach cables and fiber (one of the nodes is in another rack room for safety).

Trouble is, we only have 3 fiber ports available on each end, so we can connect directly either StarWind sync or iSCSI/heartbeat, but not both. Which of those is more demanding? The other one would need to be switched through the remaining port.
Also, both sync and iSCSI/heartbeat would run on the same hardware. Or would it be better to get two separate 2-port NICs instead of one 4-port? I am not sure all of the servers support two PCIe 3.0 x8 cards, and such an approach is more expensive. We could provide an additional heartbeat channel via some VLAN through the regular network NIC.

I have adapted a schema to illustrate the situation. The red line represents the gap between rack rooms, which are connected with the said 3 fiber lines. The example has iSCSI as direct connections and sync via switch, but it could be vice versa.
Also, is the green part redundant, or does it provide any benefit?
Limited network 3-node proposal. Image adapted from Resource Library, hope you don't mind.
3-node-limited-network.png (48.49 KiB)
Will all of this work with the free Hyper-V Server license?
Will this work with StarWind VSAN Free (which is limited to 3 nodes, as I understand)?
Maybe it is cheaper/easier to buy some off-the-shelf storage solution for our setup?
yaroslav (staff)
Staff
Posts: 2279
Joined: Mon Nov 18, 2019 11:11 am

Thu Nov 28, 2019 11:34 am

StarWind recommends using a direct connection for Synchronization and iSCSI channels, but it is also possible to use switches for these types of traffic. Considering that you are short on NICs, you could connect Synchronization channels directly, letting iSCSI/Heartbeat go through 2 switches.
Here’s the recommended schema that should fit your case (the drawing is retrieved from our StarWind VSAN best practices: https://www.starwindsoftware.com/best-p ... practices/).
Here is the setup configuration we recommend
what we recommend.png (71.96 KiB)
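On each host, the idea is to give every Sync and iSCSI port its own small, non-routed subnet. Here is a rough PowerShell sketch for one node; the interface aliases and subnets are just placeholders, adjust them to your naming and addressing plan.
Code:
# Dedicated subnets for the SFP+ ports on Node1 (aliases and addresses are examples only)
New-NetIPAddress -InterfaceAlias "Sync1" -IPAddress 172.16.10.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI1" -IPAddress 172.16.20.1 -PrefixLength 24
# Keep the storage interfaces out of DNS registration
Set-DnsClient -InterfaceAlias "Sync1","iSCSI1" -RegisterThisConnectionsAddress $false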
Dual Synchronization interfaces in your scheme are there for redundancy. Since you are short on NICs, leave only 2 channels for synchronization on each node; here is exactly how you can design the environment.
And here is how you can build the environment
How you could build the environment.png (34.21 KiB)
Such a configuration is pretty risky, as you do not have a connection between Node1 and Node3. The Sync NICs and the links between them are a single point of failure. And, unfortunately, your HPE servers do not allow adding 2 separate fiber NICs for Synchronization traffic, meaning that you need to use 1 dual-port NIC. Keep in mind that this setup is very unreliable.
If one of your switches goes down, your cluster remains operational because only iSCSI flows through them. Some disks, in turn, remain available to the cluster as they are connected over local loopback sessions. The cluster remains operational only if it is configured as a 3-node replica; if configured as a grid, devices are not connected over loopback iSCSI sessions. Keep in mind that you should also have redundant management connections to both switches.
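For reference, the loopback connections mentioned above are just regular iSCSI sessions to 127.0.0.1 on each node. A minimal sketch; the target IQN is a placeholder, take the real one from the StarWind console.
Code:
# Connect the local StarWind device over loopback so it stays available if switched links drop
New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:node1-csv1" -IsPersistent $true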
The free license seems to allow you to build a 3-node Hyper-V cluster. Contact Microsoft to learn more and to be 100% sure.
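If it helps, creating the cluster itself on Hyper-V Server Core comes down to a few PowerShell commands. A rough sketch, with the node names and cluster IP as placeholders:
Code:
# Run the feature install on each node, then validate and create the cluster from any of them
Install-WindowsFeature Failover-Clustering -IncludeManagementTools
Test-Cluster -Node "HV-NODE1","HV-NODE2","HV-NODE3"
New-Cluster -Name "HV-CLUSTER" -Node "HV-NODE1","HV-NODE2","HV-NODE3" -StaticAddress 192.168.1.50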
The StarWind VSAN Free license is limited to three nodes, so it should work in your case. But there is a trade-off: the free version is community-supported. To put it simply, should any problems occur, you need to post on the StarWind Forum. If you need a faster SLA response time, you need to buy StarWind VSAN and the corresponding support plan. Please talk with our sales team; they will be glad to assist you with picking the VSAN edition and support plan.
StarWind Software is OK with DIY setups. VSAN works just great in DIY environments (we do not have any HCLs). The recommendations we provide here are to ensure that systems powered by StarWind VSAN work well and fast. If you need a plug-and-play, off-the-shelf solution, we have a hardware offering – StarWind HyperConverged Appliance. We do the specification and configuration for you; on top of that, we assist you with live migration. Again, reach out to the sales team to learn more.
sly
Posts: 2
Joined: Mon Nov 25, 2019 1:18 pm

Mon Dec 02, 2019 1:24 pm

Thank You!
I see, so it is more important for the sync link to be connected directly than the iSCSI/heartbeat links.

As I stated, we can also use the regular 1G NIC for an additional heartbeat. Or is this unnecessary/pointless?

Now I understand that a better approach would be placing all 3 nodes in the same rack and managing the "off-site" backups independently.
yaroslav (staff)
Staff
Posts: 2279
Joined: Mon Nov 18, 2019 11:11 am

Fri Dec 06, 2019 3:43 pm

Yes, Sync should go directly. And yes, if it is possible, add one more direct HB connection between hosts; 1 Gb is enough for a StarWind heartbeat.
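If you go the VLAN route for the extra heartbeat, a simple way is to add a host vNIC on the existing 1G virtual switch and tag it. A sketch, where the switch name, VLAN ID, and subnet are assumptions:
Code:
# Extra heartbeat vNIC on the management OS, tagged into its own VLAN
Add-VMNetworkAdapter -ManagementOS -Name "Heartbeat" -SwitchName "vSwitch-1G"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Heartbeat" -Access -VlanId 30
New-NetIPAddress -InterfaceAlias "vEthernet (Heartbeat)" -IPAddress 172.16.30.1 -PrefixLength 24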
Yes, you are right: having all the servers in one room is always better, as it allows direct connections everywhere.