New setup scenario / sanity check

Ardie
Posts: 10
Joined: Mon Jul 30, 2018 3:58 pm

Mon Jul 30, 2018 4:13 pm

Hello,

I am new to this product and am just looking for advice or confirmation on this plan. I have been following along as best I can with this guide (https://www.starwindsoftware.com/resour ... erver-2016), however as I only have the one host available at the moment I am not 100% sure which items I can skip and come back to.

* I have a new server running Hyper-V Server 2016 (192.168.0.20)
* I have an old server running Hyper-V Server 2012 R2 (192.168.0.10)

My goal is to set up the 2016 server, migrate VMs, bring down the 2012 R2 server, and upgrade it to 2016 as well.

The 2016 server has an array of SSDs and an array of HDDs; the storage in the 2012 R2 server will match once I can bring it down and perform the upgrades. I have installed VSAN and configured a device on the SSD array called "SSDStorage", and another device called "HDDStorage" on the HDD array. As I currently only have the one host, I cannot set up replication at this time.

1) Is there anything I should be aware of or try to do differently regarding the above configuration?

2) In iscsicpl, I have added only 127.0.0.1 to the target portals; should I be adding the 192.168.0.20 address as well?

3) Both servers have only two ethernet adapters; at this time I have configured only the first NIC with an IP address on the private LAN. I assume I do not need to configure anything for the second NIC until I have the second host operational?

Thanks!
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Mon Jul 30, 2018 9:27 pm

Ardie,

I would say that instead of 2016, in your case it would be better to deploy a second 2012 R2 (e.g. an evaluation copy), set up a cluster using these two servers, and make sure they are operating fine and the VMs are clustered with their storage located on the Cluster Shared Volumes. After that, the Cluster Rolling Upgrade procedure can bring both servers to 2016 without any need to stop your production. Here is my article on how this can be achieved - https://www.starwindsoftware.com/blog/c ... erver-2016
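Roughly, the finishing step of that procedure looks like this once both nodes have been reinstalled with 2016 (just a sketch using the standard failover clustering cmdlets - see the article for the full process):

```powershell
# Check the version reported by each node after both have been brought to 2016
Get-ClusterNode | Format-Table Name, State, MajorVersion, MinorVersion

# While the cluster runs in mixed mode, the functional level stays at the 2012 R2 value.
# Once every node runs 2016, finalize the upgrade (this step is irreversible).
Update-ClusterFunctionalLevel

# Confirm the new functional level (9 corresponds to Windows Server 2016)
Get-Cluster | Format-Table Name, ClusterFunctionalLevel
```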
As for your questions, here are my answers:
1. If you have enough capacity on your HDDs and SSDs altogether to host your VMs (I presume the answer is "yes"), the approach I suggested above is valid for you. This way you will be able to completely avoid downtime for your VMs.
2. Once the partner host is added, you will need to add its iSCSI connection IP in iscsicpl as well (see the sketch after this list).
3. Although two NICs can physically work, the general requirement is two dedicated links (one for iSCSI and one for Synchronization) in addition to the management link. So I would suggest adding at least one additional adapter to each server. If that is not possible, you will have to use your management network as your iSCSI link, which is not going to be good for data transfer performance.
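If you prefer scripting it over clicking through iscsicpl, a rough sketch of adding the portals and connecting the targets would look like this (the partner IP and the target IQN below are only examples - yours will differ):

```powershell
# Local (loopback) portal - the host connects to its own StarWind targets through it
New-IscsiTargetPortal -TargetPortalAddress "127.0.0.1"

# Partner portal - add this only once the second node and its iSCSI link exist
# (192.168.0.10 is used here purely as an example of the partner's iSCSI IP)
New-IscsiTargetPortal -TargetPortalAddress "192.168.0.10"

# List the discovered targets and check which are connected
Get-IscsiTarget | Format-Table NodeAddress, IsConnected

# Connect a target persistently with multipath enabled (example IQN)
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:node1-ssdstorage" `
    -TargetPortalAddress "127.0.0.1" -IsPersistent $true -IsMultipathEnabled $true
```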
Ardie
Posts: 10
Joined: Mon Jul 30, 2018 3:58 pm

Mon Jul 30, 2018 10:40 pm

Hi Boris,

Thank you for the information. The idea of 2012 R2 on the new host is a good one; unfortunately, I already have a VM running on the new 2016 host, and there is not enough storage space until the older host is upgraded, as there are also VMs coming from a third host which will be decommissioned. So I will need to proceed with the original migration plan.

As for the NICs, is the heartbeat really that much traffic, or is the concern just that it may have trouble communicating if the management link is saturated? I assume that each host would prioritize reading data off its local iSCSI connection and not generally access VM data directly from the other host - is this a correct assumption? If I can get anything approved to add, hopefully it will be 10GbE cards for each host, which I would then use for the replication traffic.

Again, as I was following through (part of) the above-mentioned guide, I skipped the witness disk section as I only have the one host available at the moment. For the two-host setup, would it be advisable to create a "witness" share on another device (a NAS) and use that? I am not clear on where I would place this otherwise, or how placing it on each host would help if one host goes down.

Thanks again.
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Tue Jul 31, 2018 5:28 pm

Heartbeat traffic is not heavy at all, and it is still possible to combine the heartbeat and management networks into one. But this is not only about heartbeat. StarWind nodes use two kinds of links for transferring data to and from the devices. The first type is the synchronization links, used for mirroring data across the nodes. These are advised to be direct connections, to avoid any additional single points of failure in the process of synchronizing the StarWind devices. The other type is the iSCSI, or data transfer, links. These are responsible for I/O operations between the compute resources of your system and the StarWind devices/targets. We do not advise running iSCSI and synchronization links over the same physical connection. At the same time, it is generally possible to merge the functionality of the management link and the iSCSI one, but bear in mind that if your management network has low throughput because VMs are actively using it, this will impact storage I/O.
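Just to illustrate the link roles, a possible addressing layout for two nodes could look like this (the adapter names and subnets are examples only, assuming the adapters have been renamed accordingly):

```powershell
# Management / heartbeat: 192.168.0.x  (already in place)
# iSCSI / data:           172.16.10.x  on a dedicated NIC
# Synchronization:        172.16.20.x  on a direct connection between the nodes

# On node 1:
New-NetIPAddress -InterfaceAlias "iSCSI" -IPAddress 172.16.10.10 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Sync"  -IPAddress 172.16.20.10 -PrefixLength 24

# On node 2:
New-NetIPAddress -InterfaceAlias "iSCSI" -IPAddress 172.16.10.20 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Sync"  -IPAddress 172.16.20.20 -PrefixLength 24
```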
For a two-node setup I would recommend using a simple disk witness as instructed by our configuration guides. One small replicated StarWind disk (e.g. 1GB) will do the trick. The purpose of this witness disk is to help the surviving node stay online after the other node goes down. The cluster needs a majority of votes among operational nodes to stay alive. This is why we create that witness disk and connect it via the loopback interface. In case one node fails (one vote), the other one (one vote) will stay operational because it still has a connection to the witness disk (+1 vote), thus keeping the majority of votes.
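Once the cluster is formed and the small StarWind device is added as a cluster disk, you can check the quorum configuration and the votes roughly like this (the disk resource name is only an example):

```powershell
# Show the current quorum configuration (should report a disk witness once configured)
Get-ClusterQuorum | Format-Table Cluster, QuorumResource, QuorumType

# Configure the small replicated StarWind device as the disk witness
# ("Cluster Disk 3" is just an example of the cluster disk resource name)
Set-ClusterQuorum -DiskWitness "Cluster Disk 3"

# Each node should carry NodeWeight = 1; the witness provides the third vote
Get-ClusterNode | Format-Table Name, State, NodeWeight
```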
Ardie
Posts: 10
Joined: Mon Jul 30, 2018 3:58 pm

Tue Jul 31, 2018 6:15 pm

Hi Boris,

Thanks again. I will be able to add an additional NIC to each of the servers for the replication interface.

For the witness disk though, does each server not see its own copy of the witness disk since it is replicated? Does this not give us 4 votes? Sorry I'm just not clear on this part.

Thanks.

Edit: Sorry, one last question. If I end up with 2x 1Gbps links and 1x 10Gbps link, should I be putting the Synchronization link on the 10G or the iSCSI on the 10G?
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Tue Jul 31, 2018 7:21 pm

For the witness disk though, does each server not see its own copy of the witness disk since it is replicated? Does this not give us 4 votes? Sorry I'm just not clear on this part.
The witness disk does contain replicated data, but in the cluster it provides only one vote. Only one node can own the witness disk resource at any given time, so it is not 4 votes but 3: one per node plus one for the witness, which goes to whichever node currently owns it.
the Synchronization link on the 10G or the iSCSI on the 10G?
I'd use 10Gbps links for Sync in this case.