Unable to live migrate after converting to StarWind VSAN.

chezro
Posts: 8
Joined: Tue Apr 03, 2018 10:12 pm

Thu Apr 12, 2018 7:20 pm

After adding StarWind to each of my 4 Hyper-V nodes and confirming connections, replication, and MPIO, I've still had no success live migrating after removing the previous cluster shared volumes.

The original CSV was a FreeNAS solution with multiple disks and a disk witness. The new config is the last 3 hard drives in the screenshot:
[Attachment: vsan.png]
I moved all the VMs and their configs to the new CSV and configured a StarWind quorum disk. While the FreeNAS CSVs are still attached and available, I am able to live migrate freely between all 4 nodes, but the moment I remove even one of the original FreeNAS CSVs they lose the ability to live migrate (quick migration still works). I'm working on the assumption that there's some cluster data still on the old CSVs, but I'm not seeing it. I've already set each host to default to the new CSV.
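For reference, this is roughly how I've been hunting for leftover references to the old CSVs. It's only a sketch: it shells out to the Hyper-V and FailoverClusters PowerShell modules from Python, has to be run on each node, and the "freenas" filter is just a placeholder for however your old volumes are labeled.

Code:

import json
import subprocess

def ps(command):
    """Run a PowerShell command and return its stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Which CSVs does the cluster still know about?
print(ps("Get-ClusterSharedVolume | Format-Table Name, State"))

# List every VM's configuration path on this host so stragglers that
# still live on the old FreeNAS CSVs stand out.
raw = ps("Get-VM | Select-Object Name, ConfigurationLocation | ConvertTo-Json").strip() or "[]"
vms = json.loads(raw)
if isinstance(vms, dict):
    vms = [vms]
for vm in vms:
    if "freenas" in vm["ConfigurationLocation"].lower():  # placeholder filter
        print("Still on old storage:", vm["Name"], vm["ConfigurationLocation"])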

The only other factor here is a Veeam installation on another server in that network that I think may be part of the issue, but I don't see how that would affect live migration.

Any ideas?
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Fri Apr 13, 2018 7:15 am

Is the behavior you described specific to all VMs or just some of them?
Do all of your Hyper-V nodes have the same hardware, particularly the CPU? If not, did you enable the checkbox in the VM configuration's CPU section that allows migrating VMs between hosts with different CPUs? (A way to check this in bulk is sketched after these questions.)
Is the update level of the Hyper-V hosts the same?
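If you want to check that compatibility flag on every VM at once rather than clicking through each one, something along these lines should work. It is only a sketch: it shells out to the Hyper-V PowerShell module from Python, it has to run on each host, and a VM must be powered off before the setting can be changed ('MyVM' below is a placeholder name).

Code:

import subprocess

def ps(command):
    """Run a PowerShell command and return its stdout."""
    return subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    ).stdout

# Report the processor compatibility flag for every VM on this host.
print(ps(
    "Get-VM | Get-VMProcessor | "
    "Select-Object VMName, CompatibilityForMigrationEnabled | Format-Table"
))

# To enable it for one VM (the VM must be powered off first);
# 'MyVM' is a placeholder name:
# ps("Set-VMProcessor -VMName 'MyVM' -CompatibilityForMigrationEnabled $true")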
chezro
Posts: 8
Joined: Tue Apr 03, 2018 10:12 pm

Fri Apr 13, 2018 8:34 pm

The update level is now definitely the same, and the hosts are part of an ESX 4-node block. Still having the issue. I am going to try each VM individually and see if they all fail. I will report back.
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Sat Apr 14, 2018 4:01 am

Could you clarify the environment we are talking about? In your first post you mentioned Hyper-V, while in the latest one you speak about ESXi. The screenshot shows a Microsoft Failover Cluster setup, though.
chezro
Posts: 8
Joined: Tue Apr 03, 2018 10:12 pm

Mon Apr 23, 2018 7:42 pm

I meant to clarify that the current cluster is on a Hyper-V block that USED to be ESXi.

I did identify the problem: the ConfigStoreRootPath was pointing at one of the old CSVs. I had to shut down the cluster and modify the registry to get it back.
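For anyone who lands here with the same symptom, a check like the one below would have saved me the digging. A couple of assumptions: 'Virtual Machine Cluster WMI' is the cluster resource that carried the parameter in my case, and the value normally can't be changed while the cluster is up, which is why I ended up editing the registry with the cluster shut down.

Code:

import subprocess

def ps(command):
    """Run a PowerShell command and return its stdout."""
    return subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    ).stdout

# Show where the cluster keeps its VM configuration store root.
# If this still points at an old CSV, live migration breaks as soon
# as that CSV is removed.
print(ps(
    "Get-ClusterResource 'Virtual Machine Cluster WMI' | "
    "Get-ClusterParameter ConfigStoreRootPath"
))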

Now the last question on my list is:

Should I replicate to 3 nodes or is 4 OK? I only ask because I've been having syncing issues that I believe have to do with rebooting without shutting down the StarWind cluster service (it took me a couple of full syncs to figure that out, and I'm still not sure that's what it was). I'm currently in the middle of a full sync and I'm contemplating a setup where 3 nodes replicate synchronously and the 4th replicates asynchronously for backup. I'd have to disjoin the 4th node and use it as a replica server or the like so it doesn't go to waste.

Would you recommend I go back down to three nodes, or am I missing a step here to avoid a full sync every time I reboot a server?
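For what it's worth, here is the reboot order I'm planning to follow to try to avoid the full syncs. It's a sketch under assumptions: 'HV-NODE1' is a placeholder, the StarWind service name ('StarWindService' here) is my guess, and I'm not sure whether a plain service stop is enough or whether StarWind's maintenance mode is the proper way.

Code:

import subprocess

NODE = "HV-NODE1"  # placeholder node name

def ps(command):
    """Run a PowerShell command, raising on failure."""
    subprocess.run(
        ["powershell", "-NoProfile", "-Command", command], check=True
    )

# 1. Drain clustered roles off this node first.
ps(f"Suspend-ClusterNode -Name '{NODE}' -Drain -Wait")

# 2. Stop the StarWind service so replication partners see a clean
#    shutdown instead of a dropped connection (service name assumed).
ps("Stop-Service -Name 'StarWindService'")

# 3. Reboot; after the node comes back, resume it with:
#    Resume-ClusterNode -Name 'HV-NODE1'
ps("Restart-Computer -Force")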
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Wed Apr 25, 2018 9:50 am

The maximum supported number of nodes for an active-active replication scenario is 3. If you are going to use 4 nodes, you might want to introduce grid replication between the nodes or use the 4th node as an async node.