SLES 12.5 ESXi to Hyper-V Migration Issue (UUID Preservation)

VM image converter (VMDK, VHD, VHDX, IMG, RAW, QCOW and QCOW2), P2V migrator


yaroslav (staff)
Staff
Posts: 3084
Joined: Mon Nov 18, 2019 11:11 am

Mon Aug 26, 2024 8:19 am

Please log a call with support@starwind.com, using 1204811 and a link to this thread as your reference.
microchipmatt
Posts: 12
Joined: Sat Apr 24, 2021 11:35 pm

Sat Sep 21, 2024 5:02 am

I have found a solution that works for me. It turns out the Hyper-V modules are not automatically activated in SLES 12.5, especially when coming from ESXi/VMware. This is a ZENworks Appliance VM running on SLES 12.5, but the fundamentals still apply, and I have successfully migrated. Here are the steps, in case they help anyone else:

ZENworks and iPrint Appliance - ESXi to HyperV Migration (Running on Sles 12.5 at its core)

NOTE: The Micro Focus/OpenText ESXi and Hyper-V appliances are ultimately the same image. HOWEVER, only the virtual disk drivers for the platform you deployed on are enabled. So whether you deploy the ESXi image and migrate it to Hyper-V, or deploy the Hyper-V image and migrate it to ESXi, in both scenarios you will find that once you migrate, your appliance will NOT boot.

This is because the virtual disk drivers on both platforms are ONLY enabled for the appliance flavor you chose. If you deploy the ESXi/VMware OVA, you will ONLY have the ESXi/VMware virtual disk drivers enabled. The same goes for the Hyper-V appliance: if you deploy the .vhdx file for Hyper-V, you will ONLY have the Hyper-V virtual drivers available. Either way, the migrated VM will not boot.
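Before touching anything, you can see which paravirtual drivers are currently baked into the appliance's initramfs. This is just a sanity check, assuming the dracut tools (lsinitrd) are present, which they are on SLES 12.5:

# Drivers included in the current initramfs; an ESXi-deployed appliance
# typically shows vmw_pvscsi/vmxnet3 and no hv_* modules
lsinitrd | grep -E 'vmw|hv_'
# Modules loaded by the running kernel
lsmod | grep -E 'vmw|hv'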

Solution: In my scenario, I want to go from ESXi to Hyper-V. It turns out the modules are already installed but NOT activated. So we will:

Enable the Hyper-V modules.

Edit /etc/dracut.conf and CREATE a special config file, /etc/dracut.conf.d/02-hyperv.conf, enabling the drivers in BOTH files (I honestly don't know which conf file activates the drivers, so I loaded them in both).

Rebuild the initramfs with dracut, so that the drivers are available to the Linux kernel when the appliance boots.

Prerequisite:
Shut down your PRODUCTION ESXi/VMware ZCM 2020.2 Appliance.

Use Acronis OR StarWind Migrator to clone or migrate the machine to another ESXi hypervisor. (This is so we are working with a COPY of the VM.)

On the new ESXi instance of the appliance, make sure the virtual NIC is NOT connected at power-on.

Load YaST2 (a shortcut straight to the network module is shown just after this list).

Change the IP address FROM 10.59.XX.XX to some other free IP address (so we are not interfering with the PRODUCTION server; we can always change it back to the real IP once we have it booting).

Once changed, turn off the VM and connect the Virtual NIC

On boot up, you will have internet and LAN connectivity.
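For reference, you can jump straight to the YaST2 network module from a root shell instead of navigating the menus; "lan" is the standard module name on SLES 12:

# Opens the ncurses network configuration module directly
yast2 lan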

Perform the steps below.
1. Install the Hyper-V support modules:
zypper in hyper-v
(They are already installed on the appliance, so just move to step 2; I left this in, in case you are on a version of SLES 12.5 or the appliance where they are not installed.)
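If you want to check whether the package is already present before running zypper, you can query the RPM database; this assumes the SLES package name is hyper-v, as in the command above:

# Prints the installed version, or "package hyper-v is not installed"
rpm -q hyper-v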

2. Enable the Hyper-V support daemons:
systemctl enable hv_vss_daemon.service
systemctl enable hv_fcopy_daemon.service
systemctl enable hv_kvp_daemon.service
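You can confirm all three units are now set to start at boot. Note they will only run successfully once the VM is actually on Hyper-V, since they need the VMBus devices:

# Each should report "enabled"
systemctl is-enabled hv_vss_daemon.service hv_fcopy_daemon.service hv_kvp_daemon.service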

3. Edit /etc/dracut.conf and change the add_drivers line to this (use plain ASCII double quotes; curly "smart" quotes will break dracut's parsing):
add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
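Before rebuilding, it does not hurt to confirm these modules actually exist for the running kernel (they ship with SLES 12.5, but this catches any typo in the names):

# Prints the on-disk path of each module if it exists
modinfo hv_vmbus hv_netvsc hv_storvsc | grep ^filename: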

4. Add a driver load file in /etc/dracut.conf.d (a directory) to load the Hyper-V drivers as well, just in case (the appliance seems to load from these files):
cd /etc/dracut.conf.d
vi 02-hyperv.conf
Hit "i" for insert mode.
Add the line: add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
Save the file with :wq
Make sure there are NO syntax errors (again, plain ASCII quotes only).
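If you would rather not use vi, the same file can be written in one shot from the shell; a minimal alternative using the file name from step 4 (the quoted EOF keeps everything literal):

cat > /etc/dracut.conf.d/02-hyperv.conf <<'EOF'
add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
EOF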
5. Run the following command to rebuild the initramfs:
dracut -f -v
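Once dracut finishes, it is worth verifying that the Hyper-V drivers really landed in the new image. lsinitrd ships with dracut, so it should be available on the appliance:

# Should list hv_vmbus, hv_netvsc and hv_storvsc among the included modules
lsinitrd | grep -i hv_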
6. Now that the drivers are injected, re-migrate or CLONE the machine from ESXi to Hyper-V VM virtual drives (StarWind or Acronis).
7. I would highly suggest bringing up the Machine on a PRIVATE HyperV network switch so that ONLY another Windows VM on the HyperV server can access your server.
8. The machine should boot now. If it does, and as long as it is on the private Hyper-V switch and can only talk to other machines on that switch (LAN access and NO Internet access), change the IP back to the ZCM Appliance's original IP; in our case it is 10.59.XX.XX.
9. Make sure you can ping other devices on the Private Virtual Switch
10. Reboot the ZCM 2020.2 Appliance
11. From the Windows workstation on the same private Hyper-V switch, make sure you can ping the ZCM Appliance at 10.59.XX.XX. If you can, test EVERYTHING now: the web interface, make sure all data is there, make sure it is making an LDAP connection, make sure you can remote-control machines, make sure you can install and roll out bundles, and test MDM functions.
12. If it all looks good, your migration should be complete!
13. Shut down your migrated Hyper-V ZCM 2020.2 Appliance, and change the virtual NIC to your PUBLIC/REAL LAN Hyper-V virtual switch. Bring the server back up. You are live now. TEST. Make sure PXE is working live on the network, as well as imaging :)
yaroslav (staff)
Staff
Posts: 3084
Joined: Mon Nov 18, 2019 11:11 am

Sat Sep 21, 2024 7:28 am

Hi,

Thanks for your update. Drivers are the pitfall where most cross-hypervisor conversions fail. How bad things get depends on the OS.
Thanks for sharing your experience. I believe the method can be scaled beyond your case.