ESXi to Azure: Extremely Long Convert Times

VM image converter (VMDK, VHD, VHDX, IMG, RAW, QCOW and QCOW2), P2V migrator

nathanabshire
Posts: 1
Joined: Mon Feb 21, 2022 8:48 pm

Mon Feb 21, 2022 9:01 pm

Hey there! I recently tried out StarWind V2V for a customer's test migration from ESXi to Azure. Here are the details:

ESXi 6.7.0 Update 3 running on a Dell PowerEdge R620
Migrating to a fresh Azure tenant with no existing VMs (this is a proof-of-concept cloud migration)
The VM has 2 disks, OS and Data. The OS disk is 120 GB (static), the Data disk is 1 TB (static). Both are VMDK format.
The ESXi server has a consistent 500 Mbps upload speed to the Internet.

I originally gave my customer an estimate of 8 hours. In reality, the full migration was on track to take closer to 40 hours. It took so long that only the OS disk managed to copy over to Azure, and that alone took just shy of 8 hours. The migration was cancelled since it didn't meet our time expectations.

Is this typical? And, if not, can someone offer up any advice for where to look in terms of bottlenecks?

A member of my team asked if I could investigate the bandwidth between the ESXi server and the app connection that was put in place for Azure. Perhaps the app connection is the limiting factor? It looks as if my effective upload speed was cut by almost a factor of 10. Going by the raw numbers (rough math in the sketch below), the data disk (1 TB) should upload within 5 hours at 500 Mbps. I ran the migration for the data disk only for about 15 hours, and progress sat at an unsatisfying 49%. (Side note: an estimated-time-remaining indicator in the converter client would have helped me tremendously over this weekend :P)
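
For reference, here's the back-of-the-envelope math I'm working from, as a quick Python sketch. It only uses the sizes and speeds above and ignores protocol overhead and any compression the converter might apply:

```python
# Back-of-the-envelope transfer-time math. Ignores protocol overhead,
# TLS, and any compression the converter might apply.

def hours_to_upload(size_gb: float, link_mbps: float) -> float:
    """Ideal upload time in hours for size_gb (decimal GB) over link_mbps."""
    bits = size_gb * 1e9 * 8                 # GB -> bits
    return bits / (link_mbps * 1e6) / 3600   # seconds -> hours

# Ideal times at the full 500 Mbps uplink
print(f"OS disk (120 GB): {hours_to_upload(120, 500):.1f} h ideal")    # ~0.5 h
print(f"Data disk (1 TB): {hours_to_upload(1000, 500):.1f} h ideal")   # ~4.4 h

# Effective rates I actually observed
os_mbps = 120e9 * 8 / (8 * 3600) / 1e6             # 120 GB in ~8 h   -> ~33 Mbps
data_mbps = 0.49 * 1000e9 * 8 / (15 * 3600) / 1e6  # 49% of 1 TB in ~15 h -> ~73 Mbps
print(f"OS disk effective:   {os_mbps:.0f} Mbps")
print(f"Data disk effective: {data_mbps:.0f} Mbps")
```

Both observed rates come out roughly an order of magnitude below the 500 Mbps link, which is why I suspect something between the ESXi host and Azure.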

Thanks to anyone that can offer up some guidance here!
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Feb 24, 2022 6:23 pm

Greetings,

V2V Converter works in a single thread in its current state, so each conversion runs as a single sequential stream.
Storage might be among the possible bottlenecks; checking the read speed of the source datastore is a good first step (see the sketch below).
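
A minimal way to check is to time a sequential read of the source VMDK. The sketch below assumes you can reach the flat file from a machine with Python 3 (for example, over an NFS mount of the datastore or from a copy of the file); the path is hypothetical, so substitute your own:

```python
import time

# Hypothetical path to the source VMDK flat file; substitute your datastore path.
PATH = "/vmfs/volumes/datastore1/vm/vm-flat.vmdk"
CHUNK = 8 * 1024 * 1024  # 8 MiB reads, roughly comparable to a single-stream copy

read = 0
start = time.monotonic()
with open(PATH, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        read += len(data)
        # Report throughput every ~4 GiB so long runs stay observable
        if read % (512 * CHUNK) == 0:
            mbps = read * 8 / (time.monotonic() - start) / 1e6
            print(f"{read / 1e9:.1f} GB read, {mbps:.0f} Mbps effective")

elapsed = time.monotonic() - start
print(f"Total: {read / 1e9:.1f} GB in {elapsed / 3600:.2f} h "
      f"({read * 8 / elapsed / 1e6:.0f} Mbps)")
```

If the sequential read rate lands well below your 500 Mbps uplink, the source storage is the limiting factor; if it is much higher, the bottleneck is more likely on the upload path to Azure.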