V2V from ESXi SLOW

VM image converter (VMDK, VHD, VHDX, IMG, RAW, QCOW and QCOW2), P2V migrator

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

jshorr@gtimports.net
Posts: 6
Joined: Sat Mar 16, 2024 9:19 pm

Thu Mar 21, 2024 8:50 pm

I have been experiencing very slow transfers (max 5 MB/s) from ESXi to ESXi, ESXi to Hyper-V, and ESXi to Azure. There are no bottlenecks anywhere else in the system or the network; everything flies. I actually see the same performance with VMware Converter. Is there a reason these programs are so slow pulling data from ESXi?

Thank you
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Wed Mar 27, 2024 12:05 am

Hi,

That's a good point.
jshorr@gtimports.net, please reach out to support@starwind.com (use this thread and 1130349 as your references).
Bjorn
Posts: 6
Joined: Mon Apr 08, 2024 8:44 pm

Mon Apr 08, 2024 8:52 pm

I'm using V2V 9.0.1.369 and have also been experiencing extremely slow conversion rates.
In addition, on the most recent conversions it keeps complaining of communication errors with the server (and halting the conversion), yet no other tool shows communication errors. Raising the priority of the task reduced the frequency slightly (from once every 45 minutes to roughly once every 3 hours). A constant ping never shows any interruptions, and network monitoring tools all say everything is OK. So this seems to be some timing issue internal to the tool, made worse by the fact that it fails without retrying.

Also, cancel takes a VERY long time. I told it to cancel the last job after 12 days, with only 18% of 24 TB done. So far the cancel has taken 3 hours and it is still not finished.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Mon Apr 08, 2024 9:43 pm

Thank you for your update.
Could you tell me what the network and storage settings are, and what the conversion scenario is for this system?
Bjorn
Posts: 6
Joined: Mon Apr 08, 2024 8:44 pm

Wed Apr 10, 2024 3:10 pm

yaroslav (staff) wrote:
Mon Apr 08, 2024 9:43 pm
Thank you for your update.
Could you tell me what the network and storage settings are, and what the conversion scenario is for this system?
The network settings are:
DHCP on the PC with Starwind v2v installed.
Static IP on the host which is the source.
Static IP on the host which is the destination.
The PC with V2V installed is a Windows 10 Pro VM running on the "destination host".
All are on the same switch.

Both source and target hosts run ESXi 8 and are standalone (not managed by vSphere).

The conversion scenario is a cold migration (the server is turned off during migration).
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Wed Apr 10, 2024 6:21 pm

Tell me more about the storage on the source and destination (RAID level, disk type), and what is the network bandwidth in between?
Can you please run iperf between the source and destination?
iperf.exe -c 172.16.10.12 -t 150 -i 15
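Here, -c runs iperf as a client against the target host's IP, -t 150 runs the test for 150 seconds, and -i 15 prints an interval report every 15 seconds.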

Also, any VLANs on any network on the source or destination?
Bjorn
Posts: 6
Joined: Mon Apr 08, 2024 8:44 pm

Wed Apr 10, 2024 7:36 pm

yaroslav (staff) wrote:
Wed Apr 10, 2024 6:21 pm
Tell me more about the storage on the source and destination (RAID level, disk type), and what is the network bandwidth in between?
Can you please run iperf between the source and destination?
iperf.exe -c 172.16.10.12 -t 150 -i 15

Also, any VLANs on any network on the source or destination?
Both hosts and the V2V PC are on the same VLAN. The switch they share is also the router (an L3 switch). The switch is capable of 10 GbE, but they are all on dedicated 1 GbE ports.

The storage is RAID 6 + 2 hot spares. The hosts are both current-gen Dell servers using their fastest RAID controller (PERC H730??). The disk type in the source host is NL-SAS with 20+ spindles (so lots of throughput). The disk type in the target is SAS with 24+ spindles.
Disk I/O is very fast, on par with my desktop; other tasks do not show any issues.

iperf returns results of 904 Mbit/s. Speeds were pretty consistent, ranging from 896 to 911 Mbit/s.
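For scale: 904 Mbit/s works out to roughly 113 MB/s of payload, while the cancelled job above (18% of 24 TB in 12 days, about 4.3 TB) averaged only about 4 MB/s, i.e. under 5% of what the link sustains.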

Also worth repeating: raising the V2V process from normal to high priority reduced the issue but did not eliminate it. And when V2V hits an issue it throws an error (popup window), with no means to have it auto-retry 'x' number of times.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Wed Apr 10, 2024 8:17 pm

Thanks for your update.
Also worth repeating: raising the V2V process from normal to high priority reduced the issue but did not eliminate it.
Could you please tell me more about raising the priority? Namely, how and where do you raise it?
Also, what is the RAID level for source and destination?
Last but not least, does anything else utilize that physical link you were converting through?
V2V Converter is single-threaded (i.e., it can be slow), but many things can impact performance.
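If a single stream turns out to be the ceiling, one generic workaround is to run one conversion process per virtual disk in parallel. A purely illustrative Python sketch, assuming a hypothetical converter.exe command line (the actual tool may not expose an equivalent CLI):

import subprocess

# Illustrative only: launch one converter process per source disk, then
# wait for all of them. "converter.exe" and its arguments are placeholders.
disks = ["disk0.vmdk", "disk1.vmdk", "disk2.vmdk"]
procs = [subprocess.Popen(["converter.exe", disk]) for disk in disks]
for proc in procs:
    proc.wait()  # block until every per-disk conversion has finished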
Bjorn
Posts: 6
Joined: Mon Apr 08, 2024 8:44 pm

Thu Apr 11, 2024 3:27 pm

yaroslav (staff) wrote:
Wed Apr 10, 2024 8:17 pm
Thanks for your update.
Also worth repeating: raising the V2V process from normal to high priority reduced the issue but did not eliminate it.
Could you please tell me more about raising the priority? Namely, how and where do you raise it?
Also, what is the RAID level for source and destination?
Last but not least, does anything else utilize that physical link you were converting through?
V2V Converter is single-threaded (i.e., it can be slow), but many things can impact performance.
Raising a task's priority is not secret voodoo:
Open Task Manager.
Select "More details".
Go to the Details tab.
Select the app.
Set the priority from the context menu.
I raised it to HIGH. (A scripted equivalent is sketched below.)
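For anyone who would rather script it than click through Task Manager, here is a minimal sketch using Python's psutil package. The process name is a placeholder, so check the Details tab for the converter's actual executable name:

import psutil

TARGET = "StarV2V.exe"  # placeholder; substitute the converter's real process name

# Raise every matching process to HIGH priority. HIGH_PRIORITY_CLASS is a
# Windows-only constant, and the script may need to run elevated.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET:
        proc.nice(psutil.HIGH_PRIORITY_CLASS)
        print(f"Raised PID {proc.pid} to HIGH priority")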

The RAID level was detailed above: both are RAID 6 + 2 hot spares. Nothing is I/O bound, which should answer what you're trying to sort out.

The physical link was not shared. Each host, source and destination, has 6-12 Cat6 cables going from the server to the switch. The only common link is the switch, which is a high-end HP switch. And as confirmed by iperf, there is sufficient bandwidth and it is stable (the variation from slowest to fastest is less than 10%).

Single-threading was not the issue either, since the PC running the migration was single-tasking (or as close to it as Windows 10 allows).

The issues I wonder about are internal to V2V. I think there is some timing issue, since that is the only thing raising the task priority could have improved.
The program needs configurable retries, including automatically answering YES to any dialog box that might pop up just as I leave for the day; see the sketch below.
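Until something like that exists, a crude stopgap is to drive the conversion from a wrapper that restarts it on failure. A minimal sketch with a hypothetical command line (the real converter invocation would need to be substituted); note it only retries at the process level and cannot answer GUI dialogs:

import subprocess
import time

CMD = ["converter.exe", "--source", "src.vmdk", "--dest", "dst.vhdx"]  # placeholder
MAX_RETRIES = 5

for attempt in range(1, MAX_RETRIES + 1):
    result = subprocess.run(CMD)
    if result.returncode == 0:
        print("Conversion finished")
        break
    print(f"Attempt {attempt} failed (exit code {result.returncode}); retrying in 60 s")
    time.sleep(60)
else:
    print(f"Giving up after {MAX_RETRIES} attempts")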
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Tue Apr 16, 2024 10:49 pm

Greetings,

Thanks for your update. We will look into possible optimizations internally.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Apr 18, 2024 7:31 pm

Could you please let me know to which Azure region you convert?
Bjorn
Posts: 6
Joined: Mon Apr 08, 2024 8:44 pm

Thu Apr 18, 2024 7:51 pm

yaroslav (staff) wrote:
Thu Apr 18, 2024 7:31 pm
Could you please let me know to which Azure region you convert?
I don't know who that was addressed to, but I am not converting to or from Azure in my comments, only to and from ESXi.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Apr 18, 2024 8:18 pm

Bjorn,

According to the original request
I have been experiencing very slow transfers (max 5 MB/s) from ESXi to ESXi, ESXi to Hyper-V, and ESXi to Azure.
Could you please let me know which Azure region you used?
Bjorn
Posts: 6
Joined: Mon Apr 08, 2024 8:44 pm

Fri Apr 19, 2024 8:44 pm

yaroslav (staff) wrote:
Thu Apr 18, 2024 8:18 pm
Bjorn,

According to the original request
I have been experiencing very slow transfers (max 5 MB/s) from ESXi to ESXi, ESXi to Hyper-V, and ESXi to Azure.
Could you please let me know which Azure region you used?
That post was not by me, but by jshorr@gtimports.net.

My use case was only ESXi to ESXi. I do use Azure and plan to add Hyper-V, but I have not used your tool in those settings.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Sat Apr 20, 2024 4:51 pm

Bjorn,

Sorry. Can you please share the conversion logs with me?