super slow performance with free version

Software-based VM-centric and flash-friendly VM storage + free version


jsoled
Posts: 5
Joined: Mon Oct 19, 2009 7:56 pm

Mon Oct 19, 2009 8:01 pm

I just installed this today...

I'm using the free version with VMware ESX 4.0. I have two LUNs that show up fine in VMware. I'm just copying files over, deploying a template to a VM, etc.

I'm getting below 1 MB/sec and can't quite figure out why. CPU and network utilization are in single digits. StarWind is installed on my VirtualCenter (4.0) server, which runs SQL Express but nothing else. That's a physical server running Windows Server 2003 R2 Enterprise with four hard drives in software RAID 5 and one gigabit NIC. The ESX hosts have two gigabit NICs each.

< Part of the message has been removed by Moderator >
Robert (staff)
Posts: 303
Joined: Fri Feb 13, 2009 9:42 am

Thu Oct 22, 2009 8:33 am

Did you try IOmeter? What is the network utilization while read/write operations to the target are in progress? Have you enabled jumbo frame support on your network?

Thanks.
Robert
StarWind Software Inc.
http://www.starwindsoftware.com
jsoled
Posts: 5
Joined: Mon Oct 19, 2009 7:56 pm

Thu Oct 22, 2009 8:44 pm

Network utilization according to Task Manager is around 7 or 8% (gigabit).

This is my home lab, so I don't even know whether jumbo frames are an option for me. The ESX hosts didn't seem to have that setting in the BIOS menu last time I looked. On the Windows box I did turn it on in the driver, but my el cheapo gigabit switch probably doesn't support it. There's a new 16-port one (different manufacturer) en route, so maybe that will help.

I was a bit surprised that this happened. I had another setup that ran entirely inside VMware Workstation (StarWind, the ESX hosts, etc. were all virtual machines) and had no performance issues, and there was definitely no jumbo frame support there.
Robert (staff)
Posts: 303
Joined: Fri Feb 13, 2009 9:42 am

Tue Oct 27, 2009 10:12 am

The VMkernel network is a little different; it can work much faster than even some physical hardware. If possible, please try another network switch to test the network performance and utilization. Many SOHO switches have proven to be bottlenecks when running StarWind over them. For instance, I've got a Linksys WAG16N wireless router at home, and network utilization (even on the LAN ports) never goes higher than 15-20%, even though nothing else is using the network.
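
If you do not have a dedicated tool at hand, a raw TCP copy between the StarWind machine and any other box on the same switch will show how much the switch can really carry, independent of iSCSI. Below is a minimal sketch of such a test (the port number and transfer size are arbitrary values chosen for illustration, not anything StarWind-specific):

Code:
# Minimal raw TCP throughput test, independent of iSCSI/StarWind.
# Run "python nettest.py server" on one box and
# "python nettest.py client <server-ip>" on the other.
# Port and transfer size below are arbitrary test values.
import socket, sys, time

PORT = 5201                 # arbitrary test port
CHUNK = 64 * 1024           # 64 KB send/receive buffer
TOTAL = 512 * 1024 * 1024   # transfer 512 MB in total

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    received, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    secs = time.time() - start
    print("Received %.0f MB in %.1f s = %.1f MB/s"
          % (received / 1e6, secs, received / 1e6 / secs))

def client(host):
    sock = socket.create_connection((host, PORT))
    payload = b"\x00" * CHUNK
    sent, start = 0, time.time()
    while sent < TOTAL:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()
    secs = time.time() - start
    print("Sent %.0f MB in %.1f s = %.1f MB/s"
          % (sent / 1e6, secs, sent / 1e6 / secs))

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])

On a healthy gigabit link this should report roughly 100-115 MB/s; anything far below that points at the switch, NIC, or driver rather than at StarWind.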

Thanks
Robert
StarWind Software Inc.
http://www.starwindsoftware.com
jsoled
Posts: 5
Joined: Mon Oct 19, 2009 7:56 pm

Fri Oct 30, 2009 3:31 pm

I had some more time to mess with this today.

On my two ESX 4.0 servers I added a new vSwitch and set its MTU to 9000. I then added a new VMkernel adapter, assigned it to one of the physical NICs, and set its MTU to 9000 as well. So now I have jumbo frames turned on in ESX and a full gigabit Ethernet adapter dedicated to iSCSI on each host.

I do have a sort of SOHO-type switch, a Netgear GS116, but it says it supports jumbo frames. This is the one I just bought so I'd have more ports. The PC running StarWind has onboard gigabit, and I enabled jumbo frames there too.
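
One thing worth double-checking is that 9000-byte frames actually survive the whole path, since the vSwitch, the physical switch, and the Windows NIC all have to agree. Here is a quick sketch of a don't-fragment ping test run from the StarWind box (the target address is just a placeholder for an ESX VMkernel IP):

Code:
# Quick check that jumbo frames survive the whole path from the
# StarWind/Windows box to an ESX VMkernel IP. Uses the standard
# Windows ping flags: -f = don't fragment, -l = payload size.
# 8972 bytes of ICMP payload + 28 bytes of IP/ICMP header = 9000-byte frame.
# The target address below is a placeholder for your VMkernel IP.
import subprocess

TARGET = "192.168.1.50"    # placeholder: ESX VMkernel iSCSI address

for size in (1472, 8972):  # 1472 fits a standard 1500 MTU, 8972 needs 9000
    result = subprocess.call(
        ["ping", "-f", "-l", str(size), "-n", "2", TARGET])
    status = "OK" if result == 0 else "FAILED (fragmentation needed?)"
    print("payload %d bytes: %s" % (size, status))

On the ESX side, vmkping with its don't-fragment and packet-size options performs the same check in the opposite direction.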

The PC is reporting network utilization below 10%. It's still going at a snail's pace...

I was thinking of getting a server NIC for the PC; maybe that would help? A dual-port one would let me team the NICs. Any suggestions? I hate to spend money on it without knowing.
Robert (staff)
Posts: 303
Joined: Fri Feb 13, 2009 9:42 am

Mon Nov 02, 2009 10:55 am

Creating a NIC teaming or MPIO configuration should help; however, no one can guarantee that.

I would suggest using direct cabling for testing purposes, to rule out the network switch as the source of the bottleneck.

Thanks
Robert
StarWind Software Inc.
http://www.starwindsoftware.com
jsoled
Posts: 5
Joined: Mon Oct 19, 2009 7:56 pm

Mon Nov 02, 2009 11:26 am

What I had in mind was exactly what you described: a dual-port server adapter and a pair of crossover cables wired directly to each ESX host. I will post back in a few weeks once I can get the adapters and run the test.
jsoled
Posts: 5
Joined: Mon Oct 19, 2009 7:56 pm

Tue Dec 01, 2009 12:13 am

I determined that the onboard NIC in my vCenter machine (where StarWind is hosted) was jabbering, and I have disabled it. So now I have the dual-port server adapter teamed and connected to the same switch as the ESX hosts. Without that extra traffic it seems a bit faster than before, but still not acceptable. I did set up jumbo frames on both hosts. It still takes about an hour to deploy a virtual machine from a template.
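
To put "about an hour per template" into perspective, here is a rough back-of-the-envelope calculation; the 15 GB template size is an assumption for illustration, not a figure from this thread:

Code:
# Rough throughput estimate for the template deployment, assuming a
# hypothetical 15 GB template copied in 60 minutes. Gigabit wire speed
# is ~125 MB/s before protocol overhead (~110 MB/s realistic for iSCSI).
template_gb = 15                          # assumed size, not from the thread
seconds = 60 * 60
mb_per_s = template_gb * 1024.0 / seconds
print("average throughput: %.1f MB/s" % mb_per_s)                  # ~4.3 MB/s
print("share of a gigabit link: %.0f%%" % (mb_per_s / 110 * 100))  # ~4%

So even with a generous assumed template size, the copy is using only a few percent of what a single gigabit link can carry.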

I will try to optimize things further, maybe break up the team and do some additional setup in the virtual networking. To simplify troubleshooting I am back down to one VMkernel NIC and one vSwitch that uses both physical adapters. I could reconfigure this to have a separate iSCSI VMkernel NIC and vSwitch, but then my other gigabit switch would become the bottleneck because it's very low end.
TomHelp4IT
Posts: 22
Joined: Wed Jul 29, 2009 9:03 pm

Mon Dec 07, 2009 6:05 pm

Assuming you have another server or desktop on your network, you might want to try adding another StarWind iSCSI target and mounting it on a different Windows system using the Microsoft iSCSI Initiator. Benchmark that with IOmeter or HD Tune and see what the performance is like; if it's much faster, then you know the problem is something to do with your VMware setup. If it's still slow, try running the benchmark utility on your StarWind server, first directly against the physical disk, and then against the iSCSI target mounted locally (it may sound a bit daft, but you'll still be running through StarWind and the TCP/IP stack without any cabling or switches in the way). Between all of that you should be able to isolate where the bottleneck is occurring.
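
If IOmeter or HD Tune isn't handy, even a crude sequential write/read script pointed first at the physical RAID 5 volume and then at the drive letter of the locally mounted iSCSI target will separate "slow disk" from "slow iSCSI/network". A minimal sketch (the path and file size are placeholders):

Code:
# Crude sequential write/read benchmark. Point TEST_FILE first at the
# physical RAID 5 volume on the StarWind box, then at a drive letter
# backed by the mounted iSCSI target, and compare the numbers.
# Path and size below are placeholders, not values from this thread.
import os, time

TEST_FILE = r"D:\bench.tmp"       # placeholder path
SIZE_MB = 1024                    # 1 GB test file
CHUNK = 1024 * 1024               # 1 MB writes/reads

def run():
    block = os.urandom(CHUNK)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())      # make sure the data really hit the disk
    write_secs = time.time() - start

    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while f.read(CHUNK):
            pass
    read_secs = time.time() - start

    os.remove(TEST_FILE)
    print("write: %.1f MB/s  read: %.1f MB/s"
          % (SIZE_MB / write_secs, SIZE_MB / read_secs))

run()

Note that the read figure will be inflated by the Windows file cache, so treat the write number as the more honest one; IOmeter avoids this by doing unbuffered I/O.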
Constantin (staff)

Fri Jan 15, 2010 2:55 pm

Have you fixed your problems?
Also, besides 9K jumbo frames, you need to apply a few registry tweaks. Have you done that?
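
For illustration only (the exact list of tweaks should come from the StarWind documentation), one commonly cited Windows iSCSI tuning value is TcpAckFrequency, which disables delayed ACKs on the interface carrying iSCSI traffic. Here is a sketch of setting it with Python's standard winreg module (run as Administrator; the interface GUID is a placeholder you must replace):

Code:
# Illustrative only: TcpAckFrequency=1 (disable delayed ACKs on the iSCSI
# interface) is one commonly cited Windows tweak for iSCSI latency.
# The exact set of tweaks recommended for StarWind should come from its docs.
# Run as Administrator; the interface GUID below is a placeholder that you
# must replace with the GUID of the NIC carrying iSCSI traffic.
import winreg   # "_winreg" on Python 2

IFACE_GUID = "{00000000-0000-0000-0000-000000000000}"   # placeholder
key_path = (r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
            r"\Interfaces\%s" % IFACE_GUID)

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                    winreg.KEY_SET_VALUE) as key:
    # DWORD TcpAckFrequency = 1 turns off delayed ACKs on this interface.
    winreg.SetValueEx(key, "TcpAckFrequency", 0, winreg.REG_DWORD, 1)

print("TcpAckFrequency set; a reboot is needed for it to take effect.")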