
ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Wed Jan 02, 2013 2:13 pm
by Tijz
Hi all,

I suppose I'm not the only one posting about performance issues. I really did try a search, but couldn't find anything useful.

So, here it goes.

I have an ESXi 5.1 box configured with a software iSCSI adapter (bound to an Intel I340 (82580) NIC).
StarWind is installed on a physical box with Windows 2008 R2 SP1.
I created a standard virtual disk target on a RAID 5 array.

When I copy a VM to the datastore using the vSphere client, I get about 150 Mbps (18 MB/s) to the StarWind server.

I also have another physical box with Windows 2008 R2. Using the MS iSCSI initiator I connected to the same target, formatted it as NTFS, and copied an ISO image. Now I get 800 Mbps!
Using CIFS to the same box (and the same RAID 5 volume) I also get almost 1 Gbps throughput.

Why is vSphere so slow? Is there something basic I forgot?
Switch statistics don't show any errors or collisions. Enabling or disabling jumbo frames doesn't change anything. (I configured jumbo frames on the Windows NICs of the StarWind server, on the switch, and on the vSwitch and vmkernel port.)
I didn't even bother with jumbo frames for the MS iSCSI initiator test, which ran super fast.
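
For reference, this is roughly how the jumbo frame settings can be checked end to end on the ESXi 5.x side. These are host-specific commands, not something from this thread; vSwitch1, vmk1 and the target IP are placeholders for your own names:

```shell
# List current MTU on standard vSwitches and vmkernel interfaces
esxcli network vswitch standard list
esxcli network ip interface list

# Set MTU 9000 on the iSCSI vSwitch and vmkernel port (names are placeholders)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify the whole path passes jumbo frames without fragmenting:
# 8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header
vmkping -d -s 8972 192.168.0.10
```

If the vmkping with the don't-fragment flag fails while a plain vmkping works, some hop in the path is still at MTU 1500.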

Any ideas? Thanks in advance.

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Wed Jan 02, 2013 2:25 pm
by anton (staff)
Sounds like a misconfiguration issue. Start with tuning the network parameters on both sides using this guide:

http://www.starwindsoftware.com/forums/ ... t2296.html

Make sure TCP to and from the Windows box is doing close to wire speed.

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Wed Jan 02, 2013 2:40 pm
by Tijz
Thanks, of course I found that sticky post.

But it mostly offers tweaks for the Windows side of the configuration, and as I mentioned, I get great performance with an initiator other than my ESX box.
So, I might be a bit stubborn, but I don't seem to have a problem with the configuration on the StarWind box.

The sticky post doesn't mention any specific configuration for ESX.
I don't use NIC teaming (not using it on the Windows box either) and I have bound the vmkernel port to the iSCSI storage adapter. Also, without binding any vmkernel port I get the same poor performance.


From a Windows VM on the same ESX box, I get great performance using CIFS writing to the StarWind box.
Although I'm using a different VLAN and vNic for CIFS, the same switch is used.

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Wed Jan 02, 2013 2:45 pm
by anton (staff)
Both sides should be tweaked and put into an appropriate state (jumbo frames enabled on both sides and other settings matched).

You should not use NIC teaming. You should be using MPIO.
Tijz wrote:Thanks, of course I found that sticky post.

But it mostly offers tweaks for the Windows side of the configuration, and as I mentioned, I get great performance with an initiator other than my ESX box.
So, I might be a bit stubborn, but I don't seem to have a problem with the configuration on the StarWind box.

The sticky post doesn't mention any specific configuration for ESX.
I don't use NIC teaming (not using it on the Windows box either) and I have bound the vmkernel port to the iSCSI storage adapter. Also, without binding any vmkernel port I get the same poor performance.


From a Windows VM on the same ESX box, I get great performance using CIFS writing to the StarWind box.
Although I'm using a different VLAN and vNic for CIFS, the same switch is used.
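
For reference, the MPIO setup would look roughly like this on ESXi 5.x. The adapter, vmkernel port and device names below are placeholders, not values from this thread:

```shell
# Bind two vmkernel ports to the software iSCSI adapter (names are placeholders)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# List devices to find the StarWind LUN's naa identifier
esxcli storage nmp device list

# Set the path selection policy for that LUN to round robin
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR
```

Round robin only helps once there is more than one path; with a single NIC bound, MPIO is configured but effectively idle.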

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Thu Jan 03, 2013 6:50 am
by KenLoveless
This is one of my pet peeves with VMware. Your virtual disks are kinda "trapped" in the VMware server. If you use the client to copy a VMDK file to the Windows computer that the client is installed on, you will see the 18 MB/s connection speed too. This is a limitation of VMware! When using the vSphere client you are using service console resources, which are very limited. If you use a product that moves data using the VCB features of VMware, you will not run into this issue.

Another issue that I fought for quite a long time was controller cache. Unless you have a good amount of cache on your controller AND it is backed by a battery, your ESXi performance will suck! ESXi (and ESX) need cache to function well, and the cache will be ignored unless it is BBWC!

Offering this up as my experience, but would be interested to hear any comments to confirm or refute it.

Ken

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Thu Jan 03, 2013 7:25 am
by anton (staff)
We'll provide VMware friendly VSA configurator soon to bypass all of these issues.
KenLoveless wrote:This is one of my pet peeves with VMware. Your virtual disks are kinda "trapped" in the VMware server. If you use the client to copy a VMDK file to the Windows computer that the client is installed on, you will see the 18 MB/s connection speed too. This is a limitation of VMware! When using the vSphere client you are using service console resources, which are very limited. If you use a product that moves data using the VCB features of VMware, you will not run into this issue.

Another issue that I fought for quite a long time was controller cache. Unless you have a good amount of cache on your controller AND it is backed by a battery, your ESXi performance will suck! ESXi (and ESX) need cache to function well, and the cache will be ignored unless it is BBWC!

Offering this up as my experience, but would be interested to hear any comments to confirm or refute it.

Ken

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Thu Jan 03, 2013 9:27 am
by Tijz
KenLoveless wrote:This is one of my pet peeves with VMware. Your virtual disks are kinda "trapped" in the VMware server. If you use the client to copy a VMDK file to the Windows computer that the client is installed on, you will see the 18 MB/s connection speed too. This is a limitation of VMware! When using the vSphere client you are using service console resources, which are very limited. If you use a product that moves data using the VCB features of VMware, you will not run into this issue.

Another issue that I fought for quite a long time was controller cache. Unless you have a good amount of cache on your controller AND it is backed by a battery, your ESXi performance will suck! ESXi (and ESX) need cache to function well, and the cache will be ignored unless it is BBWC!

Offering this up as my experience, but would be interested to hear any comments to confirm or refute it.

Ken
Thanks for your insight.

But sadly it does not seem to apply to my problem.

Your post led me to try the following:
create a new VMDK on the iSCSI LUN and mount it in a Windows virtual machine.
I let the virtual machine copy a 2 GB ISO to this new volume. Still, I'm getting only about 160 Mbps throughput.
Reading from the volume is not much better.

Of course I don't have a BBWC, so that might be a problem. However, when doing big sequential reads or writes, like I am, cache matters much less.
It also seems implausible: the ESX host itself has no BBWC either; it's just plain old SATA disks without any RAID configured.
Writing to those I have no performance issue whatsoever.

Any other ideas? Keep in mind, I am getting great performance to the StarWind server when using the MS iSCSI initiator, so the problem seems to be with the VMware config.


EDIT:
Also, we are using many QNAPs and other NAS appliances with iSCSI in other environments, none of which have BBWC. We get about 500 to 600 Mbps of throughput to those boxes.

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Thu Jan 03, 2013 10:30 am
by anton (staff)
Did you apply the changes to the ESXi box following the link I gave? What TCP numbers do you get with NTttcp and iperf?
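
For example, the raw TCP path could be measured with something like this (iperf 2 syntax; the IP and window size are just illustrative placeholders):

```shell
# On the StarWind (Windows) box -- server side, 256 KB TCP window
iperf -s -w 256k

# From a client on the iSCSI network: 30-second run, 4 parallel streams
iperf -c 192.168.0.10 -w 256k -t 30 -P 4
```

If these numbers are well below wire speed, the problem is in the network stack, not in iSCSI.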

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Thu Jan 03, 2013 10:38 am
by Tijz
anton (staff) wrote:Did you apply the changes to the ESXi box following the link I gave? What TCP numbers do you get with NTttcp and iperf?

What changes? I've bound my vmkernel port to the iSCSI software adapter.
I have one vNic per host for iSCSI, so I cannot configure teaming.

Surely I should be able to get more than 150 Mbps of throughput over the one 1 Gbps port? Using the Windows iSCSI initiator I get almost 900 Mbps.
I never had this problem with NAS appliances like QNAP or Thecus. (However, this environment has never had a QNAP or Thecus, so I don't know for sure that it would perform well with one.)

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Thu Jan 03, 2013 11:00 am
by anton (staff)
1) Did you follow the link I gave? The second part, about ESXi client configuration, and not the Windows server part?

2) You DO NOT need to team NICs for iSCSI traffic, as MPIO is the way to go.

3) QNAP and Thecus are on the slow side; you should really have both TCP and iSCSI doing wire speed.
Tijz wrote:
anton (staff) wrote:Did you apply the changes to the ESXi box following the link I gave? What TCP numbers do you get with NTttcp and iperf?

What changes? I've bound my vmkernel port to the iSCSI software adapter.
I have one vNic per host for iSCSI, so I cannot configure teaming.

Surely I should be able to get more than 150 Mbps of throughput over the one 1 Gbps port? Using the Windows iSCSI initiator I get almost 900 Mbps.
I never had this problem with NAS appliances like QNAP or Thecus. (However, this environment has never had a QNAP or Thecus, so I don't know for sure that it would perform well with one.)

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Thu Jan 03, 2013 4:30 pm
by Tijz
Hi Anton,

Thanks for your replies, but it's not clear to me what you mean.
1) Did you follow the link I gave? The second part, about ESXi client configuration, and not the Windows server part?
I followed this link:
http://www.starwindsoftware.com/forums/ ... t2296.html

The only section that has anything to do with VMware is the first section. I have one vNic, connected to a vSwitch. On that vSwitch is a vmkernel port. This vmkernel port is bound to the VMware software iSCSI adapter.
2) You DO NOT need to team NICs for iSCSI traffic, as MPIO is the way to go.
Ok, good. I'm using MPIO; however, I only have one path.
3) QNAP and Thecus are on the slow side; you should really have both TCP and iSCSI doing wire speed.
I do get wire speed using Thecus and QNAP, that's what I wrote. No problems using Thecus and QNAP.
StarWind gives me 150 Mbps (about 15% of wire speed) when using VMware as the initiator.
When using Windows as the initiator I DO get wire speed to the StarWind.

I'm at a loss as to why I get such poor throughput to the StarWind from VMware.

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Thu Jan 03, 2013 4:39 pm
by Tijz
By the way, when using iperf I get 300 Mbps from a VM to the StarWind server using default settings (8 KB window size).
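
That ~300 Mbps figure is roughly what a single TCP stream can do with an 8 KB window: throughput is capped at the window size divided by the round-trip time. A rough sketch of the arithmetic, assuming an illustrative LAN RTT of about 0.2 ms (an assumption, not a value measured in this setup):

```python
def max_tcp_throughput_mbps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound for one TCP stream: window size divided by round-trip time."""
    return window_bytes * 8 / rtt_seconds / 1e6

# Illustrative 0.2 ms LAN round-trip time (an assumption, not measured here)
RTT = 0.0002

print(max_tcp_throughput_mbps(8 * 1024, RTT))   # 8 KB default window -> 327.68 Mbps
print(max_tcp_throughput_mbps(64 * 1024, RTT))  # 64 KB window -> link speed becomes the limit on 1 Gbps
```

So with an 8 KB window, ~300 Mbps is about the expected ceiling regardless of disk or initiator; a larger window (or multiple streams) is needed before the 1 Gbps wire becomes the bottleneck.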

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Wed Jan 09, 2013 12:37 am
by philmee95
You have 2 NICs on the VMware host though, right? VMware reserves the network for management if only 1 NIC is present.

Re: ESXi 5.1 performance poor, MS iSCSI initiator fast

Posted: Wed Jan 09, 2013 8:31 am
by Tijz
Yes, I've got a dedicated NIC for storage. (I've got 4 actually, but until I get this resolved I keep testing with one.)


EDIT:
Yesterday I installed a second ESX host, sadly with the same hardware as the other one.
This one is not in production yet (the other one is hosting production VMs on local storage), so I installed a different version of ESX:
ESXi 5.0 Update 2.
But I get exactly the same results.

I also configured NFS storage on the StarWind machine, using the MS NFS service. Writing to this I get just 20 Mbps throughput.

This can hardly be VMware's fault. Might it be hardware related? Sadly I cannot test this, as I only have those two servers.
Both are Supermicro with Intel I340 network adapters.
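
For reference, the NFS comparison datastore can be mounted from the ESXi shell roughly like this (the host IP, export path and volume name are placeholders, not values from this thread):

```shell
# Mount the StarWind box's NFS export as a datastore for comparison testing
esxcli storage nfs add --host=192.168.0.10 --share=/nfs_export --volume-name=nfs_test
esxcli storage nfs list
```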