Tijz wrote: Thanks, of course I found this sticky post.
But it mostly offers tweaks for the Windows side of the configuration, and as I mentioned, when I use an initiator other than my ESX box I get great performance.
So I might be a bit stubborn, but I don't seem to have a problem with the configuration on the StarWind box.
The sticky post doesn't mention any specific configuration for ESX.
I don't use NIC teaming (not using it on the Windows box either), and I have bound the vmkernel NIC (vNIC) to the software iSCSI adapter. Also, without binding any vNIC I get the same poor performance.
From a Windows VM on the same ESX box, I get great performance writing to the StarWind box over CIFS.
Although I'm using a different VLAN and vNIC for CIFS, the same switch is used.
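A minimal sketch of how the vmkernel port binding described above can be confirmed from the ESXi shell, assuming ESXi 5.x or later, a software iSCSI adapter named vmhba33, and vmk1 as the bound port (both names are examples, adjust to the host in question):

    esxcli iscsi adapter list                                      (find the software adapter's vmhba name)
    esxcli iscsi networkportal list --adapter=vmhba33              (show which vmkernel ports are bound to it)
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1    (bind an additional vmkernel port)

On ESX(i) 4.x the same information comes from the older syntax, e.g. esxcli swiscsi nic list -d vmhba33.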
KenLoveless wrote: This is one of my pet peeves with VMware. Your virtual disks are kinda "trapped" in the VMware server. If you use the vSphere client to copy a VMDK file to the Windows computer the client is installed on, you will see that 18MB/s connection speed too. This is a limitation of VMware! When using the vSphere client you are going through service console resources, which are very limited. If you use a product that moves data via VMware's VCB features, you will not run into this issue.
Another issue that I fought for quite a long time was controller cache. Unless you have a good amount of cache on your controller AND it is backed by a battery, your ESXi performance will suck! ESXi (and ESX) need cache to perform well, and the write cache will be ignored unless it is battery-backed (BBWC).
Offering this up as my experience, but I would be interested to hear any comments that confirm or refute it.
Ken
Thanks for your insight, Ken.
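On the point about virtual disks being "trapped": one way to keep a VMDK copy on the host's own data path, instead of pulling it through the vSphere client, is vmkfstools. A minimal sketch with hypothetical datastore and file names:

    vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk /vmfs/volumes/datastore2/vm1/vm1-copy.vmdk            (clone the disk host-side)
    vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk /vmfs/volumes/datastore2/vm1/vm1-thin.vmdk -d thin    (same, but thin-provisioned)

This sidesteps the management/service-console bottleneck Ken describes, though it only helps for copies between datastores the host itself can see.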
anton (staff) wrote: Did you apply the changes to the ESXi box following the link I gave? What TCP numbers do you get with NTttcp and iperf?
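For reference, the raw TCP numbers asked for here are usually gathered with a listener on the StarWind box and a sender on the test client. The address 192.168.10.1 below stands in for the StarWind box's iSCSI IP, and the option values are only examples (NTttcp's -m triple is sessions,cpu,receiver-IP; exact options vary a little between NTttcp versions):

    On the StarWind (Windows) box:
        ntttcp.exe -r -m 4,*,192.168.10.1
        iperf -s
    On the test client:
        ntttcp.exe -s -m 4,*,192.168.10.1
        iperf -c 192.168.10.1 -t 30 -P 4

On a healthy dedicated gigabit link both tools should report on the order of 900+ Mbit/s; anything far below that points at the network or TCP settings rather than at the iSCSI target itself.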
Tijz wrote: anton (staff) wrote: Did you apply the changes to the ESXi box following the link I gave? What TCP numbers do you get with NTttcp and iperf?
What changes? I've bound my vmkernel port to the iSCSI software adapter.
I have one vNIC per host for iSCSI, so I cannot configure teaming.
Surely I should be able to get more than 150Mbps of throughput over the one 1Gbps port? Using a Windows iSCSI initiator I get almost 900Mbps.
I never had this problem with NAS appliances like QNAP or Thecus (although this environment has never had a QNAP or Thecus, so I can't say for sure it would perform well with them here).
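One more sanity check that fits here, since 150Mbps is far below what a single gigabit port should deliver: the negotiated speed/duplex of the physical uplink and the vSwitch it backs can be listed from the ESXi shell (standard ESXi commands; the NIC and vSwitch names on a given host will differ):

    esxcfg-nics -l        (negotiated speed and duplex per vmnic)
    esxcfg-vswitch -l     (which vmnic carries the iSCSI vSwitch and port group)

A mis-negotiated uplink, or the iSCSI port group sitting on an unexpected vmnic, would show up immediately in this output.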
Tijz wrote:
anton (staff) wrote: 1) Did you follow the link I gave? The second part, about ESXi client configuration, not the Windows server side?
I followed that link.
anton (staff) wrote: 2) You DO NOT need to team NICs for iSCSI traffic; MPIO is the way to go.
OK, good. I'm using MPIO; however, I only have one path.
anton (staff) wrote: 3) QNAP and Thecus are on the slow side; you should really have both TCP and iSCSI doing wire speed.
I do get wire speed with Thecus and QNAP, that's what I wrote. No problems with Thecus and QNAP.
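Since MPIO with a single path comes up, a short sketch of how the paths and the path selection policy can be inspected on ESXi 5.x or later (the naa identifier below is hypothetical):

    esxcli storage core path list                                            (every path to every LUN)
    esxcli storage nmp device list                                           (current path selection policy per device)
    esxcli storage nmp device set --device naa.6000... --psp VMW_PSP_RR      (switch a device to round robin once a second path exists)

With only one path, MPIO works but cannot add bandwidth or redundancy; round robin only starts to matter once a second vmkernel port and path are bound.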