Fixed path faster than round robin
Posted: Fri May 11, 2012 3:11 pm
Hi,
I have just started testing StarWind and I am seeing quite strange results at the moment.
I have ESXi 5.0 installed on a Dell R710. It has multiple NICs configured with jumbo frames, going through an Avaya/Nortel switch stack on a separate VLAN from all normal data traffic.
ESXi is configured with two VMkernel ports linked to one vSwitch for iSCSI, and I have done the port binding to link each vmk to a separate NIC, as per best practice. I have an EqualLogic array using round robin successfully, and it saturates these two connections at around the 200 MB/s mark.
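For completeness, the port binding was done roughly like this on the ESXi command line (the adapter and vmk names are placeholders for my actual setup):

```shell
# Bind each iSCSI VMkernel port to the software iSCSI adapter
# (vmhba33, vmk1 and vmk2 are placeholders for the actual names)
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Verify the bindings
esxcli iscsi networkportal list --adapter vmhba33
```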
My StarWind server runs Windows 2008 R2 with a quad-port Intel NIC configured for jumbo frames. I configured a LUN and served it up to my ESXi server. At this point I had not yet set the round robin path policy, and I created a new hard disk on this LUN in a test VM. My IOMeter results showed it was saturating the connection at around 117 MB/s (so all good).

I then went to the VMware command line and changed my round robin policy to switch paths every 8800 bytes and retested. My IOMeter results were around the 40 MB/s mark. Strange. So I thought I would look up StarWind's recommendations on round robin and saw you recommend an IOPS value of 1 (or 2 or 5). I changed my path policy for the LUN on ESXi and retested. Again, poor performance.
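For reference, the path-policy changes I made were along these lines (naa.xxxx is a placeholder for the actual device ID):

```shell
# Set the LUN's path selection policy to round robin
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR

# First test: switch paths every 8800 bytes
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxx --type bytes --bytes 8800

# Second test: switch paths every 1 I/O, per StarWind's recommendation
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxx --type iops --iops 1
```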
At this point I started to wonder whether this is due to the delayed ACK issue, or an issue with a setting on my Intel NICs.
Please help!
Many thanks,
Paul