Using Multipathing to improve ESX-Starwind bandwidth

peekay
Posts: 27
Joined: Sun Aug 09, 2009 2:38 am

Tue Sep 01, 2009 9:05 pm

UPDATED - FINALLY Added configuration diagram!!!
Upgraded to ESX4 U1 and all works the same.
Upgraded to StarWind 5.2 without problems.


I have read many great posts on using multipathing to improve bandwidth between an ESX server and a Starwind server. Here is our setup:

ESX 3.5 U4 server = Dell R900 w/12 cores, 64GB RAM, 2x73GB SAS HD in RAID 1, 8xE1000 type NIC ports, QLogic QLA4062 dual iSCSI HBA
Starwind Server = Supermicro System w/E5530 Quad core, 12GB RAM, 13x300GB 10K SAS drives, Adaptec SAS controller, 6xE1000 type NIC ports
LAN Switch (for iSCSI traffic only) = Netgear GS724T smartswitch

The QLogic iSCSI HBA helps with iSCSI offloading on the Dell server (versus using the ESX software iSCSI initiator). We were looking to improve the iSCSI connection bandwidth between ESX and StarWind by having more than one 1 Gbps connection without teaming the NICs, which ESX does not support. Ideally, we wanted each channel of the QLogic to have a dedicated 1 Gbps "port" on the StarWind server. Here is what we did:

StarWind server:
Using two of the Intel NICs, we set up each NIC on a different subnet:
NIC 1: IP=192.168.0.100, MASK=255.255.255.0 (no gateway)
NIC 2: IP=192.168.1.100, MASK=255.255.255.0 (no gateway)

Any devices configured on Starwind will be "presented" on both those IPs.
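
For reference, the Windows side can also be scripted; something like the following netsh commands should set the same static addresses (the connection names are just examples - use whatever your NICs are called in Network Connections):

:: Two dedicated iSCSI subnets on the StarWind box, no default gateway.
netsh interface ip set address name="iSCSI-A" static 192.168.0.100 255.255.255.0
netsh interface ip set address name="iSCSI-B" static 192.168.1.100 255.255.255.0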

ESX 3.5 U4 server:
We configured each channel of the QLogic card (or, if you use it, the ESX software iSCSI initiator) on matching subnets:
QLogic Channel 1: IP=192.168.0.10, MASK=255.255.255.0 (no gateway)
QLogic Channel 2: IP=192.168.1.10, MASK=255.255.255.0 (no gateway)
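
The QLogic channel IPs are set in the HBA BIOS or through the VI Client. For anyone going the software iSCSI route instead, a rough sketch of the equivalent service console setup would look like this (the vSwitch, port group and vmnic names are only examples):

# One vSwitch + VMkernel port per iSCSI subnet (adjust names/vmnics to your host).
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A iSCSI-A vSwitch1
esxcfg-vmknic -a -i 192.168.0.10 -n 255.255.255.0 iSCSI-A

esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A iSCSI-B vSwitch2
esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 iSCSI-B

# Enable the software iSCSI initiator (ESX 3.5 also needs a Service Console
# connection that can reach the targets).
esxcfg-swiscsi -e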

The two channels of the two servers were connected to the Netgear switch. When both QLogic storage adapters were rescanned, the same StarWind targets appeared on both channels. In this configuration multipathing WILL NOT WORK!! This is because the two subnets are not physically separated on the Netgear switch. This was also confirmed by the esxcfg-mpath -l command on the ESX server, which showed no additional paths.

In order to fix this, we created two port-based VLANs on the switch, with each NIC subnet on a separate VLAN. (This could also be done using two LAN switches.) As soon as the subnets were separated, the multipathing kicked in. Now, the esxcfg-mpath -l command on the ESX server indicated two paths for each device on the StarWind server. To complete the configuration, we just needed to set the path "preference" for each attached LUN.
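
For reference, this is the quick check from the service console; after the VLAN split, every StarWind device should report two paths, one per channel (the IPs below are ours - adjust to yours):

# Software iSCSI case: confirm the VMkernel can reach both target IPs.
vmkping 192.168.0.100
vmkping 192.168.1.100

# List LUNs and their paths; each StarWind device should now show two
# paths (one through each vmhba channel) instead of one.
esxcfg-mpath -l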

The preference was set by selecting the LUN in the VMware Infrastructure Client (Configuration > Storage), opening its Properties and clicking the "Manage Paths" button. There, the two paths for that device were shown, with one marked as "preferred". We have four devices configured, so for two of them we selected one path as "preferred", while for the other two devices we selected the other path. Each path represents a different channel/NIC, as set up before. So now traffic for each device/LUN is split over the two paths, which essentially spreads the load over two 1 Gbps channels. Voila!
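
If you prefer the service console, the same thing should be possible with esxcfg-mpath on ESX 3.5 - this is a sketch only, with placeholder vmhba coordinates (take the real ones from the esxcfg-mpath -l output); on ESX 4 the equivalents moved under esxcli nmp, so check the help there:

# Pin one LUN to the first channel...
esxcfg-mpath --policy=fixed --lun=vmhba2:0:1
esxcfg-mpath --preferred --path=vmhba2:0:1 --lun=vmhba2:0:1

# ...and another LUN to the second channel, so the load is split.
esxcfg-mpath --policy=fixed --lun=vmhba2:0:2
esxcfg-mpath --preferred --path=vmhba3:0:2 --lun=vmhba2:0:2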

I was too busy to include screenshots, but if anyone needs them, let me know and I will find a way to get them in.
Attachments
iSCSI multipathing.jpg - System configuration diagram
Last edited by peekay on Wed Feb 03, 2010 3:26 pm, edited 9 times in total.
Robert (staff)
Posts: 303
Joined: Fri Feb 13, 2009 9:42 am

Wed Sep 02, 2009 12:35 pm

Peekay,

Thank you for a detailed description of the resolution!

If you find time to upload the screenshots, that would be great! It is possible on this forum - just use the "Upload attachment" tab.

Thanks.
Robert
StarWind Software Inc.
http://www.starwindsoftware.com