Using MPIO and NIC Teaming

bzabber
Posts: 5
Joined: Tue Jun 23, 2009 9:14 pm

Tue Jun 23, 2009 9:27 pm

Hi folks,

I would like to use NIC teaming on my iSCSI target, and because the MS iSCSI Initiator doesn't support NIC teaming, I would like to use MPIO on the initiator side to aggregate my links and provide some fault tolerance. Is this mixed setup supported and recommended, or should I use MPIO throughout?

The iSCSI target will hold a LUN dedicated to CSV (Cluster Shared Volumes) and a LUN for the cluster quorum.

I will have three servers requiring connectivity to the target: two will be clustered Hyper-V hosts running Windows Server 2008 R2, and the third will be my SCVMM R2 server, also running Windows Server 2008 R2 (all Enterprise Edition).

Originally we were going to team all of the iSCSI NICs, but we found out that you can't team the NICs that the MS iSCSI Initiator 2.0.8 software will be using, so I opted for MPIO on the initiator side. That then made me wonder whether I would have to adjust my architecture and use MPIO on the iSCSI target too.
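For what it's worth, the rough sequence I'm planning on each initiator looks something like this (the IP and IQN are just placeholders for my lab values, and I believe the MPIO feature name on R2 is MultipathIo; Server Manager works too). The second session over the other NIC I'll probably add through the Initiator GUI, since tying a session to a specific source port with iscsicli is fiddly:

    rem Add the MPIO feature (Server 2008 R2), then let the Microsoft DSM claim iSCSI disks
    dism /online /enable-feature /featurename:MultipathIo
    mpclaim -n -i -d "MSFT2005iSCSIBusType_0x9"

    rem Point the initiator at the StarWind portal and do a quick login to the CSV target
    iscsicli QAddTargetPortal 10.0.0.10
    iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:sanserver-csv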

Thoughts? :)

I'll be sure to post something with my experience in using the Starwind software on Windows Server 2008 R2 when we finally get to that point. :)


Bryan.
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Wed Jun 24, 2009 11:41 am

I have been using a combination of NIC teaming and MPIO, although I would prefer not to.

My set up:

Hyper-V nodes (Windows 2008 Datacenter)
1x 10GbE, 2x 1GbE, all Intel running ProSet drivers

Starwind server:
2x 10GbE, Intel ProSet

Switches: two switches, each with 24x 1GbE and 4x 10GbE ports. Each Hyper-V node is connected to both switches: 10GbE to one switch and 1GbE to each. The switches are connected to the Starwind server by 10GbE, and to each other by 6x 1GbE arranged as an LACP team.

On each Hyper-V node, the 10GbE is used as the primary MPIO path for each iSCSI target. The two 1GbE connections are teamed together using Adapter Fault Tolerance, and this team is what Hyper-V is using as its virtual switch. The parent partition's virtual NIC on that switch is used for the other MPIO paths. So basically I've got MPIO sitting on top of a team.

Each iSCSI target needs four paths defined to work, so this is very cumbersome to set up and manage, especially when you have to have one iSCSI target for every VHD, as I do. I'm really looking forward to switching to CSV!
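Scripting the initial logins rather than clicking through the Initiator UI for every target takes a little of the pain out. Roughly something like this from a .cmd file (the portal addresses and IQNs here are made up, and the extra MPIO sessions per target still have to be added with the full LoginTarget syntax or via the GUI):

    rem Register both portals once, then quick-login each per-VHD target in a loop
    iscsicli QAddTargetPortal 192.168.10.1
    iscsicli QAddTargetPortal 192.168.20.1
    for %%T in (iqn.2008-08.com.starwindsoftware:san1-vm01 iqn.2008-08.com.starwindsoftware:san1-vm02) do iscsicli QLoginTarget %%T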

I'm not 100% sure about the stability of this setup, as I'm not sure that the iSCSI initiator likes using a Hyper-V virtual NIC instead of a direct connection. If I had more NICs, I would use two just for MPIO/iSCSI, two as a team for Hyper-V VMs, and another as a management interface.

The iSCSI initiator in R2 seems to have been improved a bit (at least the UI looks different), so your experience may be different. I'm only just beginning to evaluate R2 as Intel have been a little late with drivers...
Robert (staff)
Posts: 303
Joined: Fri Feb 13, 2009 9:42 am

Thu Jun 25, 2009 12:02 pm

Just my $0.02

The new MS iSCSI Initiator has not really changed except for the GUI (the tabs and buttons have different names); everything else is basically the same.

Thanks
Robert
StarWind Software Inc.
http://www.starwindsoftware.com
bzabber
Posts: 5
Joined: Tue Jun 23, 2009 9:14 pm

Thu Jun 25, 2009 4:40 pm

My setup:

Starwind server:
HP DL380 G5, connected to an MSA20
2x 1GBE Intel NICs
OS: Win2K8 R2 RC Enterprise Ed. (Full)

I will have two Hyper-V hosts; each is an HP DL380 G5 with the same type of NICs, two of which will be used for iSCSI traffic.

The setup is small and won't be used for production workloads, so keeping the KISS rule in mind I was leaning towards NIC teaming on the Starwind server and MPIO on the Hyper-V hosts, since MS' iSCSI Initiator doesn't support teaming on NICs used for iSCSI traffic. But I thought that, from an ongoing admin standpoint, supporting that mixed setup would be more prone to error and confusion, which is why I posted my original question. If it's MPIO throughout, then everyone can keep things straight, and troubleshooting any issues would be easier because we'd only have to worry about the iSCSI initiator setup on both ends, the Starwind software, and the physical connections. If this setup works fine, we'd reuse the approach as we move towards using Starwind and iSCSI for more critical workloads.

The project I'm on is breaking new ground, as we're the first to implement W2K8, iSCSI, and Hyper-V, so I'd like to get a good set of best practices out of this that we can reuse down the road.

We don't have enough NICs to do teaming, and seeing as it's for development purposes we're not too concerned about it, but from what I've read there's no issue with having one set of NICs teamed (for management, for instance) and another pair configured for MPIO, as long as there's no overlap. I was just concerned with whether the MS iSCSI Initiator or the Starwind software would have issues with a mixed setup: NIC teaming on the iSCSI target server for the storage connections, and MPIO on the initiators connecting to that set of teamed NICs. I'll rationalize it all by using MPIO throughout now.
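Once it's cabled up I'll double-check which addresses each session actually lands on, so nothing sneaks over the teamed management pair. As far as I understand the tools, something like this should show it:

    rem List the iSCSI sessions along with the initiator/target portal addresses in use
    iscsicli SessionList

    rem Show which disks the Microsoft DSM has claimed and their load-balance policy
    mpclaim -s -d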

Thanks Aitor_Ibarra for your input.

Bryan
Robert (staff)
Posts: 303
Joined: Fri Feb 13, 2009 9:42 am

Fri Jun 26, 2009 12:37 pm

StarWind itself does not really depend on whether the NICs are teamed or not, as it works over the existing Ethernet layer. If you manage to team the NICs, StarWind and StarPort will work on top of that.
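A quick way to double-check the target side, whichever way the NICs end up (teamed or not), is to confirm the service is listening on the address you expect; 3260 is the default iSCSI port:

    rem On the StarWind server: the address listed should be the team's (or the NIC's) IP
    netstat -an | findstr :3260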

Thanks
Robert
StarWind Software Inc.
http://www.starwindsoftware.com