EtherChannel vs Round Robin

EinsteinTaylor
Posts: 22
Joined: Thu Aug 30, 2012 12:28 am

Wed Mar 13, 2013 3:07 am

The title is pretty self-explanatory. With a multi-NIC ESX host and a multi-NIC StarWind HA setup, will I get the best performance with Round Robin or with an EtherChannel? Right now in RR mode I'm getting gigabit wire speed (around 120MB/s). I'm wondering, if I went with EtherChannel, would I be able to get that up close to 2 gigabit?

If Round Robin is best, which NIC teaming policy should I be using? Right now it is set to "Route based on IP hash". I know that is supposed to be used only in EtherChannel mode, so I thought the experts here could help.

Thanks!
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Mar 13, 2013 4:58 am

1) Do not use NIC teaming with iSCSI. Not with StarWind, not with anyone else on Earth. Stick with MPIO in a Round Robin config (a minimal esxcli sketch is at the end of this post).

2) You should be getting ~200MB/sec in a single direction with 2xGbE uplinks (each GbE link carries roughly 100-120MB/s of iSCSI traffic), so there's an issue somewhere (disk subsystem performance?).

What do you get over one uplink only (with the second disabled)?
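
A minimal sketch of the esxcli side of that, assuming ESXi 5.x; the naa.* device ID below is a placeholder, so check your own LUN IDs first:

    # Show devices and their current path selection policy
    esxcli storage nmp device list

    # Switch a StarWind LUN to Round Robin (placeholder device ID)
    esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
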
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

EinsteinTaylor
Posts: 22
Joined: Thu Aug 30, 2012 12:28 am

Wed Mar 13, 2013 3:29 pm

I may not have been very clear before, so let me back up. No teaming/EtherChannel/bonding is in place at this time. I currently have 3 ESX hosts, each with 4 NICs running jumbo frames (MTU 9000). Those go through a stack of Cisco 2960S gigabit switches, also with jumbo frames enabled. The StarWind system I am trying to deploy currently has 2 x gigabit NICs dedicated to the iSCSI target, 1 NIC for management, and 1 NIC directly connected to the HA partner for sync. The 3 non-management NICs have jumbo frames enabled.
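
To rule out an MTU mismatch somewhere along that path, a vmkping from the ESXi shell with fragmentation disabled should confirm jumbo frames end to end (the IP below is just a placeholder for one of the StarWind iSCSI target addresses):

    # 8972 bytes of payload + 28 bytes of ICMP/IP headers = 9000; -d forbids fragmentation
    vmkping -d -s 8972 192.168.10.10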

Currently in ESX I am using Round Robin and experimenting with different IOPS settings. The "NIC Teaming" policy on the vSwitch is set to "Route based on IP hash" (I think this is wrong).
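
For reference, this is the kind of knob I'm playing with; the default is to switch paths every 1000 I/Os, and the device ID below is just a placeholder:

    # Make Round Robin switch paths after every I/O instead of every 1000
    esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1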

I'm not getting anywhere near the speeds I should be. I know it's not an underlying storage issue, because if I run ATTO directly on one of the StarWind servers I get reads and writes in excess of 3GB/s, so the array is more than capable of handling what's being thrown at it.

As you have seen in the other thread you and I are discussing, I am not (and will not be) using caching in StarWind.
starczek
Posts: 13
Joined: Mon Mar 05, 2012 1:30 pm

Fri Mar 15, 2013 10:17 am

I have the RR policy over two NICs and I'm able to saturate both network cards' bandwidth (except on Linux, but that's another story and the problem is not related to StarWind). My storage is a Dell PERC H700 (512MB cache) with RAID 50 (8 x HDDs).
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Fri Mar 15, 2013 5:26 pm

Well, that is really interesting. It looks like we need to do some benchmarking. Here is the document that should be helpful here:
http://www.starwindsoftware.com/starwin ... t-practice
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com