Any Experience with Chelsio 10GbE Adapters?

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

MarkH
Posts: 5
Joined: Wed Feb 03, 2010 9:14 pm

Wed Feb 03, 2010 9:28 pm

I'm trying to get a pair of Chelsio N320E-CXA adapters to work for iSCSI between a Windows Server 2008 R2 box running the StarWind 5.0 target and a vSphere 4.0 host. Before installing the 10GbE adapters, I verified that a regular GigE connection between the two boxes worked correctly -- the StarWind target "sees" the vSphere host and vice versa, and I'm able to create and use VMFS file systems over the iSCSI connection. I installed the Chelsio adapters and their associated drivers on the Windows server and the VMware host. The drivers seemed to install with no issues, and both systems see the adapters. I'm able to configure the IP address for the adapter on the Windows box, and to add a VMkernel port and configure its IP address on the VMware host. I used an address of 10.0.4.1 for the Windows host and 10.0.4.2 for the VMware host, both with netmasks of 255.255.255.0. I'm able to ping from the Windows host to the VMware host, and to vmkping from the VMware host to the Windows host, using the respective addresses above.
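Just to lay the addressing out in one place, here's a small Python sketch (purely illustrative, not something from my actual setup) that confirms both endpoints land in the same /24:

Code:
    # Sanity check that the two iSCSI endpoints sit on the same subnet.
    # The addresses and netmask are the ones from the setup described above.
    import ipaddress

    STARWIND_HOST = "10.0.4.1"   # Windows box running the StarWind target
    ESX_VMKERNEL = "10.0.4.2"    # VMkernel port on the vSphere host
    NETMASK = "255.255.255.0"

    subnet = ipaddress.ip_network(f"{STARWIND_HOST}/{NETMASK}", strict=False)
    for name, addr in (("StarWind target", STARWIND_HOST), ("ESX VMkernel", ESX_VMKERNEL)):
        ok = ipaddress.ip_address(addr) in subnet
        print(f"{name} {addr}: {'same subnet' if ok else 'NOT in ' + str(subnet)}")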

Setting up iSCSI, however, isn't working. I follow the standard procedure for setting up a target and configuring the iSCSI software initiator on the VMware host. Whether I use dynamic discovery or set a static target on the VMware iSCSI adapter, though, a rescan doesn't find the StarWind iSCSI target. I'm a bit baffled as to what to do next.

Does anybody have any suggestions, or any experience at all with Chelsio adapters? (I'm a bit concerned about Chelsio itself; nobody answers the phone at their headquarters, not even a receptionist.)

-M.
Constantin (staff)

Thu Feb 04, 2010 10:02 am

Are ports 3260 and 3261 open?
Go to Hosts - Configuration - Network and check whether both NICs are available there.
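If you want a quick way to check the ports from another machine on that subnet, a rough sketch like this will do (illustrative only; 3260 is the standard iSCSI port, 3261 is the one StarWind management uses):

Code:
    # Check whether the StarWind box answers on the iSCSI and management ports.
    import socket

    TARGET = "10.0.4.1"    # the StarWind box from the post above
    PORTS = (3260, 3261)   # iSCSI data port and StarWind management port

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(3)
            err = s.connect_ex((TARGET, port))      # 0 means the TCP handshake succeeded
            status = "open" if err == 0 else f"unreachable (errno {err})"
            print(f"{TARGET}:{port} -> {status}")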
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Fri Feb 05, 2010 11:50 am

MarkH wrote:Does anybody have any suggestions, or any experience at all with Chelsio adapters? (I'm a bit concerned about Chelsio itself; nobody answers the phone at their headquarters, not even a receptionist.)
You mean there's a 10GbE NIC maker with *worse* support than Intel?! I remember it was impossible to get beta ProSet drivers for Windows 2008 R2 until just before RC...
Constantin (staff)

Mon Feb 22, 2010 9:51 am

Do you have any updates?
MarkH
Posts: 5
Joined: Wed Feb 03, 2010 9:14 pm

Fri Mar 12, 2010 12:47 am

I got the adapters working (it was indeed a firewall issue -- I'd forgotten that Windows Server 2008 has separate sets of firewall rules for domain/private/public networks, and my iSCSI traffic was running on a private network, not the domain network that I'd opened up. Duh.)
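For anyone who hits the same thing, something along these lines (run elevated on the Windows box) opens the iSCSI port on the private profile specifically. It's just a sketch -- the rule name is made up, and doing it through the firewall GUI works just as well:

Code:
    # Sketch: allow inbound TCP 3260 on the *private* firewall profile,
    # which is the profile my 10GbE subnet landed on. Run as administrator.
    import subprocess

    subprocess.run(
        [
            "netsh", "advfirewall", "firewall", "add", "rule",
            "name=StarWind iSCSI 3260",
            "dir=in", "action=allow", "protocol=TCP",
            "localport=3260", "profile=private",
        ],
        check=True,
    )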

The performance of the adapters for iSCSI was terrible -- maybe 10% better than regular 1GbE. At a pure TCP level the adapters got roughly 6.5 Gb/sec of throughput (as measured by iperf between the storage server and the VMware virtual machine). When using iSCSI (either "directly" from the MS iSCSI initiator inside the VM, or "indirectly" via the VMware software initiator), the performance dropped to 1.1-1.2 Gb/sec. I fiddled with a bunch of settings, but nothing made much of a difference. When I finally got through to Chelsio support, they were useless. The engineer I talked to promised to get back to me on several issues but never called back. She then ignored my emails and didn't return several voicemails I left her over the course of a week or so. One thing I found sort of amusing: after several days of unsuccessfully trying to get them to respond, I noticed on my LinkedIn account that a "VP from Chelsio" had viewed my profile. :-)
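The 6.5 Gb/sec figure came from iperf; if you don't have iperf handy, a crude Python stand-in for that raw TCP test looks roughly like the sketch below. Bear in mind that a single-threaded Python sender will bottleneck well below what iperf reports, so treat it as a path sanity check, not a benchmark:

Code:
    # Crude one-way TCP throughput check between the storage box and a VM.
    # Run "server" on one side and "client <server_ip>" on the other.
    import socket, sys, time

    PORT, CHUNK, SECONDS = 5201, 1 << 20, 10   # arbitrary port, 1 MiB chunks, 10 s run

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, addr = srv.accept()
            with conn:
                total, start = 0, time.time()
                while (data := conn.recv(CHUNK)):   # read until the client closes
                    total += len(data)
                gbps = total * 8 / (time.time() - start) / 1e9
                print(f"received {gbps:.2f} Gb/s from {addr[0]}")

    def client(host):
        payload = b"\x00" * CHUNK
        with socket.create_connection((host, PORT)) as conn:
            deadline = time.time() + SECONDS
            while time.time() < deadline:
                conn.sendall(payload)

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])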

Eventually, I returned the adapters to the supplier who was kind enough to give me a full refund.

-M.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Sat Mar 13, 2010 8:16 pm

You've basically confirmed our own tests here (unfortunately I can't name the iSCSI offload engines, for obvious reasons). So it makes more sense to grab a comparably cheap Intel or Neterion 10 GbE adapter, and the "delta" money (the difference in price between it and an "iSCSI offload" card) should be invested in a few extra gigabytes of RAM or the next step up in multi-core CPU. The combined solution should run performance circles around any kind of "hardware accelerator" -- which is going to be outdated within the next 6-12 months in any case :)
MarkH wrote:I got the adapters working (it was indeed a firewall issue -- I'd forgotten that Windows Server 2008 has separate sets of firewall rules for domain/private/public networks, and my iSCSI traffic was running on a private network, not the domain network that I'd opened up. Duh.)

The performance of the adapters for iSCSI was terrible -- maybe 10% better than regular 1GbE. At a pure TCP level the adapters got roughly 6.5 Gb/sec of throughput (as measured by iperf between the storage server and the VMware virtual machine). When using iSCSI (either "directly" from the MS iSCSI initiator inside the VM, or "indirectly" via the VMware software initiator), the performance dropped to 1.1-1.2 Gb/sec. I fiddled with a bunch of settings, but nothing made much of a difference. When I finally got through to Chelsio support, they were useless. The engineer I talked to promised to get back to me on several issues but never called back. She then ignored my emails and didn't return several voicemails I left her over the course of a week or so. One thing I found sort of amusing: after several days of unsuccessfully trying to get them to respond, I noticed on my LinkedIn account that a "VP from Chelsio" had viewed my profile. :-)

Eventually, I returned the adapters to the supplier who was kind enough to give me a full refund.

-M.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

s.price
Posts: 5
Joined: Tue Apr 27, 2010 1:21 pm

Tue Apr 27, 2010 4:37 pm

I have had a little experience with 10GbE and VMware, but not with Chelsio. In the process of researching an upgrade to vSphere and 10GbE, I discovered there is very little information on 10GbE anywhere -- lots of sales material, but hardly any real-world functional information. It took a few weeks, but I was able to have some long conference calls with actual engineers at two manufacturers. According to both engineers, with VMware they have not seen the throughput of 10GbE cards break 4 Gb/s without the NetQueue or VMDq function being enabled. This function has to be manually enabled in ESX 3.5 but is supposed to be enabled automatically in 4.0. The catch is that not all 10GbE cards support it. With NetQueue enabled, throughput tops 9 Gb/s, and it hovers around 9.9 Gb/s with jumbo frames enabled. Just note that it takes a lot of work to get it all together.
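The jumbo-frame part is easy to sanity-check on paper: per-frame overhead on the wire is fixed, so larger frames spend more of the 10 Gb/s on payload. A rough back-of-the-envelope calculation (standard Ethernet, IP, and TCP header sizes, no TCP options -- just an illustration):

Code:
    # Back-of-the-envelope wire efficiency for standard vs jumbo frames on 10GbE.
    # Per-frame Ethernet overhead: preamble 8 + header 14 + FCS 4 + inter-frame gap 12 = 38 bytes.
    # Per-packet IP (20) and TCP (20) headers come out of the MTU.
    LINE_RATE_GBPS = 10.0

    def tcp_goodput(mtu):
        wire_bytes = mtu + 38          # what each frame costs on the wire
        payload = mtu - 20 - 20        # TCP payload carried per frame
        return LINE_RATE_GBPS * payload / wire_bytes

    for mtu in (1500, 9000):
        print(f"MTU {mtu}: ~{tcp_goodput(mtu):.2f} Gb/s of TCP payload at line rate")
    # MTU 1500: ~9.49 Gb/s, MTU 9000: ~9.91 Gb/s -- which lines up with the
    # ~9.9 Gb/s figure quoted above for jumbo frames.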

Side note: the SR-IOV standard will provide an even higher level of VM performance, and again, not all 10GbE cards support it.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Apr 28, 2010 9:25 pm

10 GbE shines under really heavy load, so building multiplexing nodes and using 10 GbE for a backbone is OK. But replacing 1 GbE with 10 GbE and expecting a 10x performance gain... it's not going to happen :) It didn't even work that way when moving from 100 Mb to 1 GbE. "Sad but true" (c) you all know who
s.price wrote:I have had a little experience with 10GbE and VMware, but not with Chelsio. In the process of researching an upgrade to vSphere and 10GbE, I discovered there is very little information on 10GbE anywhere -- lots of sales material, but hardly any real-world functional information. It took a few weeks, but I was able to have some long conference calls with actual engineers at two manufacturers. According to both engineers, with VMware they have not seen the throughput of 10GbE cards break 4 Gb/s without the NetQueue or VMDq function being enabled. This function has to be manually enabled in ESX 3.5 but is supposed to be enabled automatically in 4.0. The catch is that not all 10GbE cards support it. With NetQueue enabled, throughput tops 9 Gb/s, and it hovers around 9.9 Gb/s with jumbo frames enabled. Just note that it takes a lot of work to get it all together.

Side note: the SR-IOV standard will provide an even higher level of VM performance, and again, not all 10GbE cards support it.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
