Performance issues :(

Software-based VM-centric and flash-friendly VM storage + free version


stup
Posts: 5
Joined: Wed Nov 18, 2009 8:17 pm

Sun Mar 14, 2010 11:45 pm

Running StarWind 5.2 build 20100211 on an HP ML115 G5 with Windows Server 2008 R2 x64 and 8 GB RAM. NIC is an HP NC382T.

ESXi 4.0 server connected to the StarWind server. The ESX host is also an ML115 G5 with an HP NC382T NIC, connected to the StarWind server via a Cat 6 crossover cable.

VM running on ESX: Windows Server 2008 R2 x64 with 2 GB RAM.

iperf between the VM and the StarWind server shows the following:

C:\>iperf -c 192.168.42.254 -P 2
------------------------------------------------------------
Client connecting to 192.168.42.254, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[164] local 192.168.42.2 port 49331 connected with 192.168.42.254 port 5001
[156] local 192.168.42.2 port 49330 connected with 192.168.42.254 port 5001
[ ID] Interval Transfer Bandwidth
[156] 0.0-10.0 sec 534 MBytes 447 Mbits/sec
[164] 0.0-10.0 sec 533 MBytes 446 Mbits/sec
[SUM] 0.0-10.0 sec 1.04 GBytes 893 Mbits/sec
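
For reference, the 8.00 KByte TCP window shown above is just the iperf default and can cap throughput on its own. A re-run with a larger window and a longer interval would show whether the window itself is the limit - the window size and duration here are only example values:

C:\>iperf -c 192.168.42.254 -P 2 -w 256k -t 30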

HD Tune Pro run on the StarWind server against a local SATA II drive gives ~150 MB/s.

If I present an iSCSI LUN (an image file on a single SATA II drive) to the ESX server (using the ESX software initiator) and then attach it to the VM, HD Tune Pro in the VM gets ~30 MB/s.

If I detach this LUN from the ESX server and connect it to the VM directly via the MS iSCSI initiator, I get the same result from HD Tune Pro, ~30 MB/s.

If I perform the same tests with a StarWind RAM disk LUN (4 GB), I get ~50 MB/s in both cases.


I've applied TCP tweaks and tried jumbo frames both on and off, but there's no noticeable difference.
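
By TCP tweaks I mean the usual netsh global settings on both 2008 R2 boxes, roughly along these lines (the values shown are just examples):

C:\>netsh int tcp show global
C:\>netsh int tcp set global autotuninglevel=normal
C:\>netsh int tcp set global chimney=disabled
C:\>netsh int tcp set global rss=enabled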


Surely I should be getting better speeds?

Cheers for any help!
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Mon Mar 15, 2010 11:19 am

First of all, your iperf results are pretty low. You should get at least 950-980 megabits in a single direction. At 440 megabits your network can push only about 55 megabytes per second, so with added latency (and the HD Tune tools do SEQUENTIAL reads and writes; requests don't overlap, so the combined network and disk I/O subsystems cannot work in pipeline mode) your 30 megabytes per second looks pretty close to the truth. First I'd push the network: if you don't get wire speed even with a 1500-byte MTU, you cannot proceed further. Then replace the purely sequential test tool with something like Intel Iometer or SQLIO (a synthetic test closest to your real workload) and run the tests... For now I'd play with the network hardware, the NIC drivers and their settings. As a starting point.
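
Once the network is sorted, SQLIO runs against the iSCSI disk could look roughly like this - the thread count, queue depth, block sizes and test file path are only illustrative and should be adjusted to match the real workload (write first so the file contains real data, then read):

C:\>fsutil file createnew E:\testfile.dat 4294967296
C:\>sqlio -kW -t4 -s60 -o4 -fsequential -b64 -LS -BN E:\testfile.dat
C:\>sqlio -kR -t4 -s60 -o8 -frandom -b8 -LS -BN E:\testfile.dat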

P.S. You need to understand a very basic AXIOM: it's not a StarWind problem, it's a configuration issue. As of today StarWind holds the "fastest iSCSI target on Earth" crown, as it was StarWind that Intel and Microsoft used to demo 1M IOPS over iSCSI. No other target comes even close. See the link below. And that was the 4.x branch; 5.x has 15-20% lower latency and also a very effective reworked write-back cache.

http://download.intel.com/support/netwo ... scsiwp.pdf
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

stup
Posts: 5
Joined: Wed Nov 18, 2009 8:17 pm

Mon Mar 15, 2010 4:47 pm

Cheers, Anton

TBH I'm not sure what else I can tweak. I'm running server-grade NICs, and there's no switch involved as it's a Cat 6 crossover. ESXi is the latest build, and the StarWind server is running the latest HP drivers for the NICs.
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Mon Mar 15, 2010 4:58 pm

For Gigabit Ethernet you shouldn't need an actual crossover cable - a normal straight-through one should work fine.
Another thing you could try is a basic Windows file share and test the performance of that. If you hit the same speed barrier with a large sequential file, that would further point to the network being the issue - maybe try older drivers?
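
A quick and dirty way to do that: create a big file on the StarWind box, share the folder, and pull it from the VM with robocopy, which prints the transfer speed at the end. The share name and paths below are only examples (and a file made with fsutil is all zeros, which is fine for checking raw network throughput):

C:\>fsutil file createnew C:\sharetest\big.bin 4294967296
C:\>robocopy \\192.168.42.254\sharetest C:\temp big.bin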

With modern CPUs and good underlying storage it's very easy to get wire-speed performance out of StarWind; maxing out 1 Gbit/sec Ethernet is easy and 10 Gbit/sec isn't particularly difficult either. When there's a performance problem it's usually elsewhere: networking, firewalls, disk subsystem, drivers. I've never had a performance problem that was actually due to StarWind (caveat: I've only ever connected using the MS iSCSI initiator, so I have no experience of ESX's initiator).

I hadn't read the whitepaper but had watched the associated webcast, and I was curious to know which target Intel had used. Even in the whitepaper, the mention of StarWind (and WinTarget, aka the MS WSS iSCSI target) is pretty buried. In fact, as Intel were making such a big thing of the CRC features in the Nehalem Xeons, I was assuming the target wasn't StarWind, as I've never been able to successfully connect to a StarWind target if the CRC feature is enabled in the MS initiator.

Anyway, the Intel paper has been seen as a validation of iSCSI as enterprise-ready, so it's a shame that the use of StarWind wasn't made a bit more obvious. I guess it's down to your marketing guys to remedy that.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Mon Mar 15, 2010 5:03 pm

See above. If his TCP is doing 440 megabits there's no chance of seeing big numbers (( I'd experiment with different drivers and the MTU size on both ends, and start with a RAM disk as the baseline for performance tuning.
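
To check whether jumbo frames actually work end to end once MTU 9000 is set on both NICs, a non-fragmenting ping with a large payload tells you straight away (8972 bytes = 9000-byte MTU minus 28 bytes of IP/ICMP headers; use -l 1472 for the standard 1500-byte MTU):

C:\>ping -f -l 8972 192.168.42.254

If it comes back with "Packet needs to be fragmented but DF set." then jumbo frames are not enabled somewhere along the path.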

The StarWind game has its own rules, but there's no voodoo magic here - iSCSI running over TCP cannot run faster than TCP itself.

Intel was pushing their own solution; they did not care much about the other members of the clan :))
Aitor_Ibarra wrote:For Gigabit Ethernet you shouldn't need an actual crossover cable - a normal straight-through one should work fine.
Another thing you could try is a basic Windows file share and test the performance of that. If you hit the same speed barrier with a large sequential file, that would further point to the network being the issue - maybe try older drivers?

With modern CPUs and good underlying storage it's very easy to get wire-speed performance out of StarWind; maxing out 1 Gbit/sec Ethernet is easy and 10 Gbit/sec isn't particularly difficult either. When there's a performance problem it's usually elsewhere: networking, firewalls, disk subsystem, drivers. I've never had a performance problem that was actually due to StarWind (caveat: I've only ever connected using the MS iSCSI initiator, so I have no experience of ESX's initiator).

I hadn't read the whitepaper but had watched the associated webcast, and I was curious to know which target Intel had used. Even in the whitepaper, the mention of StarWind (and WinTarget, aka the MS WSS iSCSI target) is pretty buried. In fact, as Intel were making such a big thing of the CRC features in the Nehalem Xeons, I was assuming the target wasn't StarWind, as I've never been able to successfully connect to a StarWind target if the CRC feature is enabled in the MS initiator.

Anyway, the Intel paper has been seen as a validation of iSCSI as enterprise-ready, so it's a shame that the use of StarWind wasn't made a bit more obvious. I guess it's down to your marketing guys to remedy that.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
