Poor speed so far on new servers...

Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Thu Oct 08, 2009 2:16 pm

Quick reality check on my production network...
starwind_hv_network.gif (attached screenshot of network utilisation)
My setup is very different from yours, and my quick test isn't very scientific, but anyway...

Starwind 4.2 is running on Windows 2008 R2 Standard. This is a Hyper-V VM with just one core of a quad-core 2.2GHz Nehalem Xeon and 1GB RAM. The box has on-board LSI SAS2 RAID and an Areca 1680ix with 4GB of cache. It also has Intel 10GbE (dual port, but only one port used in the above test). Hyper-V R2 now supports jumbo frames within VMs, which makes it practical to run Starwind inside a VM.

To get the above, all I did was run some disk-intensive tasks in a couple of VMs running on other Hyper-V nodes. These are also hooked up with 10GbE. The client VMs run off VHDs, which sit on CSV volumes that are actually Starwind targets. I only used two VMs in the test; I could do more, but I don't want to overly disrupt my customers, whose VHDs are held on the same physical drives.

Anyway...

VM 1 backed itself up. This was NT Backup and essentially meant reading off a VHD->CSV->Starwind IMG->Hyper-V pass-through disk->Areca RAID->SAS 10K drive RAID1, and writing to a VHD->CSV->Starwind disk bridge->Hyper-V pass-through disk->LSI RAID->SATA 5.4K drive.

VM 2 ran the ATTO disk benchmark. This was reading and writing to VHD->CSV->Starwind IMG->Hyper-V pass-through disk->Areca RAID->SAS 10K drive RAID1, but to a different pair of physical drives than VM 1.

Both VMs were running Windows 2003 R2.

As you can see, I hit about 17% of 10GbE, which is roughly 170MB/sec. I've done better speeds when working against an Intel SSD or a RAM disk. The ATTO benchmark reported a max read of about 220MB/sec and a max write of 180MB/sec, but I trust the network % figure more than ATTO.
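As a rough sanity check on that conversion (my own rule of thumb, not anything official), allowing about ten bits on the wire per payload byte for Ethernet/IP/TCP/iSCSI overhead gets you from a utilisation percentage to usable MB/sec; the little helper below is purely illustrative:

# Rough conversion from 10GbE link utilisation to usable throughput.
# Assumes ~10 wire bits per payload byte once Ethernet/IP/TCP/iSCSI
# overhead is counted (a rule of thumb, not a measurement).

LINK_BITS_PER_SEC = 10e9        # nominal 10GbE line rate
BITS_PER_PAYLOAD_BYTE = 10      # assumed overhead factor

def utilisation_to_mb_per_sec(utilisation_pct):
    bits = LINK_BITS_PER_SEC * utilisation_pct / 100.0
    return bits / BITS_PER_PAYLOAD_BYTE / 1e6

print(utilisation_to_mb_per_sec(17))   # ~170 MB/sec, matching the graph above
print(utilisation_to_mb_per_sec(50))   # ~500 MB/sec, around the SSD peaks mentioned further down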

I haven't tuned this setup in any way beyond enabling jumbo frames. I've even got non-iSCSI traffic running over the same VLAN (unavoidable, as my Hyper-V nodes don't have many network ports).

I have no idea what's causing you to max out at much lower speeds. There are so many variables (network hardware, hard drives, RAID cards, driver versions, iSCSI initiators, virtualisation platforms, etc.) that I don't think you can directly compare my results to yours, but it's probably safe to say that the bottleneck isn't Starwind (particularly as you said you got similar results with the MS iSCSI target).

cheers,

Aitor
mrgimper
Posts: 4
Joined: Thu Oct 08, 2009 11:18 am

Thu Oct 08, 2009 7:02 pm

The OP says they tried the MS iSCSI initiator (to rule out ESX's iSCSI initiator), not the MS iSCSI target. Therefore the problem could still be StarWind.
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Thu Oct 08, 2009 7:21 pm

mrgimper wrote:The OP says they tried the MS iSCSI initiator (to rule out ESX's iSCSI initiator), not the MS iSCSI target. Therefore the problem could still be StarWind.
I suppose you can't rule out Starwind completely, but the OP did say that the MS target was giving similar results (a few posts down):
adns wrote:I've done some additional testing and a few things that I thought were noteworthy:

When I run two simultaneous hard drive benchmarks from 2 separate VMs, both tests run slower rather than my NIC utilization increasing. I can't get past 25% utilization (except with a RAM drive).

I installed the Microsoft iSCSI target 3.2 and tested on that target and I'm getting roughly the same throughput results as the StarWind target.
mrgimper
Posts: 4
Joined: Thu Oct 08, 2009 11:18 am

Thu Oct 08, 2009 7:36 pm

Aitor_Ibarra wrote:I suppose you can't rule out Starwind completely, but the OP did say that the MS target was giving similar results (a few posts down)...
My apologies, I thought the MS iSCSI target could only be installed on 2008 Storage Server...
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Thu Oct 08, 2009 7:46 pm

Officially, that is true. The only way to license it is to buy it from a server OEM, bundled with Storage Server on appropriate hardware. That rules it out for me. I have to license everything Microsoft via SPLA, and if it's not available on SPLA then I'm not allowed to use it, although maybe the situation is different for storage appliances, or for XP Embedded (which is used by LaCie and EMC, at opposite ends of the price spectrum!).

If you work for Microsoft you can get it, and I think you can download a trial of Storage Server + iSCSI target...
adns
Posts: 23
Joined: Thu Sep 24, 2009 2:08 pm
Location: Asheville NC

Thu Oct 08, 2009 7:50 pm

My testing continues...

I gave in and ordered two Intel 10GbE cards and they'll be here tomorrow. In the meantime, I decided to take out the HP Smart Array P410 with 512MB of battery-backed cache and put in an Adaptec 2405, which has 128MB of cache with no battery (and is about half the price!), and then built a new array and partition. I ran HD Tune locally and got about 425MB/sec (up from 95MB/sec on the HP controller)! So I proceeded to serve up a target via StarWind, and now I am pulling about 50MB/sec (up from 30MB/sec) with NIC utilization around 50%. Then, remembering that I had got faster speeds via NFS, I ran the same test on an NFS share and got the same speed as before (60MB/sec).

So, in summary, the cheaper Adaptec controller improved local speed (about 4.5x) and iSCSI speed (about 1.7x), but NFS stayed the same. One of the other members mentioned slowness on an HP as well; perhaps they are using an HP Smart Array RAID card? So I'm a bit confused, but headed in the right direction... I think! :? I'll test everything again on 10GbE tomorrow and report my results.
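Just to show where those multipliers come from, they are simple ratios of the figures quoted above, nothing more:

# Improvement factors implied by the numbers quoted above.
local_before, local_after = 95, 425      # MB/sec, HP P410 vs Adaptec 2405 (local HD Tune)
iscsi_before, iscsi_after = 30, 50       # MB/sec over gigabit iSCSI

print(local_after / local_before)        # ~4.5x local improvement
print(iscsi_after / iscsi_before)        # ~1.7x iSCSI improvement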

One question that is a bit off-topic. Does StarWind recommend any RAID controllers that consistently work better than others?

And to clarify, yes I did install the MS iSCSI target at one point as we have a testing license for Storage Server 2008.
mrgimper
Posts: 4
Joined: Thu Oct 08, 2009 11:18 am

Sat Oct 10, 2009 12:39 pm

< Blatant advertisement post removed by Moderator, user banned forever >
adns
Posts: 23
Joined: Thu Sep 24, 2009 2:08 pm
Location: Asheville NC

Sat Oct 10, 2009 9:26 pm

Before I go any further, I want to better understand HD Tune's Benchmark test, because one part of HD Tune that I haven't talked a lot about is the File Benchmark test. This test has also been steadily improving as I tweak away, and some of the numbers are surely of interest. There's not much in the manual, but here's what it says about what the Benchmark test does (it consists of 4 tests, but my focus has been on the Transfer Rate portion):
Transfer rate test: The data transfer rate is measured across the entire disk (0% to 100% of the disk's capacity). The unit of measurement is MB/s (1 MB = 1024 KB = 1,048,576 bytes).
Access Time: The average access time is measured and displayed in milliseconds (ms). The measured access times are shown on the graph as the yellow dots.
Burst Rate: The burst rate is the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system.
CPU Usage: The CPU usage shows how much CPU time (in %) the system needs to read data from the hard disk.
And here is what it says about the File Benchmark test:
The file benchmark measures the performance for reading and writing files to the selected hard disk partition with different block sizes ranging from 0.5 KB to 8192 KB (x-axis). The length of the test files can be set. For accurate results a large size is recommended. If the file length is too small the hard disk may be able to cache the entire file. In that case the cache speed will be measured instead of the hard disk throughput.
I'm no disk expert, so bear with me on this, but it seems that the Transfer Rate test is a low-level raw-throughput test of the disk that tests each sector(?), whereas the File Benchmark test exercises the higher layer dealing with blocks (or files?). (Does an OSI-style model exist for storage like it does for networking? Because I'm making that parallel in my mind here!) If I'm off on this assessment, please correct me!
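To illustrate what a file benchmark with varying block sizes is actually doing, here is a toy sketch of the idea (not HD Tune's actual method; the target path and sizes are made up). It writes a test file at each block size and reports MB/sec; as the manual warns, if the file is smaller than the controller or OS cache you end up measuring the cache instead of the disk:

import os, time

# Toy write benchmark sweeping block sizes. Illustrative only, not HD Tune's
# actual method; the target path and sizes below are made up.

TEST_FILE = r"D:\bench.tmp"              # hypothetical test partition
FILE_SIZE = 512 * 1024 * 1024            # keep this bigger than any cache
BLOCK_SIZES = [512, 4096, 65536, 1024 * 1024, 8 * 1024 * 1024]

for bs in BLOCK_SIZES:
    buf = os.urandom(bs)
    start = time.time()
    with open(TEST_FILE, "wb", buffering=0) as f:
        written = 0
        while written < FILE_SIZE:
            f.write(buf)
            written += bs
        os.fsync(f.fileno())             # force the data out to the disk
    elapsed = time.time() - start
    print("block %7d B  write %6.1f MB/s" % (bs, FILE_SIZE / elapsed / 1e6))

os.remove(TEST_FILE)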

So, enough talk; let's look at results from both tests. I got my 10GbE Intel NICs and connected them directly to each other (no switch involved). With the Benchmark test I average 72MB/sec (up from 50MB/sec), which is around the speed of a desktop SATA drive, so at least I've gotten to that point. During the test the 10GbE card is about 6% utilized (600Mbit/s).
(screenshot: HD Tune Benchmark test results)

And here are the results from the File Benchmark test. Before I got the 10GbE NICs I could never break the 100MB/sec barrier, since that was maxing out the gigabit link, but now on large block sizes I can get above 150MB/sec, and I should be able to go beyond that still!
(screenshot: HD Tune File Benchmark results)

I have a better RAID card arriving on Tuesday, and once I get that I will run these tests again. Once I have all my hardware optimized I am going to revisit StarWind vs NFS vs the other iSCSI targets. This whole thing has been a long line of trial and error, but I hope the end result will be a fast, cost-effective iSCSI SAN solution.
Robert (staff)
Posts: 303
Joined: Fri Feb 13, 2009 9:42 am

Mon Oct 12, 2009 4:25 pm

Yes, the program is quite easy to use and it will intuitively point you in the direction of the bottleneck. I would suggest installing it on both the initiator and the target.

Thanks
Robert
StarWind Software Inc.
http://www.starwindsoftware.com
adns
Posts: 23
Joined: Thu Sep 24, 2009 2:08 pm
Location: Asheville NC

Tue Oct 20, 2009 7:09 pm

So I tested the exact same setup with some of the StarWind competitors and found that, as-is, the systems in my lab are able to hit much higher speeds. I'm not going to go into details here since these are the Starwind forums, but one of the products I used achieved incredible throughput and really made the most use of the 10GbE NICs and system RAM. I can only conclude that this is because Starwind version 4 does not do any caching of I/O, and so it cannot approach the same speeds I got with the other products. I also see that StarWind 5 will be arriving soon, so my question to the StarWind staff is: will version 5 address this speed issue?
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Wed Oct 21, 2009 10:39 am

adns,

I haven't tested mirroring properly (I intend to go straight to HA when SW 5 comes out), but when you set up a mirror target there is an option to specify how much system RAM to use for caching. Basically the mirror target is like a virtual target; Starwind then uses StarPort to talk to the actual iSCSI targets (which could be on different boxes; it may even be possible to use a non-Starwind iSCSI target). The virtual target has a user-definable RAM cache. How efficient this cache is I can't say. What I would say about any cache that uses system RAM: definitely don't do this on a box that doesn't have ECC RAM and a UPS, and make sure that you can absolutely trust it not to crash before the cache is flushed to disk! I'd give it a couple of months of soak testing before going into production with it.
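To make that risk concrete, here is a toy write-back cache sketch (nothing to do with StarWind's actual implementation; the class and block size are made up). Writes are acknowledged from RAM and only reach the backing disk on flush, so a crash or power cut before the flush silently loses data the client already thinks is safe:

import os

# Toy write-back RAM cache in front of a file acting as a block device.
# Purely illustrative, not StarWind's design: writes are acknowledged
# immediately from RAM, and anything not yet flushed is lost on a crash.

class WriteBackCache:
    def __init__(self, backing_file, block_size=4096):
        self.backing_file = backing_file
        self.block_size = block_size
        self.dirty = {}                     # block number -> pending data

    def write(self, block_no, data):
        self.dirty[block_no] = data         # "done" as far as the client knows

    def read(self, block_no):
        if block_no in self.dirty:          # serve unflushed data from RAM
            return self.dirty[block_no]
        with open(self.backing_file, "rb") as f:
            f.seek(block_no * self.block_size)
            return f.read(self.block_size)

    def flush(self):
        with open(self.backing_file, "r+b") as f:
            for block_no, data in sorted(self.dirty.items()):
                f.seek(block_no * self.block_size)
                f.write(data)
            f.flush()
            os.fsync(f.fileno())            # only now is the data actually safe
        self.dirty.clear()

Which is why ECC RAM, a UPS and a long soak test matter so much once a RAM cache sits in the write path.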

Back to performance testing: have you monitored CPU usage as well as network usage? Using Intel X25-M SSDs as the storage device, I've been able to reach 50% utilisation peaks of 10GbE, but at that point the CPU on the Starwind VM is maxed out. However, as I've only given it a single core, I could probably hit 100% 10GbE utilisation if I gave it more. RAM caching would probably double the work the CPU has to do, so to go beyond 10GbE I'd probably have to run Starwind natively, not in a VM (Hyper-V currently has a limit of four cores per VM), or use faster CPUs.

If the other targets you've been able to test with achieve high utilisation of 10GbE with low CPU utilisation and RAM caching, then that's pretty impressive.

cheers,

Aitor
adns
Posts: 23
Joined: Thu Sep 24, 2009 2:08 pm
Location: Asheville NC

Wed Oct 21, 2009 10:11 pm

FYI: I was told by a StarWind rep today that I/O caching will most likely not be included in the 5.0 release and that we may have to wait until a subsequent 5.x release. But we'll know soon enough in about a week, so take this with a grain of salt for now. Either way, as soon as I can get a copy of 5.0 I will install it and see how it does speed-wise.
TomHelp4IT
Posts: 22
Joined: Wed Jul 29, 2009 9:03 pm

Sun Oct 25, 2009 3:00 am

I've done a lot of benchmarking with HP SA640 RAID controllers on Windows 2003 with Starwind 4.2 and have no problem maxing out a gigabit interface with sequential reads.

That doesn't really help with the problem in this post, though. I would expect you to see fairly similar speeds using the MS iSCSI initiator against a local target as you would just accessing the local disk directly; the fact that you don't rather suggests to me that something odd is going on with Starwind or your IP stack. What will hopefully be more useful is that I've just built a new pair of SAN servers for a client: HP DL180 G6s with 8x WD RE3 1TB SATA HDs in a RAID50 array connected to a P410i with 512MB BBWC, Win2k8 and Starwind 4.2. I've only run local benchmarks so far and it's impressively fast on sequential reads; the "real life" test (65/35 r/w, 60% random, 8KB transactions) even manages 9MB/s, which isn't bad for cheap SATA drives either.
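For what it's worth, 9MB/s at 8KB transactions works out to somewhere around 1,100-1,200 IOPS; quick back-of-the-envelope arithmetic on the figures quoted above, nothing measured on the array itself:

# Convert the mixed-workload throughput quoted above into IOPS.
throughput_mb_s = 9          # MB/s from the "real life" test
transaction_kb = 8           # KB per transaction

iops = throughput_mb_s * 1024 / transaction_kb
print("%.0f IOPS" % iops)    # ~1152 IOPS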

I should get the iSCSI benchmarks done in the next couple of days and will post the results for comparison. These servers are supposed to be going out to a client in a week, so I hope there isn't some bizarre driver issue with the HP controllers.
Robert (staff)
Posts: 303
Joined: Fri Feb 13, 2009 9:42 am

Tue Oct 27, 2009 9:59 am

Tom,

We would be grateful if you posted the results of your tests.

Thanks.
Robert
StarWind Software Inc.
http://www.starwindsoftware.com
adns
Posts: 23
Joined: Thu Sep 24, 2009 2:08 pm
Location: Asheville NC

Thu Oct 29, 2009 6:25 pm

Good news! I've tested a beta version of 5.0 and the speed is much improved over 4.2 in my setup. I will post full results once the final build is released to the public in a few days.