IOmeter testing Starwind storage !?!?!?

Software-based VM-centric and flash-friendly VM storage + free version


JLaay
Posts: 45
Joined: Tue Nov 03, 2009 10:00 am

Thu Dec 10, 2009 10:11 am

Hi all,

I'm testing both the ESXi v4.x eval and the StarWind 5.0 free version.
Since performance is of primary importance to users, I've started testing this using IOmeter.
I've been reading the following VMware communities thread:
http://communities.vmware.com/thread/19 ... 0&tstart=0

However, this seems to be the testing ground for boys who can spend big bucks. :)

What I would like to propose is to start a similar thread within this StarWind forum.
That would make it easier to compare experiences.

What do you think :?:

With regards
Jaap
Netherlands
===
I've put up a small lab test, listed below.
My aim is to provide SMBs with an affordable VMware/swSAN solution.
My (arguable) SMB = 20-150 concurrent users using Exchange/SQL/File/Web server(s).

Will be using single Xeon procs on ESXi and SAN host(s)

LAB setup:
ESXi host: white box i7 920, 12 GB, ESXi v4, 1x Intel Pro GT NIC (to storage network)
Server VM: W2K8std, VMFS, 1 CPU, 8 GB memory, 60 GB virtual disk on StarWind host
SAN host: white box i7 920, 12 GB, W2K8std with StarWind 5.0, system disk 1x WD RE3, RAID 0 of 4x WD RE3 500 GB SATA
Network: VM host 1 Gb E1000 NIC, Cisco SLM2008 switch 8x 1 Gb, StarWind host 1 Gb onboard Marvell NIC
TCP settings as suggested by StarWind: jumbo frames, flow control, Rx/Tx buffers 512

Test pattern                   Av. RT (ms)   Av. IOps   Av. MB/s   CPU %
Max Throughput-100%Read        19.6          3074       96.4       16.67
RealLife-60%Rand-65%Read       38.8          1342       10.5       29.9
Max Throughput-50%Read         19.6          2898       90.4       18.56
Random-8k-70%Read              37.6          1336       10.4       33.5

Is this any good?
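
For anyone who wants to sanity-check numbers like these, here is a minimal Python sketch. The block sizes are an assumption based on the usual IOmeter patterns from the VMware communities thread (32 KB for the Max Throughput tests, 8 KB for the RealLife/Random ones); adjust them if your .icf differs.

```python
# Rough sanity check: reported MB/s should roughly equal IOps * block size.
# Block sizes below are assumptions (32 KB / 8 KB patterns), not taken
# from the actual .icf used for this run.

KB = 1024
MB = 1024 * KB

results = {
    # pattern:                  (avg IOps, reported MB/s, assumed block size)
    "Max Throughput-100%Read":  (3074, 96.4, 32 * KB),
    "RealLife-60%Rand-65%Read": (1342, 10.5,  8 * KB),
    "Max Throughput-50%Read":   (2898, 90.4, 32 * KB),
    "Random-8k-70%Read":        (1336, 10.4,  8 * KB),
}

for name, (iops, reported, block) in results.items():
    expected = iops * block / MB
    print(f"{name:26s} expected ~{expected:5.1f} MB/s, reported {reported:5.1f} MB/s")
```

The expected and reported figures line up, so the run is at least internally consistent; the ~96 MB/s sequential results also sit close to what a single 1 Gbit link can carry.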
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Thu Dec 10, 2009 7:39 pm

The trouble with posting numbers is that there are so many variables, everyone with a different set-up, that it's difficult to know what you are comparing.

In your case, and in most people's to be honest, the limiting factors are the IOPS capability of your disks/RAID card and the speed of your network connection. It's fairly easy to saturate a single 1 Gbit/sec link with relatively cheap disks, and you will hardly tax StarWind, particularly on a fast modern CPU like you have. If you improve bandwidth (e.g. add more NICs or upgrade to 10 Gbit/sec), the limiting factor is going to be the disks, and if you can eliminate that bottleneck you might start seeing the CPU as the bottleneck.
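
To put a number on "saturate a single 1 Gbit/sec link", here is a rough back-of-the-envelope sketch. The framing and header sizes are standard Ethernet/TCP assumptions; iSCSI PDU headers cost a little more on top of this.

```python
# Usable TCP payload on a 1 Gbit/s link, for standard and jumbo frames.
# Assumes 38 bytes of Ethernet framing per frame (preamble 8 + header 14 +
# FCS 4 + inter-frame gap 12) and 40 bytes of IPv4 + TCP headers.

LINK_BPS = 1_000_000_000  # 1 Gbit/s line rate

def tcp_payload_mb_per_s(mtu: int) -> float:
    on_wire = mtu + 38        # frame size as seen on the wire
    payload = mtu - 40        # TCP payload after IP/TCP headers
    return LINK_BPS / 8 * (payload / on_wire) / 1e6

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{tcp_payload_mb_per_s(mtu):.0f} MB/s of TCP payload")
```

So roughly 115-125 MB/s of payload is the ceiling, which a handful of inexpensive SATA drives doing sequential reads can already reach.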

From my experience, I can reach 50% saturation of a 10 Gbit/sec NIC quite easily when using SSDs, running StarWind inside a VM that has only one core of a 2.2 GHz Xeon 5520 assigned to it. At that point, CPU is my bottleneck, but I could fix that by giving it more cores, or taking it out of a VM (Hyper-V limits you to 4 cores per VM) and adding a second CPU. With today's dual-CPU motherboards, RAID cards and SSDs, I think you could reach maybe 60-80 Gbit/sec throughput, and that's more determined by how many dual 10 Gbit NICs you can get onto a motherboard and still have slots for fast enough RAID cards. IOPS is more of a function of the drives (if IOPS is your primary concern, look at SSDs), although a big write cache on your RAID card will help with write IOPS enormously. In my case, the max throughput of my PCIe 1.0 SAS RAID card is 1.2 GB per second, so I can saturate 10 Gbit/sec with enough fast drives, but to go faster I need more RAID cards and more drives.

With SAS2 / PCIe 2.0 RAID cards coming out, SSDs improving all the time, and PCIe SSDs appearing, I think the bottleneck for a SAN is going to be the networking for some time, until 10 Gbit/sec is mainstream and a faster speed is available at the high end; CPUs are already really fast.

Each PCIe 2.0 lane can do 500 MB/sec, so if you have 7 slots which are 8 lanes each, you have 28 GB/sec system bandwidth in theory. The Tylersburg chipset can actually handle 72 lanes total. Memory bandwidth with DDR3-1333 is 32 GB/sec per CPU, which is plenty of headroom. Assuming all data has to pass in and out of RAM via the CPU on its way between disk and network, each byte crosses the PCIe bus twice, so with 7 slots you effectively have 14 GB/sec of usable bandwidth. Each slot can do a max of 4 GB/sec, so you need four 10 Gbit/sec network connections to max out a slot. Unfortunately I think only dual-port NICs are currently available. In theory, you could have two dual-port NICs maxed out by one RAID card. So go for two RAID cards and four dual-port NICs, and I think at best you can get to 8 GB/sec max throughput. If you need more IOPS, get a third RAID card for the spare slot! But I don't see how you will be able to beat 8 GB/sec without a faster chipset and more PCIe lanes/slots. Of course that 8 GB/sec excludes all the overheads ;) I'm also ignoring that the NICs are full duplex; in theory a 10 Gbit/sec NIC could be reading and writing 1 GB/sec in each direction at the same time.
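
Spelling the arithmetic above out as a quick sketch (all figures are the post's own assumptions: PCIe 2.0 at roughly 500 MB/sec per lane per direction, seven x8 slots, 10 Gbit/sec ports; real-world overheads will eat into every one of these numbers):

```python
# PCIe 2.0 bandwidth budget for a 7-slot (x8 each) Tylersburg-class board.
# Figures are theoretical; protocol overhead is ignored.

PCIE2_LANE_MB_S = 500      # per lane, per direction
LANES_PER_SLOT = 8
SLOTS = 7

slot_bw  = PCIE2_LANE_MB_S * LANES_PER_SLOT   # 4,000 MB/s per slot
total_bw = slot_bw * SLOTS                    # 28,000 MB/s raw
usable   = total_bw / 2    # each byte crosses PCIe twice: disk -> RAM -> NIC

ten_gbe  = 10_000 / 8                         # ~1,250 MB/s per 10 GbE port
ports_per_slot = slot_bw / ten_gbe            # ~3.2, i.e. call it four ports

print(f"per slot:          {slot_bw} MB/s")
print(f"raw total:         {total_bw} MB/s (~28 GB/s)")
print(f"usable (x2 hops):  {usable:.0f} MB/s (~14 GB/s)")
print(f"10 GbE ports to max out one slot: {ports_per_slot:.1f}")
```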

cheers,

Aitor
JLaay
Posts: 45
Joined: Tue Nov 03, 2009 10:00 am

Fri Dec 11, 2009 10:57 am

Hi Aitor,

Thanks for the reply.
And you're right, there are a lot of variables.

Maybe my assumption was wrong (and doing StarWind no justice) in thinking that StarWind would mainly be used by SMBs.
I thought of StarWind use in, e.g., an environment with a couple of physical servers or VMware servers with the need for shared/redundant storage. That storage would be, e.g., one or two (storage) boxes with StarWind and internal HDs.

Thereby lowering the variables a bit.

Restricting the discussion/experiences/best practices to:
- HDs to use: SAS/SATA, brand/type
- RAID controllers to use (including compatibility with VMware), RAID types, spanning RAID types
- NICs to use (including compatibility with VMware) or
- iSCSI HBAs to use (including compatibility with VMware)
- Switches to use
- TCP/IP, NIC teaming and/or MPIO implementation

And maybe I am underestimating things :)

Greetz

Jaap
Constantin (staff)

Fri Jan 15, 2010 1:26 pm

Also, for best performance we recommend using 9k jumbo frames and the registry tweaks.
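
A quick way to confirm that jumbo frames really work end to end (initiator NIC, switch, and target NIC) is a don't-fragment ping sized just under the 9000-byte MTU. The sketch below wraps the standard Windows ping switches (-f for don't fragment, -l for payload size, -n for count); the target address is only a placeholder, and this is not a StarWind tool, just a convenience wrapper.

```python
# End-to-end jumbo frame check via don't-fragment ping.
# 8972 = 9000-byte MTU minus 20 bytes IP header minus 8 bytes ICMP header.
import subprocess

def jumbo_frames_ok(target_ip: str) -> bool:
    result = subprocess.run(
        ["ping", "-f", "-l", "8972", "-n", "2", target_ip],
        capture_output=True, text=True,
    )
    # If any hop cannot pass a 9000-byte frame, Windows reports that the
    # packet needs to be fragmented and the ping fails.
    return result.returncode == 0 and "fragmented" not in result.stdout.lower()

if __name__ == "__main__":
    target = "192.168.10.10"   # placeholder: your StarWind target's storage IP
    print("jumbo frames OK" if jumbo_frames_ok(target) else "jumbo frames NOT working")
```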
JLaay
Posts: 45
Joined: Tue Nov 03, 2009 10:00 am

Sat Jan 16, 2010 4:40 pm

Hi Constantin,

Since that post I've moved on :).
I have the tweaks, jumbo frames, flow control and MPIO running.
Disks are now the limiting factor.

Also see my addition to topic: Performance Problems with HA-targets

Thanx Jaap
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Sat Jan 16, 2010 6:41 pm

Did you give the cache-enabled builds a try? (You can drop us a message at support@starwindsoftware.com to ask for the cache-enabled HA version we'll release as V5.5 soon.)
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

JLaay
Posts: 45
Joined: Tue Nov 03, 2009 10:00 am

Sun Jan 17, 2010 8:35 am

Hi Anton,

Huh?
Are there new builds?
Where can I find them on the site?

Greetz Jaap
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Sun Jan 17, 2010 9:20 am

I think I've clearly indicated in the post above 1) that there are new builds and 2) what to do to get your hands on them :)
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

JLaay
Posts: 45
Joined: Tue Nov 03, 2009 10:00 am

Sun Jan 17, 2010 9:31 am

Hi Anton,

Yes you did ! :)
What I was implying, but didn't want to put too bluntly, is that it would be nice if new releases were mentioned on the website.
Under the 'Resources' header, for example. With release notes, please :D .

Greetz Jaap

PS: Are you really on the Virgin Islands? Some guys have all the luck :wink:
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Sun Jan 17, 2010 10:00 am

It's not an official release, it's just a release candidate suitable for giving away without being sorry later )) That's why we don't publish it on the site ))

P.S. I can send you a picture of the "seaside" view out of my window ))
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

JLaay
Posts: 45
Joined: Tue Nov 03, 2009 10:00 am

Sun Jan 17, 2010 10:16 am

Hi Anton,

Ok, got it.

Greetz Jaap

PS. Disgusting :D
Constantin (staff)

Mon Feb 22, 2010 10:49 am

So, can I ask, what is the result of your testing?
TomHelp4IT
Posts: 22
Joined: Wed Jul 29, 2009 9:03 pm

Fri Feb 26, 2010 1:48 am

I did a load of comparative testing a while back between Openfiler and Starwind, using exactly the same hardware setup in both cases, but at the moment I can't remember where I saved the results :) The basic result was that in most situations Starwind was noticeably faster, particularly in IO intensive tests. We did come to the conclusion though that this was mostly down to the underlying OS, probably a combination of better Windows storage drivers and NTFS being more efficient.

I'd agree with Aitor: provided you get your configuration correct, performance is far more down to your hard disk setup than anything else. Bear in mind what your SAN will be used for, too; if it's simple bulk file storage/archiving then high sustained read/write rates will be most important, but they are fairly easy to achieve with modern storage. If you're thinking of putting a big SQL DB or a load of VMs on the SAN then the IOPS (transactions per second) will be much more important, and then the more 15k disk spindles the merrier, or even better SSDs if you can stretch to it.
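
As a rough illustration of the IOPS sizing point, here is a small sketch. The per-disk IOPS figures and RAID write penalties are common rules of thumb, not measured values, so treat the output only as a starting point to verify with IOmeter on your own hardware.

```python
# Rough spindle-count estimate for an IOPS-bound workload (SQL, many VMs).
# Per-disk IOPS and RAID write penalties are rules of thumb (assumptions).

DISK_IOPS = {"7.2k SATA": 80, "10k SAS": 130, "15k SAS": 180, "SSD": 5000}
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def spindles_needed(target_iops: int, read_fraction: float,
                    disk: str, raid: str) -> float:
    """Back-end IOPS the array must deliver, divided by per-disk IOPS."""
    reads = target_iops * read_fraction
    writes = target_iops * (1 - read_fraction)
    backend = reads + writes * WRITE_PENALTY[raid]
    return backend / DISK_IOPS[disk]

# Example: 1500 front-end IOPS at 65% read (the "RealLife" mix),
# on 15k SAS in RAID10 -> roughly 11-12 spindles.
print(f"{spindles_needed(1500, 0.65, '15k SAS', 'RAID10'):.1f} spindles")
```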

HP LeftHand are being nice to me at the moment, so I have a pair of P4300 nodes to test along with their standalone software product, which is more comparable to StarWind (and more expensive!). I'm expecting to find there isn't much difference performance-wise, but it will be interesting to see how HA affects both of them; I'll post a link to my report when it's done.

One final thing I'd mention: gigabit switches are not all created equal, and "backplane speeds" seem to be more a marketing dream than technical reality in many cases. With a cheap model you may not encounter any issues during setup and testing, but when you hit it with production traffic it may well fall apart. If you're spending money on building yourself a high-performance StarWind SAN, budget for a quality switch. It doesn't have to be a layer 3 model, but you want something "enterprise" like HP or Cisco.
Constantin (staff)

Fri Feb 26, 2010 9:07 am

First question, did you follow our Best Practices guide?
And have you tested StarWind with our new high speed caching?
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Fri Feb 26, 2010 1:30 pm

This is not exactly true... Linux actually has a better network and storage stack (OK, from the performance point of view; I know a lot about the internal design of both, and the NT architecture is definitely the winner). So StarWind running on top of a Linux box actually shows 20-30% better results (mostly latency and I/O load) compared to the NT version USING THE SAME HARDWARE.
TomHelp4IT wrote:I did a load of comparative testing a while back between Openfiler and Starwind, using exactly the same hardware setup in both cases, but at the moment I can't remember where I saved the results :) The basic result was that in most situations Starwind was noticeably faster, particularly in IO intensive tests. We did come to the conclusion though that this was mostly down to the underlying OS, probably a combination of better Windows storage drivers and NTFS being more efficient.

Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
