Poor read performance, good write performance

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

sjors
Posts: 17
Joined: Sun Jul 03, 2011 5:16 pm

Mon Aug 08, 2011 12:23 pm

starneo wrote: Now everything is running really smoothly and quite fast. I have done an ATTO disk benchmark on all 4 CSV HA targets simultaneously (queue depth=10, 2GB size) and received 26,000 I/Os in the StarWind Performance Monitor (overall). I know this is not physically possible (for our storage), but with 30GB of cache overall (18GB for StarWind and 12GB on each RAID controller), it can be true :))
Wow! These numbers have completely blown me away!
I wish I could have those numbers.
Oh well, there are still some issues for me to iron out, namely the drop in performance at 1KB and 2KB.
The rest is working fine for me.
The overall performance is good.
starneo
Posts: 26
Joined: Fri Jan 28, 2011 10:17 am

Mon Aug 08, 2011 1:07 pm

sjors wrote:
starneo wrote: Now everything is running really smoothly and quite fast. I have done an ATTO disk benchmark on all 4 CSV HA targets simultaneously (queue depth=10, 2GB size) and received 26,000 I/Os in the StarWind Performance Monitor (overall). I know this is not physically possible (for our storage), but with 30GB of cache overall (18GB for StarWind and 12GB on each RAID controller), it can be true :))
Wow! These numbers have completely blown me away!
I wish I could have those numbers.
Oh well, there are still some issues for me to iron out, namely the drop in performance at 1KB and 2KB.
The rest is working fine for me.
The overall performance is good.

I am not sure if the performance is good or not at 0.5 / 1 / 2 / 4KB, but it nearly doubles every time.
See the screenshot; it is from a Hyper-V node on one StarWind HA target with 10 parallel I/Os.
One thing I forgot: after changing the setting, the benchmark file is written to the CSV at approximately 250-300MB/s; it was only at 25-30MB/s before.
Attachments
Starwindperformance.JPG
sjors
Posts: 17
Joined: Sun Jul 03, 2011 5:16 pm

Mon Aug 08, 2011 1:15 pm

@starneo

The numbers are good.
I get about half those numbers using 2 NICs with MPIO.

A single connection should give you about 120-125MB/sec maximum (and only with transfer sizes above 512K).
So, basically, with MPIO, you should multiply those numbers by the number of NICs.
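That rule of thumb can be sanity-checked with a little arithmetic (a sketch; the 95% efficiency factor is an assumption for TCP/IP and iSCSI framing overhead, not a measured value):

```python
# Rough iSCSI throughput estimate for gigabit links with MPIO.
# Assumptions (not from this thread): 1 Gbit/s NICs and ~95%
# usable bandwidth after protocol overhead.

GBIT_PER_NIC = 1_000_000_000  # link speed in bits per second
EFFICIENCY = 0.95             # assumed overhead factor

def max_throughput_mb(nics: int) -> float:
    """Approximate aggregate MB/s with MPIO across `nics` links."""
    return nics * GBIT_PER_NIC / 8 / 1_000_000 * EFFICIENCY

print(max_throughput_mb(1))  # single link: roughly 119 MB/s
print(max_throughput_mb(2))  # two NICs with MPIO: roughly double
```

This lands close to the 120-125MB/sec ceiling quoted above, and doubling the NIC count doubles the ceiling, which matches the observed numbers.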

Congrats! You're there with your system!
CyberNBD
Posts: 25
Joined: Fri Mar 25, 2011 10:56 pm

Mon Aug 08, 2011 4:49 pm

Looks good indeed. Modifying the TcpAckFrequency registry setting increases performance dramatically.

Using StarWind 5.7 and the MS iSCSI initiator, I'm getting writes of 3.5MB/sec @ 0.5K up to 220MB/sec @ 64K and higher, QD10.

After my struggle getting the ESXi initiator to work, it seems that at the low end I'm getting even better results than with the MS initiator at the moment: starting at 5.0MB/sec @ 0.5K up to 220MB/sec @ 64K and higher, QD10.

All of the above using MPIO over 2 NICs and no StarWind cache 8)
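For reference, the TcpAckFrequency tweak mentioned above is a per-interface DWORD under the TCP/IP parameters key. A sketch of applying it from an elevated command prompt (the interface GUID is machine-specific and must be looked up first; setting the value to 1 disables delayed ACKs on that NIC, and a reboot is needed for it to take effect):

```
:: Disable delayed ACKs on one NIC (Windows; run elevated, reboot after).
:: Replace {INTERFACE-GUID} with the GUID of your iSCSI NIC, listed under:
::   HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{INTERFACE-GUID}" /v TcpAckFrequency /t REG_DWORD /d 1 /f
```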
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Aug 08, 2011 8:41 pm

Looks good!

PS Turn StarWind caching ON :)
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

CyberNBD
Posts: 25
Joined: Fri Mar 25, 2011 10:56 pm

Mon Aug 08, 2011 9:50 pm

anton (staff) wrote:Looks good!

PS Turn StarWind caching ON :)
Haha :)

It seems that, in my current setup, write performance suffers a bit from enabling Write-Through cache. Reads improve a bit. So I'm not quite sure what I'm going to do in my final setup.

Since my 24/7 SAN isn't going to have HA, Write-Back isn't an option. Even if it did, I think it would require a really good UPS setup (separate UPSes per node), because if both nodes go down due to a power/UPS failure, HA doesn't matter.

But maybe some day, when I bump into a second MD1000 for a reasonable price and can still afford the power bills, that will change :P
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue Aug 09, 2011 7:09 am

Configure HA and enable Write-Back cache. Not sure what the reason is for sticking with a single-node setup. What hypervisor do you use?
CyberNBD wrote:
anton (staff) wrote:Looks good!

PS Turn StarWind caching ON :)
Haha :)

It seems that, in my current setup, write performance suffers a bit from enabling Write-Through cache. Reads improve a bit. So I'm not quite sure what I'm going to do in my final setup.

Since my 24/7 SAN isn't going to have HA, Write-Back isn't an option. Even if it did, I think it would require a really good UPS setup (separate UPSes per node), because if both nodes go down due to a power/UPS failure, HA doesn't matter.

But maybe some day, when I bump into a second MD1000 for a reasonable price and can still afford the power bills, that will change :P
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

CyberNBD
Posts: 25
Joined: Fri Mar 25, 2011 10:56 pm

Tue Aug 09, 2011 11:59 am

anton (staff) wrote: Configure HA and enable Write-Back cache. Not sure what the reason is for sticking with a single-node setup. What hypervisor do you use?
It's my home setup. StarWind itself isn't virtualized, so if I go HA I will need a second StarWind box, which means another PE2950, an MD1000 disk enclosure, 15 big SAS disks, plus a second UPS, etc., given the power-loss issue I explained above.

It's used to serve ESXi VMs and some cluster disks within MS 2k8 R2.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Fri Aug 12, 2011 2:03 pm

It's a pity you don't use Hyper-V, as in that case you could recycle the existing two Hyper-V nodes into a hypervisor cluster, without the need for any external shared storage at all.
CyberNBD wrote:
anton (staff) wrote: Configure HA and enable Write-Back cache. Not sure what the reason is for sticking with a single-node setup. What hypervisor do you use?
It's my home setup. StarWind itself isn't virtualized, so if I go HA I will need a second StarWind box, which means another PE2950, an MD1000 disk enclosure, 15 big SAS disks, plus a second UPS, etc., given the power-loss issue I explained above.

It's used to serve ESXi VMs and some cluster disks within MS 2k8 R2.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
