Horrible Sequential write performance

Software-based VM-centric and flash-friendly VM storage + free version


anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Thu Mar 10, 2011 8:54 pm

Thank you very much for the confirmation! Your cooperation is highly appreciated. Really!
hixont wrote:I'm running on HP equipment and I haven't had any problems of the nature discussed here. I'm using two DL370s for the Starwind SAN (144 GB of RAM for caching) and DL160s for the Hyper-V hosts using clustered shared volumes.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

camealy
Posts: 77
Joined: Fri Sep 10, 2010 5:54 am

Fri Mar 11, 2011 12:39 am

Now, if only I could get my installation to perform well. I have been instructed to use ATTO instead of IOMETER for testing, and I ran it on an existing StarWind installation that works fine and on my problem-child installation (which has much better hardware, I must add). These tests were performed on the Hyper-V host (outside of any VM), directly against iSCSI storage served by StarWind.

In the first image you can see a nice linear progression of read/write speed all the way up to 64K on the existing installation.
[Attachment: Screen shot 2011-03-10 at 7.27.51 PM.png]


In the second image I am reading at 25K per second at a 512 B transfer length. I could run a VM off a USB stick and get better performance.
[Attachment: Screen shot 2011-03-10 at 7.29.23 PM.png]
Any ideas? I am at the end of my rope...
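
For what it's worth, the same kind of block-size sweep can be approximated with a small script. The sketch below is a rough, hypothetical Python equivalent, not the actual ATTO workload; the test file path and sizes are placeholders, and unlike ATTO it does not use direct I/O, so reads of a freshly written file will be served partly from the Windows cache:

Code:
import os
import time

# Rough ATTO-style sweep: sequential write, then sequential read, at increasing block sizes.
# TEST_FILE is a placeholder; point it at the StarWind-backed volume under test.
TEST_FILE = r"E:\atto_like_test.bin"
TOTAL_BYTES = 256 * 1024 * 1024  # 256 MB per pass

for block in (512, 4096, 65536, 1024 * 1024):
    buf = os.urandom(block)
    count = TOTAL_BYTES // block

    # Sequential write pass (fsync at the end so the data actually reaches the target)
    start = time.time()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(count):
            f.write(buf)
        os.fsync(f.fileno())
    write_mbs = TOTAL_BYTES / (time.time() - start) / (1024 * 1024)

    # Sequential read pass (note: may be partly served from the OS cache)
    start = time.time()
    with open(TEST_FILE, "rb", buffering=0) as f:
        while f.read(block):
            pass
    read_mbs = TOTAL_BYTES / (time.time() - start) / (1024 * 1024)

    print(f"{block:>8} B blocks: write {write_mbs:8.1f} MB/s, read {read_mbs:8.1f} MB/s")

os.remove(TEST_FILE)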
@ziz (staff)
Posts: 57
Joined: Wed Aug 18, 2010 3:44 pm

Fri Mar 11, 2011 10:46 am

Please provide more details about the targets you are testing here.
What are these images you are testing?
Where are you measuring the performance from?
Which RAID configuration is used for the first image, and which for the second?
Please note that the screenshots you added do not show the transfer rate; it would be good to see that too.

camealy wrote:Now, if only I could get my installation to perform well. I have been instructed to use ATTO instead of IOMETER for testing, and I ran it on an existing StarWind installation that works fine and on my problem-child installation (which has much better hardware, I must add). These tests were performed on the Hyper-V host (outside of any VM), directly against iSCSI storage served by StarWind.

In the first image you can see a nice linear progression of read/write speed all the way up to 64K on the existing installation.
[Attachment: Screen shot 2011-03-10 at 7.27.51 PM.png]

In the second image I am reading at 25K per second at a 512 B transfer length. I could run a VM off a USB stick and get better performance.
[Attachment: Screen shot 2011-03-10 at 7.29.23 PM.png]
Any ideas? I am at the end of my rope...
Aziz Keissi
Technical Engineer
StarWind Software
nbarsotti
Posts: 38
Joined: Mon Nov 23, 2009 6:22 pm

Tue Mar 22, 2011 4:44 pm

Hello, I have a very similar problem. I am running a single StarWind server and two ESXi 4.1u1 hosts. The StarWind RAID array is a 14-SSD RAID 10 setup, which is very fast when accessed from the StarWind server as a local disk. I have three different NICs connecting the StarWind server to the ESX hosts: Intel 1 GbE, Intel 10 GbE, and Marvell 1 GbE. Here are the performance differences between the local disk and inside a VM, measured with the quick benchmarking tool CrystalDiskMark v3.0.1.


Directly on the StarWind server:
                  Read        Write
Sequential        2023 MB/s   2351 MB/s
512K random       1943 MB/s   2092 MB/s
4K random          121 MB/s    124 MB/s
4K random, QD32    287 MB/s    306 MB/s

Inside VM using Intel 10 GbE:
                  Read        Write
Sequential         268 MB/s     23 MB/s
512K random        253 MB/s      8 MB/s
4K random           10 MB/s      9 MB/s
4K random, QD32    147 MB/s    101 MB/s

Inside VM using Intel 1 GbE:
                  Read        Write
Sequential          97 MB/s     64 MB/s
512K random         63 MB/s      5 MB/s
4K random            8 MB/s      7 MB/s
4K random, QD32     95 MB/s     81 MB/s

Inside VM using Marvell Yukon 1 GbE:
                  Read        Write
Sequential          70 MB/s     11 MB/s
512K random         84 MB/s      9 MB/s
4K random            6 MB/s      4 MB/s
4K random, QD32    103 MB/s    101 MB/s


So each of the NICs can support very high transfer rates, as the sequential read tests show, and the RAID array can support very fast random reads and writes locally; over iSCSI, however, all of the random I/O is very slow.
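
One way to read those 4K numbers is to turn them into IOPS and the implied time per request; the quick back-of-the-envelope Python sketch below does just that, using the 4K random read figures from the runs above (at queue depth 1, each request waits for a full round trip before the next one is issued):

Code:
# Back-of-the-envelope: convert 4K random read throughput to IOPS and the
# implied average latency per request at queue depth 1.
results_mb_s = {
    "local (StarWind server)": 121,
    "VM over Intel 10 GbE": 10,
    "VM over Intel 1 GbE": 8,
    "VM over Marvell 1 GbE": 6,
}

BLOCK = 4096  # bytes per request

for label, mb_s in results_mb_s.items():
    iops = mb_s * 1024 * 1024 / BLOCK
    latency_ms = 1000.0 / iops  # average time per request at QD1
    print(f"{label:<26} {mb_s:>4} MB/s  ~{iops:>6.0f} IOPS  ~{latency_ms:.2f} ms/request")

That works out to roughly 2,500 IOPS and about 0.4 ms per request over the 10 GbE link versus about 31,000 IOPS locally, which is consistent with small synchronous I/O being limited by per-request network and iSCSI stack latency rather than by link bandwidth.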

I was just told that a newer build of v5.6 was released for download and that it might help, but since this is not an HA cluster it is hard to bring the server down for testing. Please keep me updated on your problem. I hope we find a solution.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue Mar 22, 2011 9:00 pm

1) Update your software. Hopefully you can allow some downtime, as you're running non-HA...

2) Apply the mods from the referenced URL. It does not really matter whether you have an HA or non-HA config.

3) We've found that NUMA machines have a write performance issue, but I don't think that is your case (a quick check is sketched below).
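
A quick way to check whether a given box actually spans more than one NUMA node is sketched below; it assumes Python on the Windows host and uses the Win32 GetNumaHighestNodeNumber call through ctypes (anything above a single node means NUMA is in play):

Code:
import ctypes

# Ask Windows for the highest NUMA node number; 0 means a single-node (non-NUMA) box.
kernel32 = ctypes.windll.kernel32
highest_node = ctypes.c_ulong(0)

if kernel32.GetNumaHighestNodeNumber(ctypes.byref(highest_node)):
    nodes = highest_node.value + 1
    print(f"NUMA nodes: {nodes}", "(NUMA in play)" if nodes > 1 else "(non-NUMA)")
else:
    print("GetNumaHighestNodeNumber call failed")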

P.S. And dump CrystalDiskMark, as it's known to produce bogus data too often.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
