Horrible Sequential write performance

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

camealy
Posts: 77
Joined: Fri Sep 10, 2010 5:54 am

Mon Feb 28, 2011 7:02 pm

I have tried to get support through the web site and email, and it has been slow at best. So I figure why not try the forums while I wait! :roll:

I have an HA Starwind SAN running 5.6, and the Hyper-V VMs running on this cluster are performing TERRIBLY. I don't know if it is 5.6 specifically or something wrong in my setup. I will try to cover all the bases below...

-4 new HP DL servers (2 HA nodes / 2 Hyper-V cluster nodes)
-3 separate RAID arrays (won't get into details, because the problem, as you will see below, is the DIFFERENCE between native performance on an HA node and performance through Starwind from a cluster node)
-All bonded Intel Gigabit ET NICs throughout
-HP E5500 switches (x2, ring stacked, bonds split between the two switches)
-5.6 running 6 HA targets (1 Quorum / 5 storage), 512MB write cache on each
-All new cabling
-2008 R2 all around
-TCP tweaks from the forum applied all around

-ntttcp tests in all directions (SAN/SYNC/SYNC2) between HA nodes and from cluster to HA: 890 to 930 Mbps (while in slight use)
-IOMETER results below (including a single-side RAM disk for testing purposes) (IOPS / MB/s / access time in ms)
-Tests were top to bottom as follows:
1. Inside a Hyper-V VM running on direct-attached SATA
2. Direct-attached SATA without Hyper-V
3. Direct-attached SSD
4. Local on the SATA array, directly on a Starwind HA node
5. Directly on the Quorum disk through a cluster host's SAN access, without CSVs
6. Same as 5, but on the RAM disk
7. Inside a Hyper-V VM running inside the HA storage

-As you can see, the big problem is the difference in performance between tests 4 and 5. Once we go through iSCSI, speed drops by a factor of 2-6 (and it only gets worse in test 7). I don't know if it could be 5.6 or one of the TCP tweaks (which I had never applied before).
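To make the comparison concrete, here is a quick back-of-envelope check using only the numbers quoted in this post (the 890-930 Mbps ntttcp result and the 256K sequential write figures from tests 4 and 5); the thresholds are illustrative, not from any vendor documentation:

```python
# ntttcp showed 890-930 Mbit/s between nodes; convert to MB/s of usable bandwidth.
low_mb_s = 890 / 8    # 111.25 MB/s
high_mb_s = 930 / 8   # 116.25 MB/s
print(f"usable link: {low_mb_s:.0f}-{high_mb_s:.0f} MB/s")

# 256K sequential write IOPS: local on the HA node (test 4) vs. through iSCSI (test 5).
local_iops, iscsi_iops = 529, 95
slowdown = local_iops / iscsi_iops
print(f"iSCSI slowdown: {slowdown:.1f}x")   # about 5.6x

# The local array manages 132 MB/s, above what a single gigabit link can carry,
# but 24 MB/s over iSCSI is nowhere near the ~111 MB/s network floor,
# so raw network bandwidth is not the limiting factor here.
```

In other words, the network measurements rule out link saturation as the explanation for the gap between tests 4 and 5.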

-----
256K, 100% sequential, 100% write

SATA Hyper-V 124/31/16
SATA DAS 154/38/13
SSD DAS 406/102/5
SAN/SATA Local 529/132/3.7
SAN HA Direct 95/24/21
SAN HA RAM 207/52/9.6
SAN HA Hyper-V 34/8.6/58 (132/33/15)

-----
4K, 50/50 read/write, 50/50 random/sequential

SATA Hyper-V 181/.7/11
SATA DAS 181/.7/11
SSD DAS 2156/8.4/1
SAN/SATA Local 420/1.6/4.7
SAN HA Direct 150/.6/13.3
SAN HA RAM 1121/4.4/1.7
SAN HA Hyper-V 147/.6/13.6

-----
32K, 50/50 read/write, 50/50 random/sequential

SATA Hyper-V 159/5/13
SATA DAS 171/5.3/12
SSD DAS 1277/40/1.5
SAN/SATA Local 340/10.6/5.8
SAN HA Direct 186/5.8/10.7
SAN HA RAM 361/11.3/5.5
SAN HA Hyper-V 258/8/7.7
camealy
Posts: 77
Joined: Fri Sep 10, 2010 5:54 am

Mon Feb 28, 2011 7:07 pm

Some other notes...

-Jumbo frames on in switches and NICs
-Flow control was on everywhere, but was shut off in case it was the problem (didn't seem to matter either way, since the switches have a 32MB packet buffer)
-RSS is on with 4 queues
-All offloading is on
-Latest drivers on all NICs and latest firmware in the switches
-Transmit/receive NIC buffers at 2048/2048
-Static link aggregation for all NIC bonds
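One quick sanity check worth adding to a list like this is confirming that jumbo frames actually survive end to end, since a single device negotiating a standard MTU silently refragments traffic. A minimal sketch, assuming a 9000-byte MTU on the whole path (the header sizes are standard IPv4/ICMP values):

```python
# Derive the largest non-fragmenting ping payload for a jumbo-frame path.
# Assumes MTU 9000 end to end; adjust MTU if your network uses another value.
MTU = 9000
IP_HEADER = 20     # IPv4 header without options
ICMP_HEADER = 8    # ICMP echo header
payload = MTU - IP_HEADER - ICMP_HEADER
print(f"ping -f -l {payload} <peer-ip>")   # Windows ping: -f = don't fragment
```

If `ping -f -l 8972` to each SAN/SYNC peer succeeds but a larger payload fails with "Packet needs to be fragmented", jumbo frames are working on that path.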
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Feb 28, 2011 7:59 pm

While we check what could be wrong without jumping into your system, a couple of management questions:

1) When did you apply for support?

2) Did you get in touch with U.S. support staff initially?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

camealy
Posts: 77
Joined: Fri Sep 10, 2010 5:54 am

Mon Feb 28, 2011 8:18 pm

Friday afternoon Eastern
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Feb 28, 2011 9:06 pm

Got it, thanks...

P.S. Still checking your numbers.
camealy wrote:Friday afternoon Eastern
camealy
Posts: 77
Joined: Fri Sep 10, 2010 5:54 am

Thu Mar 03, 2011 4:54 am

Any ideas? This thing is running terribly, and we have had to move most of our VMs off of it.
@ziz (staff)
Posts: 57
Joined: Wed Aug 18, 2010 3:44 pm

Thu Mar 03, 2011 9:23 am

camealy wrote: Any ideas? This thing is running terribly, and we have had to move most of our VMs off of it.
Your test results show bad performance only from the VM on HA, mainly with 256K sequential writes; from the cluster hosts you get good results. This problem is related to disk alignment. The support team will contact you soon to take your case further.
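For readers following along, the alignment issue mentioned above can be illustrated with a small sketch: a partition whose starting byte offset is not a multiple of the backend stripe/allocation unit makes many guest writes straddle two backend blocks, doubling the I/O. The 64K unit below is a common RAID stripe size used here purely as an example; substitute your array's actual value:

```python
# Illustrative alignment check (64K stripe unit is an assumption, not
# a value taken from this thread's hardware).
def is_aligned(start_offset_bytes, unit_bytes=64 * 1024):
    """True if the partition start falls on a unit boundary."""
    return start_offset_bytes % unit_bytes == 0

legacy_xp_default = 32256     # 63 sectors * 512 B: the classic misaligned start
win2008_default = 1048576     # 1 MiB: aligned for any unit up to 1 MiB
print(is_aligned(legacy_xp_default))   # False
print(is_aligned(win2008_default))     # True
```

On Windows the actual starting offsets can be read with `wmic partition get Name, StartingOffset`; partitions created under 2008 R2 default to the 1 MiB start, but VHDs partitioned under older OS versions often carry the 32256-byte offset.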
Aziz Keissi
Technical Engineer
StarWind Software
camealy
Posts: 77
Joined: Fri Sep 10, 2010 5:54 am

Thu Mar 03, 2011 8:26 pm

OK, do you know how long it will be until I receive assistance? Tomorrow will be one week since I opened my case.... :(
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Thu Mar 03, 2011 8:51 pm

Prepare remote desktop session login details for us, and we'll jump on your system as soon as you provide them.
camealy wrote:OK, do you know how long it will be until I receive assistance? Tomorrow will be one week since I opened my case.... :(
camealy
Posts: 77
Joined: Fri Sep 10, 2010 5:54 am

Sat Mar 05, 2011 12:35 am

Requested tests performed below.

I can’t believe it is normal to drop performance by a factor of 10-25 when going from direct attached storage to properly configured iSCSI storage.

IOs per sec/MBs per sec/average response (ms)

32K Read:
Hyper-V Host- 191.53/5.99/5.22
Starwind Host- 4849.36/151.54/.20
Hyper-V VM- 160.14/5/6.24

64K Read:
Hyper-V Host- 185.90/11.62/5.37
Starwind Host- 2540.14/158.76/.39
Hyper-V VM- 148.65/9.29/6.72

128K Read:
Hyper-V Host- 140.18/17.52/7.12
Starwind Host- 1637.24/204.66/.61
Hyper-V VM- 142.79/17.85/7.00

256K Read:
Hyper-V Host- 136.89/34.22/7.30
Starwind Host- 1297.47/324.37/.76
Hyper-V VM- 126.32/31.58/7.91

32K Write:
Hyper-V Host- 122.71/3.83/8.14
Starwind Host- 7789.02/243.41/.12
Hyper-V VM- 97.46/3.05/10.25

64K Write:
Hyper-V Host- 120.82/7.55/8.27
Starwind Host- 4017.44/251.09/.25
Hyper-V VM- 97.01/6.06/10.30

128K Write:
Hyper-V Host- 114.89/14.36/8.69
Starwind Host- 1954.75/244.34/.51
Hyper-V VM- 91.70/11.46/10.90

256K Write:
Hyper-V Host- 64.08/16.02/15.59
Starwind Host- 973.37/243.34/1.03
Hyper-V VM- 55.73/13.93/17.93

32K 50/50:
Hyper-V Host- 130.20/4.07/7.67
Starwind Host- 762.39/23.82/1.31
Hyper-V VM- 117.07/3.66/8.54

64K 50/50:
Hyper-V Host- 130.96/8.18/7.63
Starwind Host- 539.11/33.69/1.85
Hyper-V VM- 119.98/7.50/8.33

128K 50/50:
Hyper-V Host- 127.34/15.92/7.84
Starwind Host- 399.68/49.96/2.50
Hyper-V VM- 109.12/13.64/9.16

256K 50/50:
Hyper-V Host- 86.46/21.62/11.56
Starwind Host- 228.18/57.04/4.38
Hyper-V VM- 74.78/18.69/13.36
camealy
Posts: 77
Joined: Fri Sep 10, 2010 5:54 am

Sun Mar 06, 2011 1:49 pm

I think I may have found a problem...

...just don't know the solution. If I have write cache turned on for my Starwind targets, they can only flush to disk at exactly 2MB per second. I tested writing a 500MB file directly to the storage over the same SAN link, and it wrote almost instantly. If I write that same file through iSCSI to Starwind, it appears to write instantly on the Hyper-V host (because it goes into the Starwind cache), but if you then watch Perfmon on the Starwind host, it only flushes out to disk at 2MB per second.

Is this normal?
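The arithmetic shows why this symptom would explain the sequential write numbers; this is just back-of-envelope math on the figures already given in this thread (512MB cache per target, observed ~2MB/s flush):

```python
# How long a full write cache takes to drain at the observed flush rate.
cache_mb = 512       # write cache configured per HA target
flush_mb_s = 2       # flush rate observed in Perfmon
drain_seconds = cache_mb / flush_mb_s
print(f"full cache drains in {drain_seconds:.0f} s")   # 256 s

# For comparison: the same array absorbed a 500 MB file "almost instantly"
# when written directly, i.e. well over 100 MB/s. A 2 MB/s flush therefore
# points at the caching/flush path, not at the disks themselves.
```

Once the cache fills, sustained sequential writes can only proceed at the flush rate, which matches the large-block write collapse seen through iSCSI.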
Attachments
Screen shot 2011-03-06 at 8.29.07 AM.png
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Wed Mar 09, 2011 11:08 am

This is not normal. I have escalated the case; however, we haven't yet managed to reproduce the issue.
We will be looking deeper into it.
Max Kolomyeytsev
StarWind Software
camealy
Posts: 77
Joined: Fri Sep 10, 2010 5:54 am

Wed Mar 09, 2011 12:45 pm

Does any of your test hardware involve mainstream HP Proliant and Smart Array products? That is what all my customers run Starwind on, including this installation.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Mar 09, 2011 3:52 pm

There are some for sure. But there's nothing special about the HP ProLiant and Smart Array lines, so I guess it's something different. Max should be able to tell you exactly what soon.
camealy wrote:Does any of your test hardware involve mainstream HP Proliant and Smart Array products? That is what all my customers run Starwind on, including this installation.
hixont
Posts: 25
Joined: Fri Jun 25, 2010 9:12 pm

Wed Mar 09, 2011 6:12 pm

I'm running on HP equipment and I haven't had any problems of the nature discussed here. I'm using two DL370s for the Starwind SAN (144 GB of RAM for caching) and DL160s for the Hyper-V hosts using clustered shared volumes.