Degrading performance

Public beta (bugs, reports, suggestions, features and requests)

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue Feb 18, 2014 10:56 am

Do you happen to have any Intel I/O Meter benchmarks for your storage setup?
Klas wrote:
AKnowles wrote:Just an FYI regarding Storage Spaces ... I've done some testing using Storage Spaces vs. the standard (Computer Management) mirrored/striped options. What I found was that the native method was always faster than Storage Spaces. This is without using flash-based caching. I personally decided that Storage Spaces isn't quite there yet, and it doesn't come close to a hardware RAID controller anyway. Nor does ReFS do much for you, as it doesn't support NFS shares, among other things. Maybe the next generation of both Storage Spaces and ReFS will do better, but as of 2012 R2 I'm not seeing much use for it.
Hello!

I think you should reconsider Storage Spaces. While waiting for StarWind v8, we have a new Hyper-V cluster running Storage Spaces. The setup is 8x 900 GB SAS disks and one Intel 910 PCIe SSD as cache and Tier0 layer. We created one large LUN with a 10 GB write cache, and the performance is "like" a PCIe SSD across all the disks.
Writes go to the SSD, and reads are usually served from Tier0; if not, Tier1 (the SAS disks) is mostly idling and therefore responsive. If you create the Hyper-V VHDX files with a logical block size of 4K, performance in the VM is almost the same as on the LUN, and LUN performance is almost that of the Intel PCIe SSD.
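The "reads usually hit Tier0" effect above can be sketched with a simple weighted-latency model. The hit rate and per-tier latencies below are illustrative assumptions, not measurements from this setup:

```python
# Rough sketch: effective read latency of a two-tier volume.
# hit_rate, ssd_us, and sas_us are illustrative assumptions,
# not measurements from the cluster described above.

def effective_latency_us(hit_rate, ssd_us, sas_us):
    """Weighted average read latency across the two tiers."""
    return hit_rate * ssd_us + (1.0 - hit_rate) * sas_us

# With ~95% of hot reads served from the SSD tier (Tier0),
# the blend sits close to pure-SSD latency even though the
# capacity tier is 10k SAS.
blend = effective_latency_us(0.95, 100.0, 8000.0)    # ~495 us
pure_sas = effective_latency_us(0.0, 100.0, 8000.0)  # 8000 us
```

The model ignores queueing, but it shows why a well-warmed SSD tier makes the whole LUN feel "like" the PCIe SSD.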

We have ~80 concurrent active users (825 user accounts) and are planning to run file server, Exchange, web server, RDS, DBs, AD, etc. on one Hyper-V cluster. Fast, resilient, and a low footprint.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Klas
Posts: 13
Joined: Mon Aug 27, 2012 10:49 am

Tue Feb 18, 2014 1:37 pm

anton (staff) wrote:Do you happen to have any Intel I/O Meter benchmarks for your storage setup?
The poor StarWind performance can be explained by the 512-byte block size of the VHD and RAID5.
The Storage Spaces setup on the far right is unsupported. We run 2 LUNs from an HP RAID controller and the 2 SSD disks presented by the SSD, four disks in total, and stripe them in Storage Spaces.
Starwind-Storagespaces.png
Starwind-Storagespaces2.png
We can see a huge difference in performance: when syncing our file server with DFS-R, the uplink to the old file servers was the bottleneck. The new VM sits behind 10 GBit, with a 10 GBit live-migration network and, once v8 is stable, 4x 1 GBit NICs for sync and 2x 1 GBit NICs as initiators.
SMB 3.02 between the two identical cluster/Hyper-V servers makes file copies, live migration, replica, etc. fast: about 540 MB/s of actual throughput.
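To put the 540 MB/s figure in context, here is the arithmetic for how much of a single 10 GbE link that represents (decimal megabytes, protocol overhead ignored):

```python
# Hedged arithmetic: what fraction of a 10 GbE link 540 MB/s uses.
# 1 MB = 10**6 bytes here; framing/protocol overhead is ignored.

LINK_BITS_PER_S = 10 * 10**9                  # 10 GbE line rate
line_rate_mb_s = LINK_BITS_PER_S / 8 / 10**6  # 1250.0 MB/s
observed_mb_s = 540.0                         # throughput reported above
utilization = observed_mb_s / line_rate_mb_s  # ~0.43 of line rate
```

So the copy is using a bit under half the raw line rate of one link, which is plausible for SMB over a single session.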

It will be interesting to see how the StarWind L2 cache can fit into our storage, and whether it can bring any benefits over just "simple" HA on Storage Spaces disks.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue Feb 18, 2014 1:43 pm

I don't understand what you're comparing. Raw disk performance vs. StarWind-mapped disks, with tests running inside VMs? Can you elaborate on a) the hardware config (interconnect diagram) and b) what you actually do? Thanks!
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Klas
Posts: 13
Joined: Mon Aug 27, 2012 10:49 am

Tue Feb 18, 2014 2:24 pm

anton (staff) wrote:I don't understand what you're comparing. Raw disk performance vs. StarWind-mapped disks, with tests running inside VMs? Can you elaborate on a) the hardware config (interconnect diagram) and b) what you actually do? Thanks!
Yes, we compare raw disk performance with the performance we get at the different layers: StarWind, Hyper-V, etc.
a) Standard HP ProLiant DL380 Gen8 v2, 2 CPUs, 96 GB RAM, 8x 10k SAS disks behind a P420 RAID controller with 512 MB cache (our initial plan was to run behind an H220 SAS controller).
The cluster has two connections to the network: 10 GBit for VM workloads and 1 GBit for management. The rest, 7x 1 GBit NICs and 1x 10 GBit, are direct-attach cables between the nodes.
b) Computing? ;) What do you mean, what do we do?
Delo123
Posts: 30
Joined: Wed Feb 12, 2014 5:15 pm

Tue Feb 18, 2014 2:37 pm

I have now upgraded to 10 GBit.
The StarWind Intel 520-T2 is directly connected to a 520-T2 in the VMware host: 2 VMkernels, separate subnets.
So, in total, 2 sessions for each LUN (2x 10 GBit, MTU 9000).
IOMeter on the StarWind host (simple disk, no Storage Spaces): the disk from the backend storage gives around 200K IOPS.
4KRandomReadNative.jpg
IOMeter in a Windows guest (VM), datastore is VMFS, caching and dedupe ON: low IOPS, and I see no IOPS coming from the backend.
4kRandomReadVM.jpg
So I decided to turn off the cache (still seeing no IOPS on the backend):
4kRandomReadVMNoCache.jpg
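As a sanity check on the ~200K IOPS figure for the native test, converting IOPS to throughput for 4 KiB random reads gives the bandwidth the backend would have to sustain:

```python
# Hedged conversion: throughput implied by ~200K IOPS of 4 KiB
# random reads. IOPS is the rounded figure quoted above.

IOPS = 200_000
BLOCK = 4096  # bytes per I/O (4 KiB)

bytes_per_s = IOPS * BLOCK        # 819,200,000 bytes/s
mib_per_s = bytes_per_s / 2**20   # 781.25 MiB/s
```

That is, 200K IOPS at 4 KiB is roughly 780 MiB/s, which also explains why a cache can serve it while the backend shows almost no traffic.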
Delo123
Posts: 30
Joined: Wed Feb 12, 2014 5:15 pm

Tue Feb 18, 2014 3:07 pm

Now I have also turned off dedupe, so no cache and no dedupe (still seeing no traffic coming from the backend storage).
4kRandomReadVMNoCacheNoDedupe.jpg
ATTO gives somewhat decent numbers:
AttoNoCacheNoDedupeVM.jpg
But, as said, almost no reads or writes are coming from the physical backend.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue Feb 18, 2014 11:02 pm

You're now measuring 10 GbE performance, so it's not going to have bandwidth and latency similar to SAS. However, the numbers are too low, so I'll ask the engineers to have a remote session with you (mostly to check the config). Thanks!
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

AKnowles
Posts: 27
Joined: Sat Feb 15, 2014 6:18 am

Wed Feb 19, 2014 4:30 am

anton (staff) wrote:Do you happen to have any Intel I/O Meter benchmarks for your storage setup?
I have them archived, but I didn't label them, so it might be a bit difficult to dig out the right ones. I can provide the current stats for my current configuration using an Areca 1261 with 2 GB cache, 10 Seagate 7200 RPM SATA drives, and the Samsung 840 Pro SSDs on an internal 6 Gb/s SATA controller. With 4 NICs and MPIO, I think it is doing pretty well. I did find that v8 RAM caching alone was better than L1 & L2 caching, based on the IOMeter results.

I can reconfigure the server, though, and run the initial tests again. I might have to swap out the RAID controllers, as for my initial usage I used an LSI controller in IT mode with 8 Seagate 7200 RPM SATA disk drives. These were at 3 Gb/s. I did some additional testing with two Samsung 840 Pro SSDs on the 6 Gb/s internal SATA controller. I should also mention that my initial testing was all done with the native (Server 2012 R2) iSCSI Target software.

I also tried various combinations for storage tiering using PowerShell commands so I could specify larger write cache sizes. Granted, I'm not using server-grade hardware, but I expected better performance from Storage Spaces than I got compared to native mirroring/striping.
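A rough way to see why a larger write cache is worth tuning: model how long a write-back cache can absorb a burst before writes spill to the spinning tier. The rates below are illustrative assumptions, not measurements from this configuration:

```python
# Hedged model: seconds a write-back cache can absorb a write burst
# before filling, given an ingest rate faster than the drain rate.
# All sizes/rates are illustrative assumptions.

def burst_seconds(cache_gb, ingest_mb_s, drain_mb_s):
    """Time until the cache fills; infinite if drain keeps up."""
    if ingest_mb_s <= drain_mb_s:
        return float("inf")  # cache never fills
    return cache_gb * 1000.0 / (ingest_mb_s - drain_mb_s)

# A 10 GB cache absorbing a 500 MB/s burst while the spinning tier
# drains at 150 MB/s buys roughly half a minute at full speed.
t = burst_seconds(10, 500.0, 150.0)  # ~28.6 s
```

Doubling the cache doubles the burst window, which is why the cache-size parameter matters for bursty workloads even when steady-state throughput is disk-bound.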

If you really want to see the numbers, I'll rebuild it and post them over the weekend.
Delo123
Posts: 30
Joined: Wed Feb 12, 2014 5:15 pm

Wed Feb 19, 2014 1:25 pm

Thanks, I will see that I "clean up" a bit so we have something ready for testing... Let me know.
anton (staff) wrote:You're now measuring 10 GbE performance, so it's not going to have bandwidth and latency similar to SAS. However, the numbers are too low, so I'll ask the engineers to have a remote session with you (mostly to check the config). Thanks!
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Feb 19, 2014 3:07 pm

We have some assumptions for now, so QA will contact you to arrange a remote session. Thank you for your cooperation!
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
