Software-based VM-centric and flash-friendly VM storage + free version
-
tdrivas
- Posts: 7
- Joined: Tue Apr 24, 2012 12:52 pm
Tue Apr 24, 2012 4:38 pm
My setup:
1 ESXi 4.1 physical box (3.2 GHz quad core i5, 16 GB ram)
1 (W2K8 R2) StarWind SAN/vCenter physical box (3.0 GHz Core 2 Duo, 8 GB RAM, 3 SATA2 7200 rpm disks in RAID 0 for the VM datastore). Write-through caching enabled with a 64 MB cache size for the target. HD Tune reports read speeds of 250+ MBps on the three-disk RAID 0 array.
4 VMs (2 XP, 2 Windows7) HDtune installed on all of them.
1 separate vSwitch on the ESXi host for dedicated iSCSI traffic, with a crossover cable going directly to the NIC of the StarWind SAN. This is a one-to-one connection, VMkernel to pNIC. No MPIO.
Jumbo frames enabled on all NICs and the virtual switch.
I have created 4 separate 40 GB VMDKs, one on each VM (all VMDKs are on the same datastore as the guests' C: drives).
With the 4 VMs concurrently running the HD Tune write bench, they can saturate the link to about 90% utilization, and the StarWind SAN reports about 120 MBps of disk throughput (Windows Resource Monitor).
Here is when things go horribly wrong.
When I start a read bench on the 1st VM: about 50%-60% total network utilization.
When I start a read bench on the 2nd VM: about 70%-80% total network utilization.
When I start a read bench on the 3rd VM: about 5%-15% total network utilization.
Is this normally indicative of VMware’s throttling or is there something really wrong here?
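As a rough sanity check, the utilization percentages above can be converted to MB/s against the wire rate of the dedicated link (assumed here to be gigabit Ethernet; the link speed isn't stated in the post):

```python
# Rough conversion of link-utilization percentages to MB/s,
# assuming the dedicated iSCSI link is 1 Gbit Ethernet (an assumption).
LINK_BITS_PER_SEC = 1_000_000_000
link_mb_s = LINK_BITS_PER_SEC / 8 / 1_000_000  # 125.0 MB/s wire rate

for label, util in [("4 VMs writing", 0.90),
                    ("2 VMs reading", 0.75),
                    ("3 VMs reading", 0.15)]:
    print(f"{label}: ~{util * link_mb_s:.0f} MB/s")
```

At 90% utilization that is roughly 112 MB/s, which lines up with the ~120 MBps disk figure from Resource Monitor; the 5%-15% seen with three concurrent readers corresponds to under 20 MB/s, so the drop is real throughput loss, not a reporting artifact.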
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
-
Contact:
Tue Apr 24, 2012 11:08 pm
Hold on for a second... Just to clarify - you add a third VM to the working set and network utilization actually DROPS, right? What workload do you use? Can you (for test purposes only) replace your benchmark tool with, say, SQLIO, ATTO, or Iometer (something we use locally)?
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software

-
lohelle
- Posts: 144
- Joined: Sun Aug 28, 2011 2:04 pm
Wed Apr 25, 2012 8:06 am
I think the problem is that the RAID array only has 3 drives.
It is almost the same as copying multiple folders at the same time on a regular computer: you will often get WORSE speed, because non-SSD drives cannot service random I/O as fast as (mostly) sequential I/O.
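A toy model illustrates this effect (the seek penalty, chunk size, and per-spindle rate below are assumed round numbers, not measurements from this setup):

```python
# Toy model: n concurrent "sequential" readers sharing one spindle.
# Every chunk delivered while interleaving streams costs an extra head seek.
SEEK_S = 0.012      # assumed ~12 ms average seek + rotational delay
CHUNK_MB = 1.0      # assumed data read per stream before switching
SEQ_MB_S = 100.0    # assumed pure-sequential rate of the spindle

def effective_mb_s(n_streams: int) -> float:
    transfer_s = CHUNK_MB / SEQ_MB_S            # time to move one chunk
    seek_s = SEEK_S if n_streams > 1 else 0.0   # interleaving adds a seek
    return CHUNK_MB / (transfer_s + seek_s)

for n in (1, 2, 4):
    print(f"{n} stream(s): ~{effective_mb_s(n):.0f} MB/s total")
```

With these assumed numbers a single stream gets the full ~100 MB/s, while any interleaving at all drops the spindle to under half of that; real disks degrade even further as the per-switch chunks get smaller.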
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
-
Contact:
Wed Apr 25, 2012 11:23 am
I don't think so... The VMs' data should be scattered across different stripes on different spindles, so IOPS shouldn't drop as dramatically as we see. But we need to get our hands on the OP's benchmark numbers to see what's wrong.
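The striping point can be sketched as follows (the 64 KB stripe unit is an assumption for illustration; the actual array's stripe size isn't stated in the thread):

```python
# RAID 0 address mapping: consecutive stripe units rotate across spindles,
# so large sequential runs from different VMs land on all disks.
STRIPE_KB = 64   # assumed stripe unit size
DISKS = 3        # from the setup described above

def spindle_for(offset_kb: int) -> int:
    """Which disk holds the data at this array offset."""
    return (offset_kb // STRIPE_KB) % DISKS

# A 256 KB sequential read touches every spindle in the 3-disk array:
touched = {spindle_for(kb) for kb in range(0, 256, STRIPE_KB)}
print(sorted(touched))  # [0, 1, 2]
```

That is, each VM's large reads are already spread over all three spindles, so the array should behave like one shared pool rather than dedicating a disk per VM.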
lohelle wrote:I think the problem is that the RAID array only has 3 drives.
It is almost the same as copying multiple folders at the same time on a regular computer: you will often get WORSE speed, because non-SSD drives cannot service random I/O as fast as (mostly) sequential I/O.
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software

-
Max (staff)
- Staff
- Posts: 533
- Joined: Tue Apr 20, 2010 9:03 am
Wed Apr 25, 2012 6:24 pm
OK, getting closer.

Can you now provide ATTO results for the 3 VMs (running concurrently, of course)? Thank you.

Max Kolomyeytsev
StarWind Software
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
-
Contact:
Fri Apr 27, 2012 12:51 pm
Well, first of all, it is not only reads that drop dramatically on the 3rd VM, but everything else too.
My personal opinion is that lohelle is correct in his assumption that the issue is related to the RAID array.
Do you have a chance to move the VMs somewhere else and benchmark the RAID array while it is NOT under any other load?
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
tdrivas
- Posts: 7
- Joined: Tue Apr 24, 2012 12:52 pm
Fri Apr 27, 2012 9:02 pm
In reference to read performance, I have based my findings on HD Tune, not ATTO. HD Tune can run write benchmarks concurrently from 3 VMs without issue. It's when I run the HD Tune read bench that it drags all 3 VMs down. ATTO runs read and write tests at the same time; therefore, we may not notice these performance hits as much.
I have a screenshot of the HD Tune read bench on the RAID array on the StarWind 5.4 SAN with the iSCSI NIC to the ESXi host disabled (no load).
https://www.dropbox.com/s/q52c10vp7iji6 ... 281%29.bmp
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
-
Contact:
Fri Apr 27, 2012 9:48 pm
Can you confirm that you are running version 5.4 of StarWind now?
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
-
Contact:
Sat Apr 28, 2012 7:22 pm
Yes. The code you're running is almost 2 years old. It's official StarWind policy - to get support you need to have the most recent version and build. So start with an update for now. The last thing I want to do is waste my time and my people's time hunting bugs that were fixed years ago.
tdrivas wrote:yes 5.4.0 (build 20100708) , are there any known issues with that revision?
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software

-
tdrivas
- Posts: 7
- Joined: Tue Apr 24, 2012 12:52 pm
Sun Jun 03, 2012 10:34 pm
Hello,
Sorry I was not able to get back to this post right away. I have installed the latest version of the software, 5.8.xxxx.
Regardless, that did not help - I faced the same issue. I was looking around the internet recently and came across some recommendations for speeding up VM performance. Per these recommendations, I moved the benchmark disks to a completely different RAID disk set than the one the guests' C:\ drives were on. This was probably my issue: the VMs' C:\ drives were fighting for I/O with the virtual drives I was benching. Still, write is better than read: Windows Server 2008 Resource Monitor reports about 140 MBps write but only 90-100 MBps read with all the VMs running the read/write bench. Usually the reverse is true for read vs. write performance. I get about 27 MBps read and 34 MBps write per VM.
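For what it's worth, the per-VM and aggregate figures in this post are roughly self-consistent (assuming all 4 VMs from the original setup ran the bench concurrently):

```python
# Cross-check per-VM rates against the aggregates reported above.
vms = 4                                  # assumed: all 4 VMs benching at once
read_per_vm, write_per_vm = 27.0, 34.0   # MB/s, figures from the post

agg_read = vms * read_per_vm     # 108 MB/s vs. the reported 90-100 MB/s
agg_write = vms * write_per_vm   # 136 MB/s vs. the reported ~140 MB/s
print(agg_read, agg_write)
```

So the per-VM and total numbers agree; the open question is why the aggregate read rate sits well below the aggregate write rate.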
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
-
Contact:
Mon Jun 04, 2012 1:45 pm
Can you post some benchmarks for array running locally and the same array connected thru iSCSI?
tdrivas wrote:Hello,
Sorry I was not able to get back to this post right away. I have installed the latest version of the software, 5.8.xxxx.
Regardless, that did not help - I faced the same issue. I was looking around the internet recently and came across some recommendations for speeding up VM performance. Per these recommendations, I moved the benchmark disks to a completely different RAID disk set than the one the guests' C:\ drives were on. This was probably my issue: the VMs' C:\ drives were fighting for I/O with the virtual drives I was benching. Still, write is better than read: Windows Server 2008 Resource Monitor reports about 140 MBps write but only 90-100 MBps read with all the VMs running the read/write bench. Usually the reverse is true for read vs. write performance. I get about 27 MBps read and 34 MBps write per VM.
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software
