Software-based VM-centric and flash-friendly VM storage + free version
Moderators: anton (staff), art (staff), Anatoly (staff), Max (staff)
-
IceStrike
- Posts: 4
- Joined: Wed Apr 15, 2015 9:23 am
Wed Apr 15, 2015 12:03 pm
I wanted to set up a faster NFS datastore for an ESXi 6.0 server to host ISOs. I also wanted to set up an SMB share on the same folder so I can copy files easily.
Currently I am using a Windows 2012 R2 server to provide both services, and everything works as expected. I am using a 10 Gbps NIC between servers.
I am getting 1.2 GB/s transfer speed, locally and over the network, while transferring ISO files on the existing setup (by the clock, around 3 min 15 sec for 225 GB of data).
I have tested this using SMB from inside a VM and as an NFS datastore mounted on the ESXi machine.
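For reference, the baseline figures above are internally consistent; a quick back-of-the-envelope check (using the reported time, decimal GB):

```python
# Sanity check of the baseline numbers: 225 GB copied in 3 min 15 s.
data_gb = 225
elapsed_s = 3 * 60 + 15          # 195 seconds

throughput_gb_s = data_gb / elapsed_s
print(f"{throughput_gb_s:.2f} GB/s")   # ~1.15 GB/s, matching the reported ~1.2 GB/s
```

That is close to the practical limit of a 10 Gbps link (~1.25 GB/s), so the baseline is effectively network-bound.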
I read about StarWind and its RAM caching feature and decided to set it up.
I selected a physical disk and created a disk bridge on the same machine, using the same disk that previously held my SMB+NFS share.
I found that even when I tried to access the iSCSI target on the same local machine, no new disk appeared and nothing seemed to change.
In that configuration everything seemed to go straight to the disk, bypassing the StarWind software.
I then removed the disk bridge and used a virtual disk on the same drive.
(Note: I have formatted the volume every time, so there should be no fragmentation issues.)
I configured the virtual drive with a 42 GB RAM cache and a size of 1 TB.
I added the virtual disk on the same server via iSCSI and formatted it with NTFS.
I tested the file copy locally and over the network, and there is a large variation, from 600 MB/s down to 0 KB/s.
Copying 225 GB of ISO files took almost 20 minutes.
I checked the virtual disk file on the volume and found it split into multiple 500 MB files.
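The same arithmetic applied to the StarWind-backed copy shows how far the average falls below the 600 MB/s peaks:

```python
# Effective average throughput of the StarWind-backed copy:
# 225 GB in roughly 20 minutes, despite peaks of 600 MB/s.
data_mb = 225 * 1000             # decimal MB, for a rough figure
elapsed_s = 20 * 60              # 1200 seconds

avg_mb_s = data_mb / elapsed_s
print(f"{avg_mb_s:.0f} MB/s")    # ~188 MB/s average
```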
I removed the virtual disk from StarWind, formatted the physical disk again, configured it in Windows as before, and got the previous performance back.
I just wanted to know: am I doing anything wrong?
I am using StarWind build 8.0.7852.
-
Oles (staff)
- Staff
- Posts: 91
- Joined: Fri Mar 20, 2015 10:58 am
Mon Apr 20, 2015 2:48 pm
Hello IceStrike,
To provide you with an answer, I need to know the RAID array model, RAID level and stripe size, the caching mode of the array used to store the HA images, and the Windows volume allocation unit size.
Also, could you please try the same test with a thick-provisioned img file and a 25 GB L1 cache?
To investigate your issue with the disk bridge we will need StarWind logs; please submit a support ticket and attach them to it.
Thank you for your patience.
-
IceStrike
- Posts: 4
- Joined: Wed Apr 15, 2015 9:23 am
Thu Apr 23, 2015 7:16 am
The purpose of my setup is to use an SMB share to write the ISOs generated by the build servers; ESXi servers then mount those ISOs via NFS and run automation.
I cannot use iSCSI because there are multiple build servers generating ISOs, and the iSCSI disk cannot be shared across all of them, so I have to use SMB mounts on them.
I am currently testing, so no other tasks are running apart from what I am performing.
The server I am using to host NFS is a physical server used primarily for write operations.
So the thin/thick provisioning issue should not arise, as my source is capable of transferring data at full bandwidth on the 10 Gbps link.
RAID configuration:
8 × 600 GB 15K RPM disks in RAID 0 with a stripe size of 128 KB.
The controller has 512 MB of cache, set to 100% write.
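For context on what this array can theoretically deliver sequentially (the per-disk figure below is an assumed typical rate for a 15K RPM drive, not a measurement from this setup):

```python
# Rough ceiling for the array's sequential throughput (illustrative only;
# ~190 MB/s per disk is an assumed figure for a 15K RPM drive).
disks = 8
per_disk_mb_s = 190              # assumption, not measured on this hardware

raid0_mb_s = disks * per_disk_mb_s
print(f"~{raid0_mb_s} MB/s")     # ~1520 MB/s, so ~1.2 GB/s over SMB is near disk-bound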
My StarWind configuration is as follows:
Windows Server 2012 R2 with 64 GB RAM.
C:\ for the OS and E:\ for data, both on disk 0 (physical).
E:\ holds my StarWind image file, mounted as S:\ via the local iSCSI initiator.
StarWind is configured to use 60 GB of RAM cache, leaving 4 GB for the OS.
If I do a Windows copy operation to an SMB share on E:\ (without StarWind), I get a consistent 1.2 GB/s; it takes around 3 minutes for 225 GB of data.
If I do a Windows copy operation to an SMB share on S:\ (with StarWind), I get a variable speed from 600 MB/s down to 0 KB/s, and it takes around 20 minutes for 225 GB of data.
If I configure Windows NFS on S:\ and add it to an ESXi 6.0 box, the performance is even slower.
I hope this clears up the configuration part. Currently, without StarWind, everything runs perfectly and uses the full hardware potential.
My purpose in trying StarWind was to use the 60 GB of free RAM so my writes would be smooth and less random.
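One illustrative way to read the 600 MB/s-to-0 oscillation (a simplified model, not a diagnosis of this setup): the dataset (225 GB) is much larger than the write-back cache (60 GB), so the cache fills early and the rest of the copy is throttled to whatever rate the cache can destage to disk.

```python
# Illustrative model (not a measurement): a 60 GB write-back cache absorbs
# the first part of a 225 GB copy quickly, then the copy is throttled to
# the cache's destage rate. Numbers below back out the implied destage rate.
data_gb = 225
cache_gb = 60
cache_rate_gb_s = 0.6            # assumed: the observed 600 MB/s peak
total_s = 20 * 60                # observed: ~20 minutes end to end

fill_s = cache_gb / cache_rate_gb_s                # ~100 s to fill the cache
destage_rate = (data_gb - cache_gb) / (total_s - fill_s)
print(f"implied destage rate: {destage_rate * 1000:.0f} MB/s")  # ~150 MB/s
```

Under this model the cache changes nothing once it saturates, which would also explain why the average (~188 MB/s) ends up far below both the cache peak and the raw array speed.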
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
-
Contact:
Thu Apr 23, 2015 12:08 pm
1) Something is going wrong. Start by removing the SMB and NFS shares on top of the iSCSI volumes and making sure raw block access with StarWind works at the appropriate level.
2) Using "copy" is not a valid way to test performance. Use Intel Iometer or DiskSPD, of course after you have troubleshot what is wrong in 1).
http://blogs.technet.com/b/josebda/arch ... stead.aspx
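A DiskSPD invocation for this kind of large sequential-write workload might look like the following sketch (the flags are from the DiskSPD documentation; the block size, durations, file size, and the `X:\test.dat` path are assumptions, not values from this thread):

```python
# Sketch: assembling a DiskSPD command line for a sequential-write test
# roughly resembling a large ISO copy. The target path and sizes are
# hypothetical; adjust to the StarWind-backed volume under test.
cmd = [
    "diskspd.exe",
    "-b64K",    # 64 KB blocks, closer to a large sequential copy than 4K random
    "-d60",     # run for 60 seconds
    "-o8",      # 8 outstanding I/Os per thread
    "-t4",      # 4 worker threads
    "-w100",    # 100% writes
    "-si",      # sequential access with interlocked offsets across threads
    "-c20G",    # create a 20 GB test file
    r"X:\test.dat",
]
print(" ".join(cmd))
```

Running the same command against both the raw E:\ volume and the StarWind-backed S:\ volume would separate the iSCSI/cache layer from the SMB and NFS layers, which is the isolation step 1) asks for.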
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software

-
IceStrike
- Posts: 4
- Joined: Wed Apr 15, 2015 9:23 am
Fri Apr 24, 2015 10:33 am
Maybe copying a file is not the right way to measure performance, but that is the task of this file server.
If StarWind takes 20 minutes to do a task that the plain OS does in 3 minutes, how is that going to change by running different benchmark utilities?
The task of my system is file copy operations, so the way the OS writes the file (good or bad) is not going to change.
Anyway, I also tested mounting the iSCSI volume created by StarWind on a remote server and tested local performance on that iSCSI volume, and got similarly slow results.
I found another piece of software that almost quadrupled the speed of my write operations on my local file server volumes, but I cannot get the full benefit of it over the network, as my NIC is only 10 Gbps.
I am also not sure whether it converts multiple small random writes into large sequential writes, so I think I will drop the idea of using StarWind's caching feature and stick to my regular plain OS setup.
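The random-to-sequential conversion mentioned here can be sketched in a few lines: sort pending writes by offset and merge contiguous or overlapping ones into larger runs. (An illustration of the general idea only, not how any particular product implements it.)

```python
# Sketch of write coalescing: merge a batch of scattered (offset, length)
# writes into fewer, larger sequential runs before issuing them to disk.
def coalesce(writes):
    """writes: list of (offset, length); returns merged (offset, length) runs."""
    runs = []
    for off, length in sorted(writes):
        if runs and off <= runs[-1][0] + runs[-1][1]:
            # Contiguous or overlapping with the previous run: extend it.
            prev_off, prev_len = runs[-1]
            runs[-1] = (prev_off, max(prev_len, off + length - prev_off))
        else:
            runs.append((off, length))
    return runs

# Four small scattered writes collapse into two sequential runs.
print(coalesce([(0, 4), (4, 4), (16, 4), (8, 4)]))  # [(0, 12), (16, 4)]
```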
Thanks for the support.
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
-
Contact:
Tue Apr 28, 2015 1:01 pm
You've misunderstood me... I'm not claiming everything is OK, quite the opposite (see statement 1 above).
We need to run proper tests, and need you to provide us with logs, to isolate the issue and figure out what is wrong with the configuration (your case) as phase 1 of our troubleshooting process (which includes remote sessions and so on).
If you're willing to cooperate then we'll be happy to help.
IceStrike wrote:Maybe copying a file is not the right way to measure performance, but that is the task of this file server.
If StarWind takes 20 minutes to do a task that the plain OS does in 3 minutes, how is that going to change by running different benchmark utilities?
The task of my system is file copy operations, so the way the OS writes the file (good or bad) is not going to change.
Anyway, I also tested mounting the iSCSI volume created by StarWind on a remote server and tested local performance on that iSCSI volume, and got similarly slow results.
I found another piece of software that almost quadrupled the speed of my write operations on my local file server volumes, but I cannot get the full benefit of it over the network, as my NIC is only 10 Gbps.
I am also not sure whether it converts multiple small random writes into large sequential writes, so I think I will drop the idea of using StarWind's caching feature and stick to my regular plain OS setup.
Thanks for the support.
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software
