vSphere 5 and iSCSI settings

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

couch83
Posts: 3
Joined: Tue Feb 28, 2012 10:40 pm

Fri Mar 02, 2012 6:17 pm

Hello

I have 2 ESX hosts.

One is running local storage and one is connected to StarWind.

A vMotion from local storage on HOST-A to StarWind on HOST-B with approx. 12 GB of data takes about 1 hour.

If I vMotion the other way, from StarWind on HOST-B to local storage on HOST-A, with the same data it takes 4 minutes.

I'm trying to sort out why it is so slow writing to StarWind.
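For a sense of scale, those two timings imply wildly different effective transfer rates. A rough back-of-the-envelope (a sketch, assuming a flat 12 GiB moved in each direction):

```python
# Rough effective-throughput math for the two vMotion directions.
data_mib = 12 * 1024                  # ~12 GiB of VM data, in MiB

write_to_starwind = data_mib / 3600   # ~1 hour   -> MiB/s
read_from_starwind = data_mib / 240   # ~4 minutes -> MiB/s

print(f"write to StarWind:  {write_to_starwind:.1f} MiB/s")   # ~3.4 MiB/s
print(f"read from StarWind: {read_from_starwind:.1f} MiB/s")  # ~51.2 MiB/s

# A single 1 GbE link tops out around 110-120 MiB/s, so the read
# direction is plausible, while the write direction is far below
# what even one NIC should sustain.
```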

I've got 2 NICs in each host (1 Gb each) bound on both hosts for storage. The vSwitches are set to an MTU of 9000, as are the port groups. I have jumbo frames turned on in our switch on the vMotion VLAN as well as the storage VLAN. On my VMware storage I have VMware multipathing with Round Robin turned on for my datastore.
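As a sanity check on that setup, one way to confirm from the ESXi 5 host shell that jumbo frames actually pass end-to-end and that Round Robin is in effect (a sketch; the IP below is a placeholder for your StarWind target's address):

```
# List vmkernel NICs and confirm MTU is 9000 on the storage vmknics
esxcfg-vmknic -l

# Test a full jumbo frame end-to-end with don't-fragment set:
# 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000
vmkping -d -s 8972 192.168.20.10

# Confirm the path selection policy really is Round Robin per device
esxcli storage nmp device list
```

If the vmkping fails while a plain vmkping succeeds, some hop in the storage path is not actually passing 9000-byte frames.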

I am at a loss as to what to check next.

In my StarWind box I have one 1 Gb NIC set up on my target.

I have more I can use (team), but I'd like to get the performance issue solved with one NIC before I introduce teaming.

Can someone direct me as to what to check next? Is there any documentation out there on how to configure VMware/StarWind optimally? I have to be missing something.


I've checked my switches and have not found any send/receive errors during the vMotion. No dropped packets or other issues there that I can see.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Sun Mar 04, 2012 7:41 pm

Dear couch83,

We need to know the following information from you in order to provide a solution:
1. Operating system on servers participating in iSCSI SAN and client server OS
2. RAID array model, RAID level and stripe size used, caching mode of the array used to store HA images, Windows volume allocation unit size
3. NIC models, driver versions (driver manufacturer/release date) and NIC advanced settings (Jumbo Frames, iSCSI offload etc.)
4. Network scheme.
Also, there is a document for pre-production SAN benchmarking:
http://www.starwindsoftware.com/images/ ... _guide.pdf
And a list of advanced settings which should be implemented in order to gain higher performance in iSCSI environments:
http://www.starwindsoftware.com/forums/ ... t2293.html
http://www.starwindsoftware.com/forums/ ... t2296.html

Please provide me with the requested information and I will be able to further assist you with the troubleshooting.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
couch83
Posts: 3
Joined: Tue Feb 28, 2012 10:40 pm

Mon Mar 05, 2012 4:34 am

Thanks. Here are some more details:

1. Operating system on servers participating in iSCSI SAN and client server OS

The StarWind box is Server 2008 R2 Enterprise. Our ESX hosts are running vSphere 5 Standard. The switch is an HP ProCurve 2910 running jumbo frames.

2. RAID array model, RAID level and stripe size used, caching mode of the array used to store HA images, Windows volume allocation unit size.

Our storage box has an LSI MegaRAID SAS 9260-4i controller, RAID 50.
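Since the caching mode wasn't listed, one thing worth checking on that controller is the logical drive's cache policy: RAID 50 running in write-through mode (for example with no BBU, or with the BBU in a learn cycle) can write very slowly, which would fit a slow-write/fast-read pattern. A sketch using LSI's MegaCli tool, assuming it is installed on the StarWind box:

```
:: Show cache policy (WriteBack vs WriteThrough) and stripe size
:: for every logical drive on every adapter
MegaCli -LDInfo -Lall -aALL

:: Check battery/BBU status, since many controllers fall back to
:: write-through when the battery is absent or charging
MegaCli -AdpBbuCmd -GetBbuStatus -aALL
```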

3. NIC models, driver versions (driver manufacturer/release date) and NIC advanced settings (Jumbo Frames, iSCSI offload etc.)

Intel 82576 Gigabit Dual Port. Driver version 11.7.21.0, dated 12/1/2010. Jumbo frame setting on the card: 9014 bytes. There are no specific iSCSI settings in the advanced area on the card.
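On the Windows side, a quick way to confirm that the driver's 9014-byte setting (which includes the 14-byte Ethernet header) actually yields a working 9000-byte MTU end-to-end (a sketch; the IP below is a placeholder for an ESXi storage vmknic address):

```
:: Show the effective MTU per interface; the storage NIC
:: should report 9000
netsh interface ipv4 show subinterfaces

:: Send a don't-fragment ping with an 8972-byte payload
:: (8972 + 28 bytes of IP/ICMP headers = 9000)
ping -f -l 8972 192.168.20.11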

4. Network scheme.

The storage network is isolated on its own switch running jumbo frames.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Mon Mar 05, 2012 8:39 am

OK, but we would still appreciate it if you could provide us with a diagram (Visio). We need to know how everything is connected (switches, etc.).
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
couch83
Posts: 3
Joined: Tue Feb 28, 2012 10:40 pm

Mon Mar 05, 2012 6:59 pm

It is pretty basic right now.

In the end there will be more than one host, but right now I'm trying to get the performance right on one host and then replicate that across all hosts once I know it is working correctly.

I attached a Visio of what I have currently set up.
(Update) I tried to attach a Visio and also a PDF, but it wouldn't let me. I attached it to the case I have open.

Nothing overly complex.

2 NICs from each host into our back-end switch on VLAN 20
Currently 1 NIC from our StarWind box to our back-end switch on VLAN 20

vMotion is on its own VLAN and vSwitch, as are the virtual machines.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Fri Mar 09, 2012 10:34 am

OK, have you taken a look at the benchmarking guide and performed the tests? Can you share some results with us?
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com