Recommended settings for ESX iSCSI initiator.


Constantin (staff)

Fri Dec 10, 2010 3:47 pm

Configuration of ESX host
  • The ESX iSCSI initiator fully supports NIC binding and MPIO (multipathing), but VMware does not support using them in aggregation mode at the same time.
  • Please refer to the 2nd post in this topic if you decide to assign an active-passive NIC port configuration as described in this document.
  • If you use both NFS and iSCSI storage on one host, you should use NIC teaming for both and not use MPIO.
  • You can use the "IP Hash" load-balancing policy with NIC teaming, but it will limit you to 1 iSCSI connection per host. You can also take a look at VMware KB article 100737 about troubleshooting and proper configuration of the IP Hash load-balancing policy.
  • The default path failure detection interval is 300 seconds. VMware does not recommend changing it, but if you want to, you can go to the Advanced Settings of the ESX host and change the value of the Disk.PathEvalTime option to any value between 30 and 1500 seconds.
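As an alternative to the Advanced Settings dialog, on ESXi 5.x and later the same option can be read and changed from the host shell. A sketch (the 30-second value is only an example; test before using in production):

```shell
# Show the current path evaluation interval (default: 300 seconds)
esxcli system settings advanced list -o /Disk/PathEvalTime

# Lower it, e.g. to 30 seconds, so dead paths are detected sooner
esxcli system settings advanced set -o /Disk/PathEvalTime -i 30
```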
Configuration of Windows VMs
You can set the disk I/O timeout in the Windows registry under the following key (TimeOutValue is a REG_DWORD specified in seconds):

Code: Select all

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue

Default values are:
  • Without VMware Tools installed (non-cluster) - 10 seconds
  • Without VMware Tools installed (cluster node) - 20 seconds
  • With VMware Tools installed - 60 seconds
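For example, the timeout can be set from an elevated command prompt; a sketch using the 60-second VMware Tools default listed above (a reboot is required for the change to take effect):

```shell
rem Set the disk I/O timeout to 60 seconds (REG_DWORD, value in seconds)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f
```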
Configuration of Linux VMs
You can check the current disk I/O timeout with the following command:

Code: Select all

cat /sys/block/<disk>/device/timeout
Default values are:
  • Without VMware Tools installed = 60 seconds (for CentOS)
  • With VMware Tools installed = 180 seconds
Please note that 60 seconds is an adequate timeout for most cases.
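To change the timeout, you can write a new value to the same sysfs file. A sketch, where sda and 180 seconds are just example values; note that this change does not survive a reboot, so a udev rule or boot script is needed to make it permanent:

```shell
# Set the I/O timeout for /dev/sda to 180 seconds
# (takes effect immediately, but is lost on reboot)
echo 180 > /sys/block/sda/device/timeout
```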

Using recommended MPIO policy
To achieve higher performance we recommend using the Round Robin policy in the iSCSI initiator. By default ESX(i) uses the Fixed path policy. You can change it to Round Robin by choosing the "Manage Paths" option in the properties of the datastore or target (found in the "Datastores" or "Storage Devices" view).
Also, by default ESX(i) switches to another path every 1000 I/O operations. To improve bandwidth and performance you can lower this value. You can do it the following way:
Connect to the ESX(i) host via SSH or open the local console. On ESX(i) 4.x, type in:

Code: Select all

esxcli nmp roundrobin setconfig --device [UUID] --iops 1 --type iops

On ESXi 5.x and later, the equivalent command is:

Code: Select all

esxcli storage nmp psp roundrobin deviceconfig set -d [UUID] -t iops -I 1
You can find the UUID of the device under "Storage Adapters"; an IOPS value of 1 is the recommended option. You can check the current settings of your device with the following command:

Code: Select all

esxcli nmp roundrobin getconfig --device [UUID]
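If you have many devices, the setting can be applied in a loop. A sketch for ESXi 5.x and later, assuming your devices show up with an `eui.` identifier prefix (adjust the grep pattern, e.g. to `naa.`, to match your storage):

```shell
# Switch every matching device to Round Robin and set 1 IOPS per path
for dev in $(esxcli storage nmp device list | grep -o '^eui\.[0-9a-f]*'); do
  esxcli storage nmp device set -d "$dev" -P VMW_PSP_RR
  esxcli storage nmp psp roundrobin deviceconfig set -d "$dev" -t iops -I 1
done
```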
Please note that all settings provided above require testing to find the best configuration for your environment.
User avatar
Max (staff)
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Thu Mar 21, 2013 2:32 pm

If you decide to configure multipathing using port binding, here is the optimal path distribution logic for 2 vSwitches:

vSwitch1:
Active - path to SAN1
Passive - path to SAN2

vSwitch2:
Active - path to SAN2
Passive - path to SAN1

This way a SAN failure is handled more quickly.
If both the active and passive paths pointed to the same SAN, ESX would first wait for 2 Disk.PathEvalTime periods before switching over to the other SAN completely.
Max Kolomyeytsev
StarWind Software