ESXi iSCSI initiator WRITE speed
Posted: Mon Mar 28, 2011 8:39 pm
As stated here http://www.starwindsoftware.com/forums/ ... tml#p12990, I'm in the process of testing some scenarios for a new lab environment.
To test the maximum performance of the iSCSI setup, I started quite simply:
One Dell PowerEdge 2950 server (2x dual-core Xeon 3.0 GHz, 4 GB RAM, 2x 73 GB SAS 15K local, iSCSI offload) running W2k8 R2 + StarWind 5.6.
One Dell MD1000 disk enclosure connected to the above server via dual 4Gb SAS links to a Dell PERC 5 controller (128 MB cache with battery backup, write-back enabled). For testing purposes I created two arrays:
- 6-disk RAID 50 SAS 15K array
- 4-disk RAID 5 SATA 7.2K array
One Dell PowerEdge 2950 server (2x dual-core Xeon 3.0 GHz, 12 GB RAM, 2x 146 GB SAS 15K local, iSCSI offload) running VMware ESXi 4.1 Update 1.
Both servers are connected directly with two gigabit Ethernet links (iSCSI A and iSCSI B) using the onboard Broadcom gigabit NICs; this rules out switch misconfiguration problems and the like. Additional Intel Pro/1000 dual-port NICs are used for the management LAN. Jumbo frames are enabled on the StarWind side and on the VMware side (vSwitch and VMkernel interface).
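For reference, this is roughly how I verify the jumbo frame settings from the ESXi console (the StarWind IP below is just an example from my setup, adjust to yours):

# List vSwitches with their MTU (should show 9000)
esxcfg-vswitch -l

# List VMkernel NICs with their MTU (should also show 9000)
esxcfg-vmknic -l

# Ping the StarWind iSCSI interface with an 8972-byte payload and don't-fragment set
# (9000 minus 28 bytes of IP/ICMP headers); if this fails, jumbo frames are broken somewhere
vmkping -d -s 8972 10.0.0.1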
On the StarWind side I created 4 targets: Quorum (512 MB, SAS), Cluster Data 1 (50 GB, SAS), Cluster Data 2 (1 TB, SATA) and ESXi Data (75 GB, SAS). All targets are configured with cache mode Normal (no caching). I also made the recommended TCP/IP setting changes.
All SAS traffic is going through iSCSI A and all SATA traffic is going through iSCSI B. At the moment I'm only testing SAS.
On the VMware side I created 4 W2k8 R2 virtual machines (a DC, 2 file servers running in cluster mode and one standalone domain member). All OSes run off local disks.
The file servers connect to the MD1000 storage using the MS iSCSI initiator; the standalone server has a second 50 GB disk on the MD1000 presented through the ESXi iSCSI initiator.
The big issue I'm having is a significant performance difference between the W2k8 iSCSI initiator and the ESXi initiator.
The virtual machines using the W2k8 initiator perform nicely and consistently:
The virtual machines using the ESXi initiator are a different story: with direct I/O there are no problems (except that the 128K result is a bit strange), but with caching enabled the write speed is horrible:
To be sure, I also tested this on a Win7 VM, and at the moment I am installing W2k8 on a VM that uses the iSCSI storage as its OS disk. The install has been running for 25 minutes now and isn't even halfway. Using local storage (or an Openfiler iSCSI target), the complete install of the Enterprise edition finishes within 15 minutes.
Any clue what this could be? The iSCSI path is exactly the same in both cases. The only difference is that the W2k8 initiator connects to the vSwitch through a VM network, while the ESXi initiator connects to the vSwitch through a VMkernel network.
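For completeness, this is roughly how I check the software iSCSI adapter and its VMkernel binding from the ESXi console (vmhba33 and vmk1 are just the names on my host, yours may differ):

# Check whether the software iSCSI initiator is enabled
esxcfg-swiscsi -q

# Show which VMkernel NICs are bound to the software iSCSI adapter
esxcli swiscsi nic list -d vmhba33

# Bind the iSCSI VMkernel port to the adapter if it isn't bound yet
esxcli swiscsi nic add -n vmk1 -d vmhba33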