Configure HA and enable Write-Back cache. Not sure why you're sticking with a single-node setup. What hypervisor do you use? It's my home setup. StarWind itself isn't virtualized, so if I go HA I will need a second StarWind box, which means another PE2950, MD1000 disk enclosure and 15 b...
Looks good! PS Turn StarWind caching ON :) Haha :) It seems that, in my current setup, write performance suffers a bit from enabling Write-Through cache. Reads improve a bit. So I'm not quite sure what I'm going to do in my final setup. Since my 24/7 running SAN isn't going to have HA, Write-Back i...
Looks good indeed. Modifying the TcpAckFrequency registry setting increases performance dramatically. Using StarWind 5.7 and the MS iSCSI initiator I'm getting writes of 3.5MB/sec @ 0.5k up to 220MB/sec @ 64k and higher, QD10. After my struggle getting the ESXi initiator to work it seems that, in the low...
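For reference, this is roughly the registry change I mean (applied on the Windows box, reboot afterwards). The interface GUID below is just a placeholder; it has to be the key of the NIC that actually carries the iSCSI traffic:

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{<iSCSI-NIC-GUID>}" /v TcpAckFrequency /t REG_DWORD /d 1 /f

Setting it to 1 makes Windows ACK every received segment instead of delaying the ACKs, which is what helps with the small block sizes.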
The system is using a PERC 5/E adapter, which has 256MB of DDR2 memory for its cache. It is enabled, but I don't know how much this 256MB will help. And yes, it's battery-backed and the write policy is write-back. How did you set the disk cache policy? Using a similar MD1000 + PERC 5/E setup I disabled it...
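In case it helps: on a Dell box with OpenManage installed, the disk cache policy can be changed per virtual disk from the CLI; something along these lines (controller/vdisk IDs are examples, check yours with the report command first):

    omreport storage vdisk controller=0
    omconfig storage vdisk action=changepolicy controller=0 vdisk=0 diskcachepolicy=disabled

This only touches the cache on the drives themselves; the PERC's own 256MB battery-backed write-back cache stays as configured.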
Yes. Both initiator and target sides please. I have the same problem with v5.7. Why do you say to put 2 instead of 1? 1 disables it, right? During some tests in the past I found that disabling Delayed ACK only on the initiator side (using the MS initiator) gave slightly better performance than disablin...
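To make sure you hit the right interface key on both machines (the StarWind target and the initiator box), you can look up the GUID of the iSCSI NIC first and check what is currently set; roughly like this on each Windows side:

    wmic nicconfig get description,settingid
    reg query "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{<iSCSI-NIC-GUID>}" /v TcpAckFrequency

If the value doesn't exist, the default delayed-ACK behaviour is in effect; setting it to 1 disables the delaying, and a reboot is needed either way.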
I used a completely separate subnet for each iSCSI channel, including separate vSwitches and VMkernel interfaces. Did you set up each of your iSCSI NICs in ESXi in separate virtual switches? When I had a single virtual switch and the standard 1000 IOPS I got about 6 MB/s. Going to separate virtual swi...
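If anyone prefers to script it instead of clicking through the vSphere Client, the per-path setup I mean looks roughly like this (ESXi 5.x esxcli syntax; names, IPs and the vmhba number of the software iSCSI adapter are just examples):

    esxcli network vswitch standard add -v vSwitch1
    esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
    esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-A
    esxcli network ip interface add -i vmk1 -p iSCSI-A
    esxcli network ip interface ipv4 set -i vmk1 -I 192.168.10.11 -N 255.255.255.224 -t static
    esxcli iscsi networkportal add -A vmhba33 -n vmk1

Then repeat with a second vSwitch/vmknic on the other subnet for the second path.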
Then I made the most important change. I went into the console of the ESXi servers and changed the number of IOPS per path from 1000 to 1. This made the writes 10x as fast. I didn't mention this when posting my results, but I used 3 IOPS per path for this. I will see if I can compare the results usi...
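For anyone who wants to try the same: on ESXi 5.x the change can be made with something like the command below (on 4.x the namespace is esxcli nmp roundrobin setconfig instead); the naa ID is just a placeholder for your own device, and the default value is 1000:

    esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxxxxxxxxxx --type=iops --iops=1

Change --iops=1 to 3 to test the value I mentioned above.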
Since I'm going to use some form of bundled 1Gbit connections in my final setup, I started to do some tests using multipathing. Same setup as before, just enabled the second onboard Broadcom NIC on each server and configured multipathing following best practices. Both iSCSI paths are on separate /27 subnets...
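For completeness, this is roughly how the device gets switched to Round Robin and how I check that both paths are actually being used (again ESXi 5.x syntax, the device ID is an example):

    esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR
    esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx
    esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx

The path list should show one path per subnet/vmknic if the binding is set up right.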
The ESXi setting does seem to have a slightly different meaning indeed. It has more to do with detecting network performance and congestion. The mechanisms seem to differ a little on both sides, which results in poor performance. I've had some background on this when doing some Cisco self-study and ba...
Well, it seems unchecking a little box solved everything :? I just found out ESXi also has a Delayed ACK setting, which is on by default. After disabling it and rebooting the ESXi box, speed went up; a clean 2k8 install on an iSCSI datastore completes within 14 minutes, like it should :) vSphere Client, Configu...
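In case it helps others: after disabling it and rebooting, you can check from the CLI that the software iSCSI adapter picked the change up; something like the command below should list DelayedAck among the adapter parameters (the vmhba number is an example and I'm going from memory on the exact namespace, so double-check it on your build):

    esxcli iscsi adapter param get -A vmhba33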
Same thing I discovered during my tests with other software. All Windows-based targets seem to have this ESXi performance issue. Linux performs great, and last week I did some tests using ZFS (OpenIndiana), which also doesn't have the issue. This still sucks... Good news (not for you and others unfort...
Below you can find a Task Manager screenshot of the full test. Since I have tested some other iSCSI targets, this one was taken after a clean installation of the StarWind server. I also switched from the demo to a licensed (thanks Anton) version. The only modification made on the StarWind side so far is the MTU...
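For reference, the MTU change has to match end to end. On the StarWind/Windows side I set jumbo frames in the NIC driver properties plus something like the first command below, and on ESXi both the vSwitch and the vmkernel port need it (interface names/numbers are examples):

    netsh interface ipv4 set subinterface "iSCSI NIC" mtu=9000 store=persistent
    esxcli network vswitch standard set -v vSwitch1 -m 9000
    esxcli network ip interface set -i vmk1 -m 9000

The physical switch ports have to allow jumbo frames as well, otherwise the oversized frames simply get dropped.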
I have tested using a VMDK datastore. 80GB iSCSI target formatted as one big VMFS datastore:
- 50GB allocated to the Windows VM as a second disk for testing with ATTO. Used the local datastore for the OS disk.
- 18GB allocated as an OS disk to another VM for the non-scientific "installation timing"...