anton (staff) wrote:Did you read this thread?
http://www.starwindsoftware.com/forums/ ... t2386.html
And did you follow the suggestions from here?
http://sustainablesharepoint.wordpress. ... ith-iscsi/
It references Hyper-V, but the TCP stack tweaks INSIDE guest Windows virtual machines are the same.
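For reference, a minimal sketch of the kind of guest-side TCP tweak such threads typically describe — disabling delayed ACKs on the iSCSI-facing NIC inside the Windows guest. The interface GUID is a placeholder you have to look up yourself, and the change takes effect only after a reboot:

    :: Run inside the guest Windows VM (elevated prompt).
    :: List the NIC interface GUIDs, then pick the iSCSI-facing one:
    reg query "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"
    :: Disable delayed ACKs on that interface (replace <Interface-GUID>):
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Interface-GUID>" /v TcpAckFrequency /t REG_DWORD /d 1 /f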
@ziz (staff) wrote:Hi,
What is the stripe size used for your RAIDs?
Did you apply the recommended settings for the ESX iSCSI initiator? Here is the link: http://www.starwindsoftware.com/forums/ ... t2296.html
Actually, the issue seems to be related to mismatched block sizes at the different levels of your SAN: you may use one stripe size on your RAIDs, another block size for your VMFS, Windows will use a third block size, and the cache uses 64 KB as its block size; as a result, you get low write performance.
The solution is to use the same block size at every level. On our side, we are working on a new build that will let you choose the block size for the cache; we will release it soon.
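In the meantime, a minimal sketch of lining one of those layers up by hand — reformatting the Windows data volume with a 64 KB NTFS allocation unit to match the 64 KB cache block. The drive letter is a placeholder, and formatting destroys the data on the volume:

    :: Inside the Windows VM: format the data volume with a 64 KB
    :: allocation unit so it matches the 64 KB cache block size.
    format E: /FS:NTFS /A:64K /Q

The RAID stripe size is set in the controller, and the VMFS block size is fixed when the datastore is created, so those two layers can only be matched by rebuilding them.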
anton (staff) wrote:1) You cannot change the cache line size yourself. You need to wait for our engineers to change StarWind's Cache Manager to support this, so what you did is only half of the work; let us do our part for now.
2) I've been talking about altering the Queue Depth. ESX usually has at least 6-8 I/Os pending, with a request size of 64 KB for reads and 32 KB for writes. Make ATTO show I/O pattern results more or less close to your hypervisor load.
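On the ESX side, a minimal sketch of raising the software iSCSI queue depth to match — this assumes a 4.x-era host with the iscsi_vmk software initiator, and 64 is only an illustrative value:

    # Run on the ESX host; raises the LUN queue depth of the
    # software iSCSI initiator (takes effect after a reboot).
    esxcfg-module -s iscsivmk_LunQDepth=64 iscsi_vmk

In ATTO itself, the matching knob is the Queue Depth drop-down: set 6-8 outstanding I/Os with 32-64 KB transfer sizes to approximate the hypervisor pattern above.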
CyberNBD wrote:Direct I/O means the test is performed without system caching. At first glance that should mean there's something wrong on the initiator system side, but:
- when I use the MS initiator, I don't have this performance problem
- using other iSCSI target software, I don't have the problem
- performing a clean OS install on a VM using the StarWind iSCSI target takes forever compared to a local disk or other software targets.
1) According to the StarWind software, cache mode "Normal" means no caching. Is this the default setting when I create a target?
2) No specific reason for this. I have multiple targets, so I divided the available system memory among them. I'm going to disable caching on the MS targets anyway, because there's no difference in performance, so I will test the ESXi target again using a 2048 MB cache size. (The StarWind box has 4 GB of memory for now.)
3) Alright, I hope this can be solved. Reading some other topics, it's not only me having these VMware performance issues.