Well, ZFS is a non-starter. If this were not HA, I'd say screw it and set sync=disabled, since a crash would take out the client as well as the storage (just like pulling the power cord). Unfortunately, in the HA storage scenario, sync=disabled causes ZFS to lie to the iSCSI daemon, so the remote client believes the write reached stable storage. If the Hyper-V host (or the VSA!) crashes after the ACK to the iSCSI client but before the 5-second flush to the pool, the StarWind code will believe that block X on host A is up to date, when in fact it never got written.
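To make that window concrete, here's a toy Python sketch of the bookkeeping going wrong; the class and field names are made up for illustration and have nothing to do with StarWind's actual code:

```python
# Toy model of the failure mode: node A ACKs a write to the initiator before
# the data is on stable storage (effectively what sync=disabled does), then
# crashes before the periodic flush.  The replication layer's view of "which
# blocks are current on A" no longer matches what actually survived.

class Node:
    def __init__(self, name):
        self.name = name
        self.ram_cache = {}       # writes ACKed but not yet flushed
        self.stable_storage = {}  # what actually survives a crash

    def write(self, block, data, sync):
        if sync:
            self.stable_storage[block] = data   # flushed before the ACK
        else:
            self.ram_cache[block] = data        # ACK now, flush deferred
        return "ACK"                            # initiator now thinks it's durable

    def crash(self):
        self.ram_cache.clear()                  # unflushed writes are gone

node_a = Node("A")
replica_map = {}                  # replication layer's record of up-to-date blocks

if node_a.write("X", b"new data", sync=False) == "ACK":
    replica_map["X"] = "current on A"           # bookkeeping updated on the ACK

node_a.crash()

print(replica_map.get("X"))            # 'current on A'
print(node_a.stable_storage.get("X"))  # None -- the data never hit disk
```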

I ran experiments both ways (using an OmniOS VSA) and got approximately this:
sync=disabled:            read = 1000 MB/sec, write = 500 MB/sec
sync=standard (default):  read = 1000 MB/sec, write = 100 MB/sec  <=== no better than a consumer-grade SATA spinner!
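If anyone wants to see a similar gap from a client, a minimal sketch is to write the same amount of data with and without an fsync() after every write, which is roughly what sync=standard vs sync=disabled looks like from the initiator's side. The path, block size, and total size below are placeholders:

```python
# Write total_mb of data in block_kb chunks, optionally forcing each write to
# stable storage, and report MB/s.  Point the path at the volume you care about.
import os, time

def bench(path, sync_each_write, total_mb=256, block_kb=64):
    block = b"\0" * (block_kb * 1024)
    writes = (total_mb * 1024) // block_kb
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.time()
    for _ in range(writes):
        os.write(fd, block)
        if sync_each_write:
            os.fsync(fd)      # force to stable storage, like a sync write
    os.fsync(fd)              # flush whatever is left before stopping the clock
    os.close(fd)
    return total_mb / (time.time() - start)

print("buffered writes: %.0f MB/s" % bench("/tank/test/bench.dat", sync_each_write=False))
print("sync writes:     %.0f MB/s" % bench("/tank/test/bench.dat", sync_each_write=True))
```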
My next thought was Storage Spaces "RAID10", but it isn't really RAID10. The only guarantee you get is that every chunk will be replicated on some other disk. With real RAID10, a second disk failure is survivable unless it happens to be the partner of the already-failed disk; with Storage Spaces, a second failure means you are almost certain to lose some data. So, since I have only about 250GB of VM disks, I intended to create two Storage Spaces mirrors: one for local storage (which will house a domain controller), and one for StarWind cluster storage.
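A quick back-of-the-envelope on why the second failure is so much worse with chunk-level mirroring, assuming uniformly random chunk placement and made-up disk/chunk counts:

```python
# With chunk-level mirroring, how likely is it that a second failed disk shares
# at least one chunk with the first?  Disk and chunk counts are illustration
# values only; placement is assumed uniformly random across disk pairs.
from math import comb

n_disks = 8
n_chunks = 1000          # e.g. 250GB of data in 256MB chunks is ~1000 chunks

pairs = comb(n_disks, 2)
# Probability a *specific* pair of disks shares no chunk at all:
p_pair_disjoint = (1 - 1 / pairs) ** n_chunks

print(f"real RAID10: second failure fatal with probability "
      f"{1 / (n_disks - 1):.0%} (only the dead disk's partner matters)")
print(f"chunk mirror: chance the second failure shares NO chunk with the "
      f"first: {p_pair_disjoint:.1e} -> data loss is essentially certain")
```

In other words, once the chunks cover basically every pair of disks, any second failure hits some chunk whose only surviving copy was on the disk that already died.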