Klas wrote:
Hello!
AKnowles wrote:
Just an FYI regarding Storage Spaces... I've done some testing using Storage Spaces vs. the standard (Computer Management) mirrored/striped options. What I found was that the native method was always faster than Storage Spaces. This is without using flash-based caching. I personally decided that Storage Spaces isn't quite there yet and doesn't come close to a hardware RAID controller anyway. Nor does ReFS do much for you, as it doesn't support NFS shares, among other things. Maybe the next generation of both Storage Spaces and ReFS will do better, but as of 2012 R2 I'm not seeing much use for them.
I think you should reconsider Storage Spaces. While waiting for StarWind v8, we have a new Hyper-V cluster running Storage Spaces. The setup is 8 x 900 GB SAS disks and one Intel 910 PCIe SSD as cache and the Tier 0 layer. We created one large LUN with a 10 GB write cache, and the performance is "like" PCIe SSD across all the disks.
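A tiered setup like the one described can be sketched in PowerShell on 2012 R2. This is only an illustration, not our exact commands: the pool and tier names, tier sizes, and resiliency setting are placeholders, and it assumes the SSD and SAS disks are already visible and poolable.

```powershell
# Hypothetical sketch of a tiered Storage Spaces layout (2012 R2).
# Friendly names and sizes are placeholders; adjust to your hardware.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Define the SSD (Tier 0) and SAS/HDD (Tier 1) tiers.
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "HDDTier" -MediaType HDD

# One large tiered virtual disk with a 10 GB write-back cache,
# so writes land on the SSD first.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" `
    -StorageTiers $ssdTier, $hddTier `
    -StorageTierSizes 700GB, 6TB `
    -ResiliencySettingName Mirror `
    -WriteCacheSize 10GB
```

The `-WriteCacheSize` parameter is what gives the "writes go to the SSD" behavior described below; hot data is promoted to the SSD tier by the tiering optimization job.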
Writes go to the SSD, and reads are usually served from Tier 0; if not, Tier 1 (the SAS disks) is mostly idling and therefore responsive. If you create the Hyper-V VHDX files with a logical block size of 4K, performance in the VM is almost the same as on the LUN, and LUN performance is almost that of the Intel PCIe SSD.
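The 4K logical sector size mentioned above is set at VHDX creation time. A minimal sketch, assuming the Hyper-V PowerShell module is available (the path and size here are made-up examples):

```powershell
# Create a dynamic VHDX whose logical sector size matches the 4K
# sectors of the underlying storage; path and size are placeholders.
New-VHD -Path "C:\ClusterStorage\Volume1\vm1.vhdx" `
    -SizeBytes 100GB -Dynamic `
    -LogicalSectorSizeBytes 4096 `
    -PhysicalSectorSizeBytes 4096
```

You can't change the logical sector size on an existing VHDX, so this has to be chosen when the disk is created.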
We have ~80 concurrent active users (825 user accounts) and are planning to run file server, Exchange, web server, RDS, DBs, AD, etc. on one Hyper-V cluster. Fast, resilient, and a low footprint.