How to set up VSAN with Hyper-V?

Software-based VM-centric and flash-friendly VM storage + free version


danswartz
Posts: 71
Joined: Fri May 03, 2019 7:21 pm

Thu Jan 04, 2024 8:36 pm

I've been reading various whitepapers and such here, and am a bit lost. I've moved off ESXi to Proxmox, and am wanting to move off that to Hyper-V. I have 3 hosts, each with 3 1TB NVMe drives, but want to go with a 2-host setup. I seem to be reading that the Linux VSA doesn't support running on Hyper-V? I saw an article on starwindsoftware.com that said "no longer supported for Hyper-V or KVM hypervisors"? If that's the case, I'd have to use the Windows install and use Storage Spaces? I plan to pull 2 of the 1TB NVMe drives from host 3, move 1 each to host 1 and host 2, and set up a 2x2 RAID10 on each host. Is that doable? Some clarification would be much appreciated! Thanks!

ETA: this is the article I mentioned:

https://www.starwindsoftware.com/resour ... h-hyper-v/
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Jan 04, 2024 9:03 pm

The plan you mentioned makes sense and is doable.
As a side note, I'd like to mention that StarWind CVM can be deployed on Hyper-V as described here:
CVM https://www.starwindsoftware.com/resour ... ng-web-ui/
VSA https://www.starwindsoftware.com/resour ... ws-server/
Speaking of the CVM, you can use the Management Console (https://starwind.com/tmplink/starwind-v8.exe) for monitoring only.

You can also install bare-metal StarWind VSAN from https://starwind.com/tmplink/starwind-v8.exe with Storage Spaces.
danswartz
Posts: 71
Joined: Fri May 03, 2019 7:21 pm

Thu Jan 04, 2024 9:05 pm

OK, so what does that red-line comment about not being supported on Hyper-V and KVM mean?
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Jan 04, 2024 10:27 pm

I think I misunderstood your initial message. I sincerely apologize for any confusion caused.
VSA is the deprecated product that was reshaped into StarWind VSAN for vSphere (which can also be deployed on Hyper-V in some scenarios) and, in its new generation, into StarWind CVM (which can also be deployed on Hyper-V).
If you need software RAID (mdadm), go the CVM path.
If you prefer Storage Spaces or hardware RAID in Windows, or want to configure an HA device per disk, go with the bare-metal installation.
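If you go the CVM path, the in-guest software RAID would look something like this; just a rough sketch, assuming four NVMe drives passed through to the CVM (device names are placeholders):

Code:
# sketch only: 4-disk RAID10 inside the CVM; /dev/sd[a-d] are placeholder device names
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
cat /proc/mdstat              # confirm the array is assembled and resyncing/clean
mkfs.xfs /dev/md0             # filesystem to hold the StarWind device images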
Hope that resolves the confusion.
danswartz
Posts: 71
Joined: Fri May 03, 2019 7:21 pm

Fri Jan 05, 2024 2:22 pm

OK, that makes sense. You said 'mdadm'; can I use ZFS instead?
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Fri Jan 05, 2024 5:06 pm

Sure.
But ZFS may cause a performance drop.
danswartz
Posts: 71
Joined: Fri May 03, 2019 7:21 pm

Tue Jan 09, 2024 7:50 pm

How much of a drop? I don't care about small decreases, but really want the security of bit-rot protection, as well as transparent lz4 compression (I've found virtual workloads to compress as high as 2X).
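For reference, the ZFS properties I have in mind are roughly these (a sketch; the pool name 'tank' is just a placeholder):

Code:
# placeholder pool name 'tank'; these are the properties I'd want
zfs set compression=lz4 tank
zfs set checksum=on tank          # the default anyway (fletcher4), which provides the bit-rot detection
zfs get compressratio tank        # check how well the VM data actually compresses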
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Tue Jan 09, 2024 8:33 pm

Got it. ZFS seems to fit well in your system then.
Speaking of compression and dedupe, we are planning to move forward with those features. Stay tuned!
danswartz
Posts: 71
Joined: Fri May 03, 2019 7:21 pm

Thu Jan 11, 2024 3:52 pm

Before committing to this, I wanted to test performance in a virtual appliance setup. My storage: 2 1TB Samsung 960 NVMe drives. I configured a ZFS appliance on a local datastore (OmniOS, since it does ZFS natively) and passed the 2 NVMe drives through to the VM. In the VM, once it was up and running, I configured a ZFS mirror using them. I set sync=disabled to see what the best case was. I set up a vswitch (not external) so Hyper-V could see the LUN I provided (backed by a file, not a zvol, again for best performance, since historically zvols seem slower than files on a dataset). I then spun up a WS2019 VM on that datastore, installed and ran CrystalDiskMark (not the best benchmark, I know), and got about 500 MB/sec read/write. I thought an NVMe mirror should do better than that, so I took down the ZFS appliance, created a Storage Spaces mirror on the same 2 NVMe drives, migrated the WS2019 VM there, and re-ran the tests: 3 GB/sec read/write! I understand there is SOME overhead in a virtual storage appliance, but this seems excessive, no? I tried jumbo frames and it didn't really help. I wonder if this is related to iSCSI? I've read in the StarWind forums about multiple iSCSI sessions or something like that. Any ideas would be appreciated...
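ETA: the appliance-side setup was along these lines (a sketch from memory; device names, sizes, and the backing-file path are examples, not the exact commands I ran):

Code:
# mirror the two passed-through NVMe devices (illumos device names are examples)
zpool create tank mirror c1t1d0 c1t2d0
zfs set sync=disabled tank             # best-case testing only, not safe for production data
zfs set compression=lz4 tank
# file-backed LUN rather than a zvol
zfs create tank/luns
mkfile 500g /tank/luns/lun0
stmfadm create-lu /tank/luns/lun0      # then a view + iSCSI target via stmfadm/itadm for Hyper-V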
Last edited by danswartz on Thu Jan 11, 2024 8:29 pm, edited 3 times in total.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Jan 11, 2024 4:39 pm

Thanks for your update. Yes, multiple iSCSI sessions are the key; for an NVMe disk, I'd be using 4-5 iSCSI sessions per link.
For improving large-block performance, align FirstBurstLength with MaxBurstLength (regedit -> HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\000X\Parameters).
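For example, something along these lines from an elevated PowerShell prompt (the 000X subkey number differs per system and the values below are only an example, so double-check the key before changing anything):

Code:
# example only: 000X varies per system; back up the key first
$key = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\000X\Parameters"
Set-ItemProperty -Path $key -Name FirstBurstLength -Value 262144 -Type DWord
Set-ItemProperty -Path $key -Name MaxBurstLength   -Value 262144 -Type DWord
# extra sessions: each additional Connect-IscsiTarget call with multipath enabled adds one more session (MPIO required)
Connect-IscsiTarget -NodeAddress $targetIqn -IsMultipathEnabled $true   # $targetIqn = your target's IQN

The initiator typically needs a restart (or a reboot) to pick up the registry change.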
Also, did you try to run FIO on the ZFS pool inside the VM?
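Something like this, run inside the appliance against the dataset itself, would show what the pool can do with iSCSI and the network out of the picture (a sketch; the path, size, and job count are just examples):

Code:
# sequential read/write against the ZFS dataset; /tank/test and the sizes are placeholders
fio --name=seqrw --directory=/tank/test --rw=rw --bs=1M --size=10G \
    --ioengine=psync --numjobs=4 --runtime=60 --time_based --group_reporting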
danswartz
Posts: 71
Joined: Fri May 03, 2019 7:21 pm

Thu Jan 11, 2024 7:45 pm

Ah, thanks! I'll give that a shot!
danswartz
Posts: 71
Joined: Fri May 03, 2019 7:21 pm

Thu Jan 18, 2024 8:44 pm

yaroslav (staff) wrote:The plan you mentioned makes sense and is doable.
As a side note, I'd like to mention that StarWind CVM can be deployed on Hyper-V as described here:
CVM https://www.starwindsoftware.com/resour ... ng-web-ui/
VSA https://www.starwindsoftware.com/resour ... ws-server/
Speaking of the CVM, you can use the Management Console (https://starwind.com/tmplink/starwind-v8.exe) for monitoring only.

You can also install bare-metal StarWind VSAN from https://starwind.com/tmplink/starwind-v8.exe with Storage Spaces.
I have to admit, the bare-metal route is tempting. I intended to use 4 1TB NVMe drives on each side of the mirror, in a 2x2 RAID10, but am seeing conflicting stories online about whether Storage Spaces supports RAID10. If using SS, can I assume VSAN just sees one large 'disk' on each side of the mirror? I'm used to ZFS, where I can set compression, checksum, etc., so I'm still learning the Hyper-V stuff...
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Jan 18, 2024 9:27 pm

If using SS, can I assume VSAN just sees one large 'disk' on each side of the mirror?
VSAN sees the storage underneath as one "array" to put its devices on. It can be a Storage Spaces pool, or it can be a ZFS pool.
In Storage Spaces you can do mirrored volumes, I believe. All I want to mention is that Storage Spaces may be tricky, especially when it comes to Windows Server updates.
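If you do test it, a two-column mirror across the four disks is effectively the RAID10-style layout you described. A rough sketch (pool and disk names are examples, not a verified recipe):

Code:
# sketch: pool the eligible disks, then carve a two-column mirror (RAID10-like) virtual disk
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "NVMePool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "NVMePool" -FriendlyName "Mirror10" `
    -ResiliencySettingName Mirror -NumberOfColumns 2 -UseMaximumSize -ProvisioningType Fixed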

Let me know if there's anything I can help you with!

P.S. Do not ever run cluster validation against storage backed by Storage Spaces (that kills the array).
danswartz
Posts: 71
Joined: Fri May 03, 2019 7:21 pm

Thu Jan 18, 2024 9:56 pm

yaroslav (staff) wrote:
If using SS, can I assume VSAN just sees one large 'disk' on each side of the mirror?
VSAN sees the storage underneath as one "array" to put its devices on. It can be a Storage Spaces pool, or it can be a ZFS pool.
In Storage Spaces you can do mirrored volumes, I believe. All I want to mention is that Storage Spaces may be tricky, especially when it comes to Windows Server updates.

Let me know if there's anything I can help you with!

P.S. Do not ever run cluster validation against storage backed by Storage Spaces (that kills the array).
Thanks, I'll stick with ZFS for now. The delay getting this going was due to having to migrate things from Proxmox...
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Jan 18, 2024 10:26 pm

That's quite a project if it involves cross-hypervisor migration.
Good luck! Keep me posted.
Have a nice weekend.