No RAID options shown in test installation

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

f.sennj
Posts: 5
Joined: Mon Jun 30, 2025 5:58 am

Mon Jun 30, 2025 6:05 am

Hello all, I am on Proxmox (a 5-node cluster with Ceph right now) and wanted to try StarWind. The CVM installation works without any problems. I then created 3 virtual disks in the CVM, which are recognized by StarWind, but when I want to set them up as RAID 5, nothing is shown. See screenshots. Any clue what that could be?
Attachments
thumbnail_image.png
thumbnail_image (1).png
thumbnail_image (2).png
f.sennj
Posts: 5
Joined: Mon Jun 30, 2025 5:58 am

Mon Jun 30, 2025 6:10 am

Version on all 3 appliances is:

1.6.580.7361

All I can do is add each disk one by one into a RAID 0.
yaroslav (staff)
Staff
Posts: 3606
Joined: Mon Nov 18, 2019 11:11 am

Mon Jun 30, 2025 7:12 am

Welcome to the StarWind Forum. Adding VM disks to a RAID set causes a lot of overhead, which is why software RAID is not available in the GUI. You can still build a RAID over them via the CLI, but it is not recommended.
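
For reference only, a minimal sketch of what that CLI route could look like inside the CVM, assuming the three virtual disks show up as /dev/sdb, /dev/sdc and /dev/sdd (placeholder names; confirm with lsblk first):

    # Confirm the device names of the virtual disks (sdb/sdc/sdd are assumed)
    lsblk
    # Build a software RAID 5 over the three disks with mdadm
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    # Watch the initial resync
    cat /proc/mdstat
    # Persist the array definition (config file path may differ per distribution)
    mdadm --detail --scan >> /etc/mdadm.conf

Keep in mind this layers software RAID on top of virtual disks, which is exactly the overhead mentioned above.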
f.sennj
Posts: 5
Joined: Mon Jun 30, 2025 5:58 am

Mon Jun 30, 2025 9:14 am

So, for the future, when we plan to have 8-12 disks, how do I do it then? ZFS locally in Proxmox and present one disk to the VM? Or passthrough to the CVM?
f.sennj
Posts: 5
Joined: Mon Jun 30, 2025 5:58 am

Mon Jun 30, 2025 9:21 am

Or should the concept be to just add them as single disks, so that the RAID logic is not done node-wide but, like in Ceph, distributed across all disks?
yaroslav (staff)
Staff
Posts: 3606
Joined: Mon Nov 18, 2019 11:11 am

Mon Jun 30, 2025 10:18 am

Or should the concept be to just add them as single disks, so that the RAID logic is not done node-wide but, like in Ceph, distributed across all disks?
You can just create an HA device on each individual disk.
So, for the future, when we plan to have 8-12 disks, how do I do it then? ZFS locally in Proxmox and present one disk to the VM? Or passthrough to the CVM?
I wouldn't go for any manipulation of StarWind HA devices at the hypervisor level because of the overhead. You can give it a try, but ZFS itself will kill the performance.
What you can do is pass the disks into the StarWind VSAN VM as pass-through or RDM and build the mdadm pool inside the VM.
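
To illustrate, a hedged sketch of the pass-through part on the Proxmox side, where the VM ID 100 and the by-id paths are placeholders for your CVM and disks:

    # On the Proxmox host: attach whole physical disks to the CVM (example values)
    qm set 100 -scsi2 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL_1
    qm set 100 -scsi3 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL_2
    # Inside the CVM the disks appear as regular block devices,
    # and the mdadm pool can then be created over them, for example:
    mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]

Passing the raw disks through avoids the extra virtualization layer that causes the overhead with virtual disks.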

P.S. It would be helpful if you could keep editing one reply rather than posting multiple. Thanks :)
f.sennj
Posts: 5
Joined: Mon Jun 30, 2025 5:58 am

Mon Jun 30, 2025 12:58 pm

Sorry, then I got a bit misled by the ZFS and GRAID options. I thought your logic was different from Ceph, but that means it is generally the very same: you distribute evenly across nodes, with all single (JBOD) disks on each node.

So I have done it like this for the test; is that correct?

Generally, does StarWind offer site awareness?
Attachments
Screenshot 2025-06-30 145246.png
yaroslav (staff)
Staff
Posts: 3606
Joined: Mon Nov 18, 2019 11:11 am

Mon Jun 30, 2025 1:09 pm

Sorry, then I got a bit misled by the ZFS and GRAID options.
If you have GRAID, the configuration process looks different: the hardware needs to be passed into the CVM and configured inside the CVM.
So I have done it like this for the test; is that correct?
The configuration you showed earlier allows creating one XFS partition for each device (see the sketch at the end of this reply).
Generally, does StarWind offer site awareness?
StarWind features active-active storage: once synchronized, the data on all replication partners is the same.
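
For the per-device layout from your screenshot, a rough sketch of what one XFS partition per disk amounts to (the CVM GUI handles this for you; the device names and mount points below are assumptions):

    # Each disk gets its own XFS filesystem and mount point (sdb/sdc/sdd assumed)
    mkfs.xfs /dev/sdb
    mkfs.xfs /dev/sdc
    mkfs.xfs /dev/sdd
    mkdir -p /mnt/disk1 /mnt/disk2 /mnt/disk3
    mount /dev/sdb /mnt/disk1
    mount /dev/sdc /mnt/disk2
    mount /dev/sdd /mnt/disk3

Each mount can then back its own HA device, with replication between nodes handled by StarWind rather than by a node-local RAID.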

P.S. I'd strongly recommend you trial the product with one of our techs https://www.starwindsoftware.com/free-s ... vsan-trial.