vSAN on a single bare metal server?

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Anatoly (staff), Max (staff)

LT286
Posts: 1
Joined: Fri Feb 16, 2024 6:50 pm

Fri Feb 16, 2024 7:10 pm

Hi guys,

I am new to this, so please pardon me if my questions look stupid or naïve.
I have one bare-metal Dell R750 server that I have to set up to mimic a data center with a server node, an HPC node, and storage nodes. I use OL8 as the KVM hypervisor, which runs two OL8 guest VMs for the server and HPC roles, and I was thinking of using StarWind vSAN to manage two RAID arrays that should both be accessible to both VMs. These RAID arrays are physically part of the bare-metal machine.
All of the setup documents available on the StarWind website speak entirely about high availability, complex networking, etc. None of that applies in my case. Does anyone know if the StarWind vSAN free appliance can be used in my single-rack-server setup? Where can I read about such a setup, if it is possible?
Thank you!
yaroslav (staff)
Staff
Posts: 3579
Joined: Mon Nov 18, 2019 11:11 am

Fri Feb 16, 2024 7:31 pm

Welcome to StarWind Forum.
Yes, you can use StarWind VSAN as an iSCSI-connected (soon NVMe-oF-connected) storage array!
Feel free to contact me if you have more questions.
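For a single-server lab like the one described above, the guest VMs would act as plain iSCSI initiators against the StarWind target. A minimal sketch of the initiator side on an OL8 guest might look like this; the portal IP and target IQN below are hypothetical placeholders, and the actual IQN depends on how the target was named in StarWind:

```shell
# On each OL8 guest VM (initiator side). The portal IP and target IQN
# are placeholders; substitute the values from your StarWind configuration.
sudo dnf install -y iscsi-initiator-utils

# Discover targets exposed by the StarWind vSAN portal
sudo iscsiadm -m discovery -t sendtargets -p 192.168.10.10:3260

# Log in to a discovered target (IQN is hypothetical)
sudo iscsiadm -m node -T iqn.2008-08.com.starwindsoftware:vsan-target1 \
  -p 192.168.10.10:3260 --login

# The LUN then appears as a new block device; verify with:
lsblk
```

Since both VMs attach the same LUN, a cluster-aware filesystem or cluster LVM would be needed on top if both are to write to it concurrently.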
vertigomike
Posts: 7
Joined: Tue Feb 25, 2025 10:57 pm

Mon Mar 10, 2025 1:46 am

I'm in the same boat here. I'm trying to connect a 740xd that I've installed StarWind on. That portion is fine. On the vSAN box I've got a single NIC for management and two interfaces in two separate subnets that I have set up for data. I'm trying to ensure the actual iSCSI traffic goes over the data interfaces. However, when I initiate iSCSI discovery against the data interfaces it finds nothing, but it finds the target if I point at the management interface. I'm trying to figure out what I'm missing or not understanding here.
yaroslav (staff)
Staff
Posts: 3579
Joined: Mon Nov 18, 2019 11:11 am

Mon Mar 10, 2025 5:15 am

Check the firewall rules and access rights. If the client host is ESXi, check whether any port binding is configured.
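If the StarWind box is a Linux appliance running firewalld, one way to check whether the iSCSI port is actually open on the data interfaces is a sketch like this (zone names are assumptions; your interfaces may sit in a different zone):

```shell
# On the StarWind appliance: see which zones the data interfaces belong to
sudo firewall-cmd --get-active-zones

# Check whether the iSCSI port (3260/tcp) is allowed in that zone
sudo firewall-cmd --zone=public --list-ports

# If it is missing, allow it and reload (zone name is a placeholder)
sudo firewall-cmd --zone=public --add-port=3260/tcp --permanent
sudo firewall-cmd --reload
```

A quick connectivity test from the client (`nc -zv <data-IP> 3260`) also distinguishes a firewall problem from a routing problem.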
vertigomike
Posts: 7
Joined: Tue Feb 25, 2025 10:57 pm

Mon Mar 10, 2025 3:10 pm

It's a Proxmox cluster. I didn't have any luck pointing it at the data interface IPs, but it would connect via the management interfaces. I assume it's using those, since the transfer is super slow. That's the difference between a single 1G interface and two 40G interfaces.
vertigomike
Posts: 7
Joined: Tue Feb 25, 2025 10:57 pm

Mon Mar 10, 2025 3:54 pm

I figured out my issue. Apparently, when I added the two new vSAN VLANs, I neglected to add them to the OSPF config on the switch, so the interfaces weren't reachable from the PVE servers. Once I updated it, it looks like it's going to work now. I just need to get rid of the old target address.
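Getting rid of a stale target address on the initiator side is usually a matter of deleting the old discovery and node records. A sketch, assuming open-iscsi on the PVE nodes (the old portal IP below is a placeholder):

```shell
# On each PVE node: list what the initiator currently remembers
sudo iscsiadm -m discoverydb -t sendtargets -P 1
sudo iscsiadm -m node

# Delete the stale discovery record and its node entries
# (192.168.1.10 stands in for the old management-interface portal)
sudo iscsiadm -m discoverydb -t sendtargets -p 192.168.1.10:3260 -o delete
sudo iscsiadm -m node -p 192.168.1.10:3260 -o delete
```

After that, re-running discovery against the data-interface portals leaves only the intended paths in the node database.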
yaroslav (staff)
Staff
Posts: 3579
Joined: Mon Nov 18, 2019 11:11 am

Mon Mar 10, 2025 4:05 pm

Hi,

I am very glad to read that you've fixed it :)
Good luck with your project.
vertigomike
Posts: 7
Joined: Tue Feb 25, 2025 10:57 pm

Mon Jun 16, 2025 8:11 pm

Back to working on this setup in my lab. Performance has been terrible. I ended up enabling jumbo frames on my Nexus switch and updated the QoS settings on it (this specific model doesn't support per-interface MTU settings), and I changed the MTU on both StarWind data interfaces to 9000 and on my Proxmox SAN interfaces on this network to 9000 as well. However, speeds still appear slow. I do see jumbo frames on the switch now, but I kicked off a clone of one of the VMs whose template is on the StarWind storage. The VM is 300GB. So far, in 2h 48m, it has managed to clone 210GB of the image. This is nuts. On local storage I can usually do this in 20-25 minutes on average.
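One quick sanity check when MTU has been changed in several places is to verify that jumbo frames actually pass end-to-end without fragmentation. A sketch from a PVE node (the target IP and interface name are placeholders):

```shell
# 8972 = 9000 MTU minus 20 bytes IP header minus 8 bytes ICMP header.
# -M do forbids fragmentation, so this only succeeds if every hop
# on the path supports the full 9000-byte MTU.
ping -M do -s 8972 -c 4 192.168.10.10

# If ping reports "Message too long", some hop still has a smaller MTU.
# Also confirm the local interface setting (interface name is a placeholder):
ip link show dev ens1f0    # expect "mtu 9000" in the output
```

A mismatch anywhere on the path (bridge, bond, VLAN interface, or switch port) silently forces fragmentation or drops and can produce exactly this kind of throughput collapse.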
yaroslav (staff)
Staff
Posts: 3579
Joined: Mon Nov 18, 2019 11:11 am

Mon Jun 16, 2025 9:00 pm

What does the storage performance look like on the storage servers?
vertigomike
Posts: 7
Joined: Tue Feb 25, 2025 10:57 pm

Wed Jun 18, 2025 3:40 pm

Looking at the dashboard, it shows IOPS of 5.4 read / 0.2 write, disk throughput of 889 KB read / 782 KB write, and network throughput of 73 Kbit receive / 8 Mbit send.
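Those numbers look far below what the underlying RAID should deliver, so it may be worth baselining the raw disk on the StarWind server itself, independent of iSCSI and the network. A sketch using fio (the mount path and sizes are placeholders; run against a scratch file, never a live LUN device):

```shell
# On the StarWind storage server: sequential-read baseline on the backing
# storage. /mnt/raid/testfile is a placeholder path on the RAID volume.
sudo dnf install -y fio
fio --name=seqread --filename=/mnt/raid/testfile --size=4G \
    --rw=read --bs=1M --ioengine=libaio --direct=1 --iodepth=16 \
    --runtime=60 --time_based --group_reporting
```

If the local fio numbers are healthy but the dashboard numbers stay this low during a clone, the bottleneck is more likely in the iSCSI/network path than in the disks themselves.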