Guide for vsan free?

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

joels
Posts: 7
Joined: Wed Jul 26, 2023 7:44 pm

Thu Jul 27, 2023 12:01 am

Is there a more complete guide for vsan free? I've followed this guide: https://www.starwindsoftware.com/resour ... rver-2016/ and used the guide at https://www.starwindsoftware.com/techni ... n-free.pdf to complete the device creation but it seems this might be incomplete?
For example the connection steps show how to connect to the created iscsi targets including a witness target. Neither guide mentions creating a witness device so I'm not sure if the script CreateHA_2.ps1 was supposed to create a witness or if I was supposed to run it again. I successfully ran the CreateHA_2.ps1 script to create a new device but it did not create a witness. Is this normal? How do I create a 2 node vsan and run a Hyper-V cluster on it?
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Jul 27, 2023 12:14 am

Hi,

Please check the first guide. It discusses how to connect the storage. Please use CreateHA_2.ps1 to create one device at a time; in other words, create the CSV and the Witness by running the script once for each of them.
I'd like to note, though, that the witness (i.e., quorum disk) is a Failover Cluster entity, not a StarWind VSAN one (it is just a disk from StarWind's standpoint). You are welcome to use different Cluster Quorum settings.
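If it helps, here is a hypothetical sketch of running the script once per device. The variable names below are illustrative assumptions; check the header of your copy of CreateHA_2.ps1 for the actual parameter names.

```powershell
# Hypothetical sketch: run CreateHA_2.ps1 once per HA device.
# The variable names below are assumptions; edit the matching
# parameters at the top of your copy of the script before each run.

# Run 1 - the CSV device (in the script, e.g.):
#   $imageName = "csv1"      # device/image name (assumed variable name)
#   $size      = 102400      # device size (check the script's units)
.\CreateHA_2.ps1

# Run 2 - the witness device (edit the same parameters, then re-run):
#   $imageName = "witness"
#   $size      = 1024
.\CreateHA_2.ps1
```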

The first guide you shared covers the complete process of device creation and the basic steps to connect the storage; the only difference is that you need to use the script to create the devices. Please see this thread for more details: https://viewtopic.php?f=5&t=6159&hilit=tip+3&start=15
Also, please make sure that your system is configured according to best practices. https://www.starwindsoftware.com/best-p ... practices/ and https://www.starwindsoftware.com/system-requirements
joels
Posts: 7
Joined: Wed Jul 26, 2023 7:44 pm

Thu Jul 27, 2023 8:19 pm

Ok, I wasn't sure because most of your examples show an already created witness disk, so it looked like the script should have created it. I'll create it separately.

I have the device created and iscsi targets connected but I'm having trouble setting the recommended MPIO settings. I'm getting "The parameter is incorrect. The least queue depth policy compensates for uneven loads by distributing proportionately more I/O requests to lightly loaded processing paths."
I am attempting to set this on the local target. I get a similar error when I attempt to use the failover only policy with the message just giving me a similar description of the policy.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Jul 27, 2023 10:01 pm

Make sure MPIO is installed and configured on both hosts, and make sure to check the "Enable multi-path" checkbox while connecting the target.
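For reference, a rough sketch of checking the MPIO prerequisites from PowerShell, assuming Windows Server's built-in MPIO cmdlets (run on both hosts):

```powershell
# Sketch: verify/enable MPIO for iSCSI on Windows Server (run on BOTH hosts).

# Install the MPIO feature if it is missing (a reboot may be required):
Install-WindowsFeature -Name Multipath-IO

# Let the Microsoft DSM claim iSCSI-attached devices:
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Confirm iSCSI is in the automatic-claim list:
Get-MSDSMAutomaticClaimSettings
```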
joels
Posts: 7
Joined: Wed Jul 26, 2023 7:44 pm

Fri Jul 28, 2023 2:26 am

I have MPIO installed on both servers. I followed the directions for adding support for iSCSI devices on both servers, and I selected Enable multi-path when connecting to each target. Both servers are running Server 2022 and were created this morning for the sole purpose of testing StarWind VSAN, so nothing has been done to them other than what is in the 2-node Hyper-V guide.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Fri Jul 28, 2023 9:09 am

While connecting the target over iSCSI, please check the "Enable multi-path" checkbox.
Are you connecting targets in the GUI or PowerShell? For the latter, make sure you are using the right command and set the parameter correctly: https://support.purestorage.com/Solutio ... PIO_Policy. You can also try a different MPIO policy.
Last but not least, I'd like to mention that this does not look like a VSAN issue.
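If the GUI keeps rejecting the setting, one alternative is to set the policy from an elevated prompt with the built-in Windows tools. A sketch, assuming the default Microsoft DSM (the disk number is illustrative):

```powershell
# Sketch: set the Least Queue Depth policy with built-in Windows tools.
# Assumes the default Microsoft DSM; run elevated on each host.

# Option 1: set the DSM-wide default policy used for newly claimed disks:
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
Get-MSDSMGlobalDefaultLoadBalancePolicy    # verify the new default

# Option 2: set the policy per MPIO disk with mpclaim.
mpclaim.exe -s -d          # list MPIO disks and their current policies
mpclaim.exe -l -d 1 4      # disk number 1 is illustrative; 4 = Least Queue Depth
```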
joels
Posts: 7
Joined: Wed Jul 26, 2023 7:44 pm

Fri Jul 28, 2023 5:34 pm

I think the MPIO policy issue is a Windows bug. I've been able to reproduce it on 3 separate lab setups, and I'm seeing many threads on the internet with the same issue when setting the MPIO policy from the MPIO tab of the iSCSI target, but also reporting that setting the policy in Disk Management does work. So I've set the MPIO policy to Least Queue Depth from Disk Management on both servers.

So now I have the targets connected and the proper MPIO policy set, and I'm trying to test synchronization before I move on to building a cluster on the shared storage. I used Disk Management to bring the disk online, format it, and assign a drive letter. I refreshed Disk Management on the other server and the formatted volume automatically appeared. I added a test file on the new volume hoping it would appear on the same volume on the other server, but it did not. I waited several minutes, but there does not seem to be any synchronization happening. The StarWind console shows the device is healthy and synchronized. If I reboot the second server, the file does appear at some point during startup. Is this a good test of the synchronization process before I move on to the next step?
joels
Posts: 7
Joined: Wed Jul 26, 2023 7:44 pm

Fri Jul 28, 2023 10:13 pm

Any comment on why I am not seeing any files synchronizing when I look at it from both servers? Is this a valid way to test that synchronization is working?
I disconnected and reconnected all iscsi targets via the gui making sure to select enable multipath each time.
The starwind console shows the storage is working properly under health status and synchronization status is synchronized.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Fri Jul 28, 2023 10:22 pm

Fast synchronization happens when one of the servers is restarted or has its service stopped, because the service on that side does not accept I/O. This is normal behavior.

> I added a test file on the new volume hoping it would appear on the same volume on the other server but it did not.
> Any comment on why I am not seeing any files synchronizing when I look at it from both servers?

This is totally expected behavior because you are using an NTFS volume, and NTFS is not cluster-aware. See more on this in the StarWind FAQ: https://www.starwindsoftware.com/starwind-faq
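Once the cluster exists, the usual fix is to hand the disk to Failover Clustering as a Cluster Shared Volume rather than using a plain NTFS drive letter. A sketch with the standard Failover Clustering cmdlets (the disk name is illustrative):

```powershell
# Sketch: add the replicated disk to the cluster as a CSV so both nodes
# can use it safely. Assumes the Failover Clustering feature and the
# cluster itself already exist; the disk name below is illustrative.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```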
> Is this a valid way to test that synchronization is working?

Please complete the configuration according to the guide, spin up a test VM on the cluster storage, and stop the StarWindService on one node. If the VM keeps working and stays reachable, synchronization is working fine.
> Is this a good test of the synchronization process before I move on to the next step?

According to the guide, adding the storage to the cluster is the next step once the storage is replicated and connected.
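The failover test described above can be sketched as follows (run on one node only, then repeat from the other side after resynchronization):

```powershell
# Sketch of the failover test: stop the StarWind service on ONE node only,
# then verify the test VM running on the cluster storage stays up and reachable.
Stop-Service -Name StarWindService

# ...check the VM from the surviving node (ping it, open a session, etc.)...

# Bring the node back and let the device resynchronize in the StarWind
# console before repeating the test on the other node:
Start-Service -Name StarWindService
```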