New Setup Advice and Feedback

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

frankyyy02
Posts: 10
Joined: Wed Aug 14, 2019 9:14 pm

Wed Aug 14, 2019 9:29 pm

Hi all

So, I am setting up a small VMware environment for internal services and lab work. Having looked at VMware vSAN and StarWind, it appears that for a 2-node cluster, StarWind makes the most sense at this stage. Working through the design on paper, I think I am on the right track, but I want input from those of you with more experience in the product than I :)

We have two Supermicro hosts, each with two 1Gb NICs, two 10Gb NICs, and a single 2TB NVMe drive. The hosts are running ESXi 6.7U2, and vCSA is up and running with the hosts added to a cluster.
Now onto StarWind. If my understanding is correct:
- I will need to set up two Windows Server 2016 VMs on the 2TB NVMe disks and install the StarWind application on each
- Run a link between each node's 10Gb NICs, with one NIC in a vSwitch for iSCSI/heartbeat and the second NIC in a vSwitch for Sync. There will be no redundancy on vSwitch NICs in this case
- Configure each vNIC within the Windows Servers running StarWind with IPs in separate networks
- Configure as per the guide / resources / jumbo frames etc.
- Set up the storage file / iSCSI to present to the hosts
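As an aside, the vSwitch and jumbo-frame steps above can be sketched with esxcli. This is only a sketch: vSwitch-SYNC and vmnic3 are placeholder names, not anything the guide prescribes, so substitute your own.

```shell
# Dedicated vSwitch for Sync on one of the direct-attached 10Gb NICs
# (vSwitch-SYNC and vmnic3 are placeholder names)
esxcli network vswitch standard add --vswitch-name=vSwitch-SYNC
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-SYNC --uplink-name=vmnic3
# Enable jumbo frames on the vSwitch
esxcli network vswitch standard set --vswitch-name=vSwitch-SYNC --mtu=9000
```

The same --mtu=9000 setting also needs to be applied to any VMkernel ports used for iSCSI, and the iSCSI vSwitch is configured the same way with its own uplink.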

Some queries:

- On the hosts, I will need to add an iSCSI LUN pointing back to the VMs already running on the 2TB store, right? So in essence, the two StarWind hosts cannot be on the 'shared storage'

- For simplicity in our use case, I am thinking of setting up a single 1TB LUN for all VMs, whereas the guides I see online appear to set up multiple LUNs, or one per VM. Is one large LUN an issue?

- Once the LUN is created and presented to the ESXi hosts, all other VMs, including vCenter, can be moved to this store, right?

- In regard to VMware DRS and HA: should these be enabled, and is it safe to do so? Ultimately, if my understanding above is correct, the two StarWind Windows Servers cannot be vMotioned as they are not on shared storage (at least not without vMotion across local datastores). How is this normally handled?

- If I bring a host down for maintenance, do I move the StarWind Windows Servers across to the other host (vMotion) and shift them back once out of maintenance mode?

- How are StarWind upgrades handled? Do we move all guests off the host running the instance of StarWind being upgraded, run the upgrade, move the guests across, and upgrade the other? If there is a resource for this, apologies, I might have missed it in all my late-night blurred reading :)

- How do I monitor the Starwind cluster in the free version?


Please let me know if I have any of this incorrect or am missing something. Looking forward to finally trying the product out.

Much appreciated.
danswartz
Posts: 71
Joined: Fri May 03, 2019 7:21 pm

Thu Aug 15, 2019 2:34 pm

- On the hosts, I will need to add an iSCSI LUN pointing back to the VMs already running on the 2TB store, right? So in essence, the two StarWind hosts cannot be on the 'shared storage'

- For simplicity in our use case, I am thinking of setting up a single 1TB LUN for all VMs, whereas the guides I see online appear to set up multiple LUNs, or one per VM. Is one large LUN an issue?

* It shouldn't be. That is what I am doing. A single 1TB datastore.

- Once the LUN is created and presented to the ESXi hosts all other VMs including vCenter can be moved to this store right?

* Correct.

- In regard to VMware DRS and HA: should these be enabled, and is it safe to do so? Ultimately, if my understanding above is correct, the two StarWind Windows Servers cannot be vMotioned as they are not on shared storage (at least not without vMotion across local datastores). How is this normally handled?

* I am doing this with no issues. You are correct - the storage appliances need to be on local storage on each host.

- If I bring a host down for maintenance, do I move the StarWind Windows Servers across to the other host (vMotion) and shift them back once out of maintenance mode?

* No. Because they need to be on local storage, you can't really move them.

- How are StarWind upgrades handled? Do we move all guests off the host running the instance of StarWind being upgraded, run the upgrade, move the guests across, and upgrade the other? If there is a resource for this, apologies, I might have missed it in all my late-night blurred reading :)

* If you are upgrading the software on a StarWind VSA, just do the upgrade and reboot. The still-running VSA should handle any requests. For a vSphere maintenance window: shut down the VSA, put the host in maintenance mode, do the upgrade, reboot, take the host out of maintenance mode, and boot the VSA.

- How do I monitor the Starwind cluster in the free version?

* I haven't really dealt with this yet, since I have a home lab. You could probably run some kind of Nagios script to check iSCSI on each VSA?
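For example, a bare-bones check along those lines could just test that the iSCSI port answers. This is only a sketch: 3260 is the default iSCSI port, and the function name is made up here.

```shell
# Minimal Nagios-style probe: is the VSA accepting TCP connections on the
# iSCSI port? Exit codes follow the Nagios convention (0=OK, 2=CRITICAL).
check_iscsi_port() {
  local host="$1" port="${2:-3260}"
  if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OK: ${host}:${port} is accepting connections"
    return 0
  else
    echo "CRITICAL: ${host}:${port} is not reachable"
    return 2
  fi
}
```

You would run it from the monitoring box against each VSA's iSCSI IP, e.g. check_iscsi_port 172.16.10.10 (the IP being a placeholder).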

P.S. Make sure you install and use the PowerShell script StarWind provides on each Windows VSA, so that each host rescans the iSCSI HBAs etc. on various events.
frankyyy02
Posts: 10
Joined: Wed Aug 14, 2019 9:14 pm

Thu Aug 15, 2019 9:18 pm

Awesome, thanks for the replies and insight!!

I'll keep testing it today, but all looks very positive so far!
Michael (staff)
Staff
Posts: 317
Joined: Thu Jul 21, 2016 10:16 am

Fri Aug 16, 2019 6:32 pm

Thank you for your interest in StarWind products.
You can follow this guide to build the cluster: https://www.starwindsoftware.com/resour ... sphere-6-5
Also, please check the StarWind Knowledge Base for some useful articles (restart, update, settings, etc.): https://knowledgebase.starwindsoftware.com/
It is possible to configure mail notifications from the StarWind service for HA or other events.
Just open the Configuration tab for each StarWind server, go to Event Notifications, and add a Send E-Mail action there. This configuration should be done on all hosts.
Please keep us updated about results.
frankyyy02
Posts: 10
Joined: Wed Aug 14, 2019 9:14 pm

Fri Aug 16, 2019 9:31 pm

Hi Michael

Thanks for taking the time to respond.
I got up and running with a POC yesterday using that guide. All in all, very straightforward.

Regarding the events and notifications: as I am on the free version for now, I assume the Configuration tab option you mention is not available. Can this be configured with PowerShell?

Also, slightly off topic and unrelated to StarWind directly: my hosts' 10Gb NICs are cabled directly (no switch), with the Sync and iSCSI vSwitches each having a VMXNET vNIC assigned to the StarWind hosts. Running iperf tests between hosts over the 10Gb link, I note that Windows Server 2016 (fresh install, updated, and VMware Tools installed) is capped at approx. 6Gbps, whereas the same test in Linux sees results of almost 10Gbps. Running the same tests with guests on a single host (traffic only traversing the vSwitch, not the physical links), I see Windows with the same result and Linux at approx. 15Gbps. I also ran iperf on the Windows Server 2016 host targeting localhost (with the iperf server in another console window) and saw a cap of approx. 6Gbps again.

This makes me think it's an iperf-on-Windows issue, but I thought I'd ask here given others are likely to have run similar tests during their deployment testing.


Thanks
batiati
Posts: 29
Joined: Mon Apr 08, 2019 1:16 pm
Location: Brazil
Contact:

Sat Aug 17, 2019 4:27 pm

I note that Windows Server 2016 (fresh install, updated, and VMware Tools installed) is capped at approx. 6Gbps, whereas the same test in Linux sees results of almost 10Gbps
Try increasing the number of threads in iperf using the -P parameter; you should achieve a better result on Windows.

Code:

.\iperf3.exe -c 10.10.1.1 -P 4
frankyyy02
Posts: 10
Joined: Wed Aug 14, 2019 9:14 pm

Sun Aug 18, 2019 11:19 am

batiati wrote: Try increasing the number of threads in iperf using the -P parameter; you should achieve a better result on Windows.
Yep, that definitely showed a difference, with a total of ~9.5Gbps with 4 threads.

I now have a working setup in testing, with two hosts and direct network cabling between the 10Gb ports for iSCSI and Sync (a separate port for each). I have two Windows Server 2016 boxes running, one on each host, with the StarWind software installed. I created a 500GB eager-zeroed virtual disk as a second drive for each of the servers. This drive is where my ~480GB StarWind image lives, created and synced (approx. 28 minutes to complete the sync on creation).

I am now running some tests to compare read/write performance for VMs hosted on the StarWind iSCSI share vs. on the Windows Server 2016 VSAN host itself. I appear to be seeing quite a bit of difference; I would certainly expect some based on the forum posts I am reading, but I am not sure whether what I am seeing is within expectations or whether I am missing something and should investigate further, as this does seem far slower. I have not enabled caching at this point. Each of the ESXi hosts has a single Intel 2TB NVMe (D1 P4101) disk at this stage. The ESXi hosts also have MPIO enabled for the iSCSI connection, with the round robin IOPS limit set to 1. Each of the Windows Server 2016 hosts has 4 vCPUs with 2GHz reserved and 8GB RAM. There is nothing else on the iSCSI share consuming resources, and the ESXi cluster is not doing much else at the moment.

EDIT:
Just cross-checking the 2-node setup documentation, I thought I'd provide some more info on the current setup: MTU on ESXi is set at 9000, with DiskMaxIOSize at 512. The only thing I haven't done is set up the scheduled task to rescan the datastore, as I did it manually for now since these are just test boxes. The hosts are SuperServer E200-8D with 64GB RAM each, but they are not running anything other than the StarWind-related hosts, a test iSCSI-hosted client, and vCenter.
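For reference, both of those host-side settings can be applied via esxcli. This is a sketch, with naa.xxxx standing in for the actual StarWind device identifier:

```shell
# Cap the maximum I/O size ESXi issues to 512 KB
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 512
# Use round robin multipathing with an IOPS limit of 1 on the StarWind LUN
# (replace naa.xxxx with the real device ID from 'esxcli storage nmp device list')
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxx --type=iops --iops=1
```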

Using ATTO on the Windows Server 2016 box hosting the StarWind services (tested against C:\, not the drive containing the StarWind image):
vsan_host.png
The same test run within a VM with disk hosted on the iSCSI:
iscsi_hosted.png
I have also just tested iperf with the Windows Server 2016 StarWind host as the server and an ESXi host as the client, achieving in excess of 10Gbps with 4 threads, so it doesn't feel like a network connectivity issue, at least in my simple train of thought right now.

Is the disk IO output within reasonable expectations? If more info would help with feedback, please let me know.

I would love any feedback.
frankyyy02
Posts: 10
Joined: Wed Aug 14, 2019 9:14 pm

Wed Aug 21, 2019 1:41 am

Just providing an update: after some more testing, I decided to try the StarWind VSAN for vSphere appliance.
I deployed two instances of the appliance, one on each host, and set each up on the same port groups as the Windows instances based on StarWind Free.
I created a 500GB virtual disk for each and then created the StarWind image using the management tools.
The synchronisation of the 500GB disks in this case completed in 8 minutes, as opposed to almost 30 minutes for the Windows-based installs. I checked and rechecked the configuration of the Windows hosts and confirmed they are using VMXNET vNICs and that the sync vNICs are correctly selected (IPs defined in the PowerShell script).
Not sure why the variation exists, but the appliances appear impressive. Overall read/write via iSCSI is faster in my test cases as well.

I think I will be sticking with the paid edition in this instance; plus, it's nice using Linux appliances rather than Windows boxes :)
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Wed Aug 21, 2019 4:54 pm

Great to learn it worked well with Linux.
danswartz
Posts: 71
Joined: Fri May 03, 2019 7:21 pm

Wed Aug 21, 2019 5:00 pm

Boris (staff) wrote: Great to learn it worked well with Linux.
Boris, I'd love to switch to Linux when you get ZFS working. At the moment, though, it's only available to me via an NFR license. A couple of questions:

1. Will the linux appliance be available for free?

2. If not, will it be easy to renew the NFR license? If not, what happens when it expires?

Thanks!
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Wed Aug 21, 2019 5:44 pm

The Linux appliance is available for free; it's only the license and support that are paid. In your case with an NFR license, that does not apply so far, as it will be active for 1 year. When it expires, the HA devices will not be able to accept iSCSI connections, so you would need to arrange relicensing a couple of weeks before the license expires by contacting the person you received your license from.
danswartz
Posts: 71
Joined: Fri May 03, 2019 7:21 pm

Wed Aug 21, 2019 5:55 pm

Okay, thanks. With the no-license (free) Linux appliance, does the web GUI still work? Or is there some variation of the PowerShell that the Windows appliance uses?
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Wed Aug 21, 2019 6:35 pm

The web-based GUI of StarWind VSAN for vSphere does not depend on the appliance's license and is available no matter whether a free license or a commercial one is used.
frankyyy02
Posts: 10
Joined: Wed Aug 14, 2019 9:14 pm

Wed Aug 21, 2019 9:25 pm

Okay, thanks. With the no-license (free) Linux appliance, does the web GUI still work? Or is there some variation of the PowerShell that the Windows appliance uses?
After setting up the appliance, you will need access to this interface to configure the password, storage, NICs, etc. When you connect to the management interface, you are asked to apply your licence.
From what I can tell, there is no way to manage storage, sync, or replication (i.e. perform management tasks) via the web UI; it's purely for configuring the device up front. Not sure if there are CLI commands that perform these functions.
I am using this for work, so paying for it is fine, but I'd love to get the appliance set up for my home lab separately. If a free version were released, I'd definitely use it, as I didn't have a great time with the free version on Windows with PowerShell from a performance perspective in my testing. Maybe that's just my situation, though.

Edit:
The appliance does have an HBA rescan PowerShell script, if I recall correctly. Maybe it does support some PowerShell?

On that note, I'm wondering if anyone on the StarWind team has attempted to create a new ESXi user with limited privileges to perform the HBA rescan, rather than using root? I couldn't seem to get it working with limited privileges, and the VMware docs are a little vague on this topic from my research. Has anyone had any luck?
Michael (staff)
Staff
Posts: 317
Joined: Thu Jul 21, 2016 10:16 am

Wed Aug 28, 2019 10:15 am

Hello frankyyy02,
You can find a guide for the Linux appliance configuration here: https://www.starwindsoftware.com/resour ... or-vsphere
As for creating a separate user for the rescan, try creating a dedicated role and setting the following privileges: Inventory, Configuration, Local operations, CIM, vSphere Replication, Settings.
I hope it helps.