StarWind Virtual SAN iSER option greyed out

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

iceet
Posts: 6
Joined: Mon Sep 11, 2023 7:39 pm

Mon Sep 11, 2023 8:18 pm

Hello everyone,

I deployed a 2-node StarWind vSAN HA cluster as a CVM in Windows Server 2022 Hyper-V using a trial license.

The nodes are using dual-port Mellanox ConnectX-5 cards, with one port for Sync and one for iSCSI/Heartbeat. The nodes also have Intel X550-T2 cards, which are used for management. The IP configuration is as follows:

Management:
Node 1: 192.168.0.14 Node 2: 192.168.0.15
Sync:
Node 1: 192.168.1.14 Node 2: 192.168.1.15
iSCSI/Heartbeat:
Node 1: 192.168.2.14 Node 2: 192.168.2.15

RDMA has been enabled on the Mellanox ConnectX-5 physical NICs and associated Hyper-V virtual switches.
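For anyone checking the same thing, this is a rough sketch of how the RDMA state can be verified from PowerShell on each node (the adapter name "SYNC" below is a placeholder for whatever your ConnectX-5 port is called):

```shell
# List RDMA state for every adapter, physical and virtual
Get-NetAdapterRdma | Format-Table Name, Enabled

# Enable RDMA on a specific adapter if it shows as disabled
# ("SYNC" is a placeholder adapter name - use your own)
Enable-NetAdapterRdma -Name "SYNC"

# Cross-check which interfaces Windows itself reports as RDMA-capable
Get-SmbClientNetworkInterface | Format-Table InterfaceIndex, RdmaCapable
```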

However, when attempting to activate iSER in the StarWind Management Console, the option is greyed out.

Please see the images below for reference:

[Screenshots attached]

Any assistance would be highly appreciated. Thank you for your time.
Last edited by iceet on Fri Sep 15, 2023 7:07 am, edited 1 time in total.
FlashMe
Posts: 14
Joined: Wed Aug 24, 2022 8:23 pm

Tue Sep 12, 2023 2:27 pm

Hey,

As far as I know, iSER is not supported on virtual network adapters such as the Hyper-V synthetic network adapter or VMXNET3 on VMware.

Why don't you instead install StarWind VSAN locally on both Hyper-V servers? Then you can use iSER on the Sync links, plus additional loopback connections for CSV. Be aware that iSER will not work with the Microsoft iSCSI Initiator for the front-end connections.
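As a rough sketch, the loopback CSV connections mentioned above can be made with the built-in iSCSI cmdlets once VSAN runs locally (the IQN below is a made-up placeholder; substitute the one your StarWind target actually reports):

```shell
# Register the local StarWind service as an iSCSI portal
New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1

# Connect to the local target over loopback; the IQN is a placeholder
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:node1-csv1" `
    -TargetPortalAddress 127.0.0.1 -IsPersistent $true
```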

If you want the full iSER stack, you need a hypervisor that can hand the Mellanox cards to a virtual (or local) Windows machine via PCIe passthrough. Don't forget to install the Mellanox network driver on that machine.

Then iSER will not be greyed out, and you can use it for both the front-end and Sync connections.
FlashMe
Posts: 14
Joined: Wed Aug 24, 2022 8:23 pm

Tue Sep 12, 2023 2:31 pm

@iceet: You can write me a PM if you need help implementing a Hyper-V cluster with StarWind VSAN. Or feel free to contact the awesome presales team ;-)
iceet
Posts: 6
Joined: Mon Sep 11, 2023 7:39 pm

Tue Sep 12, 2023 8:34 pm

Dear FlashMe,

Thanks for the clarification. I will definitely consider installing StarWind vSAN locally. Even without iSER, the speed is sufficient and CPU utilization isn't problematic. Having iSER would be icing on the cake, so to speak :D

Thank you again for your time, it is much appreciated.
FlashMe
Posts: 14
Joined: Wed Aug 24, 2022 8:23 pm

Wed Sep 13, 2023 9:05 am

No problem. I have built many StarWind VSAN installations before, so I can give you some tips for best performance.

What is the design of both Hyper-V nodes? Which vendor do you use? Hardware RAID or software RAID like Storage Spaces? What disk types do you use?

Before you install StarWind VSAN, check out the best practice guides on the website. Disable TCP autotuning on both Windows boxes, use iperf for network testing, and measure RAID performance with DiskSpd before you start with StarWind VSAN.
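To make those steps concrete, here is a minimal sketch of the commands involved (the iSCSI/Heartbeat IP comes from the addressing in this thread; the test file path and DiskSpd parameters are illustrative, not a recommendation):

```shell
# 1. Disable TCP autotuning on both Windows boxes (elevated prompt)
netsh int tcp set global autotuninglevel=disabled

# 2. Network throughput between the nodes with iperf3
iperf3 -s                            # run on node 2
iperf3 -c 192.168.2.15 -P 4 -t 30    # run on node 1

# 3. Raw RAID performance with DiskSpd, bypassing Windows caching (-Sh):
#    64K blocks, 60 s, 8 outstanding I/Os, 4 threads, 50% writes, 10 GB file
diskspd -b64K -d60 -o8 -t4 -Sh -w50 -c10G D:\test.dat
```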

At the moment I'm building a 2-node HCI cluster for one of my customers. :-)
iceet
Posts: 6
Joined: Mon Sep 11, 2023 7:39 pm

Thu Sep 14, 2023 6:49 am

I'm actually running the nodes on custom-built PCs. They use the RAID option found on ASUS motherboards (Intel Rapid Storage Technology), which I believe is a "logical" RAID, somewhere between software and hardware RAID. Each node has two 2 TB NVMe drives mirrored in RAID 1 for cluster storage.

I was able to install vSAN locally and enable iSER:
Image

Unfortunately, I was unable to measure speeds prior to installing vSAN. These aren't iperf results, but here are the speeds I got with Windows file transfers:

Write:
Image

Read:
Image

What are your thoughts on the speeds? Perhaps running vSAN as a VM in Hyper-V is faster.
yaroslav (staff)
Staff
Posts: 2362
Joined: Mon Nov 18, 2019 11:11 am

Thu Sep 14, 2023 7:31 am

File copying is a bad performance benchmark as it involves OS buffering (which often does not improve performance at all).
It is important to understand how everything is configured, how many iSCSI sessions you have, and, most importantly, what the underlying storage performance is.
VSAN as a VM will not be faster.
Please see how to measure performance in Windows https://www.starwindsoftware.com/best-p ... practices/.
Please also use 3 loopback and 2 partner connections.
Also, speaking of Software RAID, only MDADM and Storage Spaces are supported (https://www.starwindsoftware.com/system-requirements). The system still might work with other software RAID, yet it might be too flaky under pressure because of RAID.
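As a sketch of the "3 loopback and 2 partner connections" advice above, the extra sessions can be added with the iSCSI cmdlets and MPIO (the IQN is a placeholder; the partner IP follows the addressing used in this thread):

```shell
# Placeholder IQN for the HA device - use the one your target reports
$iqn = "iqn.2008-08.com.starwindsoftware:node2-csv1"

# 3 loopback sessions to the local StarWind service
1..3 | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 127.0.0.1 `
        -IsMultipathEnabled $true -IsPersistent $true
}

# 2 partner sessions over the iSCSI/Heartbeat link
1..2 | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 192.168.2.15 `
        -IsMultipathEnabled $true -IsPersistent $true
}
```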

Cheers :)
iceet
Posts: 6
Joined: Mon Sep 11, 2023 7:39 pm

Thu Sep 14, 2023 7:38 pm

Dear yaroslav,

Thank you for the beneficial information. It is much appreciated.

The following is information regarding StarWind vSAN as a Linux-based CVM vs. Windows executable in Microsoft Hyper-V:
There are two ways to deploy StarWind Virtual SAN for Microsoft Hyper-V:

As a Linux-based Controller Virtual Machine (CVM) or as a Windows executable. The CVM approach is actually preferred because it offers advanced features such as ZFS, NVMe-oF support, web UI management, and AI-powered telemetry, all unavailable in the Windows version. The CVM also has better performance: Linux has faster storage and network stacks, and StarWind's proprietary version uses polling to avoid interrupt handling and context switches, which reduces CPU usage and I/O latency. By using PCI passthrough, the Controller Virtual Machine is granted direct access to the physical devices, so there is no virtualization overhead. The Windows version, while fully supported in production, is a bit easier to install and evaluate, though... StarWind encourages using the Linux-based CVM during evaluation and in production, unless the Windows version is absolutely necessary in your particular case!
For our situation the advantages of each method are as follows:

Windows executable: iSER support
Linux-based CVM: faster storage and network stacks, polling to avoid interrupts handling and context switches

We have the option to deploy StarWind vSAN in Hyper-V as a CVM or as an executable. Based on the preceding information, which would you recommend as the final method? Once finalized, further optimization of the hyper-converged infrastructure can be assessed.

Thank you again for your time.
yaroslav (staff)
Staff
Posts: 2362
Joined: Mon Nov 18, 2019 11:11 am

Thu Sep 14, 2023 11:33 pm

Linux-based CVM: faster storage and network stacks, polling to avoid interrupts handling and context switches
IMO (disclaimer): Deploying a CVM in Windows requires setting up a VM, which adds complexity to the system. SR-IOV in Windows can be tricky for some adapters (e.g., I had some "fun" tuning Intel X710s the other day). Pass-through can also be tricky: you might end up with just a VMDK if the controller/disk cannot be passed through to the VM (and a VMDK eats some of the performance).
On the other hand, I am quite glad to see features available only for StarWind VSAN for vSphere on Windows.
Which would you recommend as the final method?
There is no "one thing to rule them all", unfortunately. If one is after the benefits of Linux VM in Windows, CVM is an excellent way to go. If one is building "just HA storage", installing an app makes more sense in Windows bare-metal installations simply because it is much easier and minimalistic.

The best will be testing and having a talk with one of our techs who will help fit the system for its future use.
iceet
Posts: 6
Joined: Mon Sep 11, 2023 7:39 pm

Fri Sep 15, 2023 7:42 am

Thank you yaroslav. Those things are important to consider when deciding between vSAN as a CVM or executable in Microsoft Hyper-V.

Based on advice from you and FlashMe, I went back and tested RAID 1 without StarWind vSAN, using both Intel Rapid Storage Technology and Windows Server 2022's built-in storage. As the saying goes, if you don't have time to do it right, when will you have time to do it twice?

Below are the results:

Non-RAID:
Image
Image

Intel Rapid Storage Technology:
Image
Image
Image

Microsoft Windows Server 2022 storage:
Image
Image
Image

The results appear inconclusive. Intel RST seems significantly faster in DiskSpd, while Windows Server 2022 storage appears faster during Windows file transfers. With CrystalDiskMark, Intel RST is faster in the "SEQ1M Q1T1" write test and the "RND4K Q32T1" read and write tests. Windows Server 2022 storage is quicker during the "SEQ1M Q8T1" read test.

Any thoughts or comments are welcome.
Last edited by iceet on Fri Sep 15, 2023 7:01 pm, edited 6 times in total.
yaroslav (staff)
Staff
Posts: 2362
Joined: Mon Nov 18, 2019 11:11 am

Fri Sep 15, 2023 7:58 am

4 threads may not be enough to saturate disk performance.
Furthermore, iSCSI might be a limitation by itself in this setup.
If you are using a mirror, you can also go with individual disks. Sure, there is no redundancy within the box, but the resulting solution should throw fewer issues, as the storage is then neither OS- nor CPU-dependent.
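On the thread-count point above, one way to check saturation is to step up DiskSpd threads and queue depth until the numbers stop growing. A rough sketch (file path and parameters are illustrative only):

```shell
# Random 4K reads: 8 threads, 32 outstanding I/Os, caching bypassed
diskspd -b4K -d30 -o32 -t8 -Sh -r -w0 D:\test.dat

# Sequential 1M reads across threads for peak throughput
diskspd -b1M -d30 -o8 -t4 -Sh -si -w0 D:\test.dat
```

If doubling -t or -o no longer raises IOPS or MB/s, the disks (or the bus) are saturated rather than the benchmark.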
iceet
Posts: 6
Joined: Mon Sep 11, 2023 7:39 pm

Sat Sep 16, 2023 5:52 am

Dear friends,

After deploying my vSAN cluster as a Hyper-V CVM, these are the results of a Windows file transfer test:
Write:
Image
Read:
Image

These are prior results obtained running the cluster as a Windows executable:
Write:
Image
Read:
Image

Although Windows file transfers may not be the most accurate means of testing speed, they are arguably indicative of real world performance. Regardless, for Hyper-V, it appears that StarWind vSAN is faster as a CVM than an executable.

Thank all of you for your time and support, and to the StarWind team for providing and maintaining Virtual SAN.
yaroslav (staff)
Staff
Posts: 2362
Joined: Mon Nov 18, 2019 11:11 am

Sat Sep 16, 2023 9:02 am

Hi,

Thanks for sharing your experience.