SOP for getting one partner operating when the other fails.


gfdos.sys
Posts: 9
Joined: Thu Jan 02, 2025 3:56 pm

Thu Jan 02, 2025 8:47 pm

History:
Using the CVT machine with VMware, I initially created a LUN using the CreateImageFile.ps1 sample.
Using CreateHA_2 and AddHAPartner as templates, I converted the image file to HA and allowed it to fully sync.

"Primary" was targetha02
"Partner" is partnerha02
The "HA Device" is HAImage1.

We needed to do some maintenance on the Primary hardware, so we used MaintenanceMode.ps1 to shut down the Primary, and then used it again to shut down the Partner.

Rebooted both servers -- some drives failed in the primary during the reboot.

Left with just the partner -- the primary is no more -- I'm looking for a Standard Operating Procedure on how to get the storage back up, prior to repairing the HA.

My understanding from looking at some other posts (like https://forums.starwindsoftware.com/viewtopic.php?t=5738)
is that you should first run RemoveHAPartner, put the new replacement in place, and then run AddHAPartner to connect to the replacement and allow it to sync.

In my case, when I boot the remaining partner up, StarWind shows the LUN is in maintenance mode (which of course it is).
When I try to use MaintenanceMode.ps1 with the operands changed to (false, true) to take it out of maintenance mode, it fails because "connection with partner node is invalid."
#code: params: enable, force -> $false, $true
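For illustration, an invocation along these lines; the parameter names are assumptions based on the note above (enable, force) and on the other samples' param() blocks, so check the param() block of your copy of MaintenanceMode.ps1 before running it:

Code: Select all

# Hypothetical parameter names -- verify against your MaintenanceMode.ps1
# addr/port/user/password: management address and credentials of the surviving node
# enable $false : take the device out of maintenance mode
# force  $true  : force the change even though the partner is unreachable
.\MaintenanceMode.ps1 -addr "192.168.0.2" -port 3261 -user "root" -password "starwind" `
	-deviceName "HAImage1" -enable $false -force $true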

When I try RemoveHAPartner, it gives "Error: 200 Failed: operation cannot be completed."

Is there some "Standard Operating Procedure" for if one of the nodes is failed and unrecoverable, and the second node is in maintenance mode, to bring the operational node back out of maintenance mode into "limited availability" mode, or to bring it back up in "image" mode?
Last edited by gfdos.sys on Tue Jan 07, 2025 6:25 pm, edited 1 time in total.
yaroslav (staff)
Staff
Posts: 3175
Joined: Mon Nov 18, 2019 11:11 am

Fri Jan 03, 2025 9:05 am

Initially Created a LUN using the CreateImageFile.ps1 sample.
It creates a non-replicated image. You cannot make it HA later without significantly reworking the scripts.
RemoveHAPartner, put the new replacement in place and then run AddHAParnter to connect to replacement and allow to sync.
Does the RAID need rebuilding from scratch? In other words, is the RAID gone?
When I try to use MaintenanceMode.ps1 with the operands changed to (false, true) to take it out of maintenance mode, it fails because "connection with partner node is invalid."
#code: params: enable, force -> $false, $true
You need both partners to exit the maintenance mode.
You can edit the headers
1. Stop StarWindService on one node
2. Go to C:\Program Files\StarWind Software\StarWind\headers and copy the corresponding *_HA.swdsk
3. Edit that file.
4. Find the <maintenance_mode/> entry and set it to false for both the local and partner nodes (a sketch of this edit follows these steps).
5. Start StarWindService and see if the target is synchronized and available over iSCSI (You might still need to mark it as synchronized).
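A minimal sketch of steps 2-4 on a Windows node, assuming the header stores the flag as an element value like <maintenance_mode>true</maintenance_mode> (it may also appear self-closing or per <node id> section, so inspect the file before scripting the change); the header path below is the one from this thread:

Code: Select all

# Run while StarWindService is stopped (step 1); keep a copy before editing
$hdr = "C:\Program Files\StarWind Software\StarWind\headers\partnerha02\partnerha02_HA.swdsk"
Copy-Item $hdr "$hdr.backup"
# Flip maintenance_mode to false for both the local and the partner node entries
(Get-Content $hdr) -replace '<maintenance_mode>true</maintenance_mode>', '<maintenance_mode>false</maintenance_mode>' | Set-Content $hdr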
When I try RemoveHAPartner, it gives "Error: 200 Failed: operation cannot be completed."
It won't work until the device is in maintenance mode.

P.S. StarWind VSAN features active-active replication. See more on priorities (i.e., what you refer to as primary/partner) here: https://forums.starwindsoftware.com/viewtopic.php?t=5731
gfdos.sys
Posts: 9
Joined: Thu Jan 02, 2025 3:56 pm

Fri Jan 03, 2025 2:54 pm

Initially Created a LUN using the CreateImageFile.ps1 sample.
It creates a non-replicated image. You cannot make it HA later without significantly reworking the scripts.
I will come back to how I did this at the end of this post so you can address that -- that seems like a totally different topic from your response.
After the drive was converted to HA it was in sync before the trouble.
RemoveHAPartner, put the new replacement in place and then run AddHAParnter to connect to replacement and allow to sync.
Does the RAID need rebuilding from scratch? In other words, is the RAID gone?
The Primary was RAID 0 with 4 drives; 2 failed -- there is no way to recover the side that failed.
So the premise is we only have one of the two members in maintenance mode, and the other partner is not recoverable.
Further all vms on the drive were powered off before the members were put into maintenance mode, so the content of the remaining member should be complete.
When I try to use MaintenanceMode.ps1 with the operands changed to (false, true) to take it out of maintenance mode, it fails because "connection with partner node is invalid."
#code: params: enable, force -> $false, $true
You need both partners to exit the maintenance mode.
If I understand you correctly, you are saying that, using just the script, taking an HA pair out of maintenance mode requires both nodes to be intact.
And, if both are not available, it is possible to edit the headers on the one that is intact, by first stopping the StarWindService on the remaining node,
editing the applicable *_HA.swdsk, looking for <maintenance_mode/>, and setting it to false for both nodes.

Are you saying that when I restart the service it would then see "both" as out of maintenance mode, but quickly realize that the missing one is not available... and fall back to limited availability?
5. Start StarWindService and see if the target is synchronized and available over iSCSI (You might still need to mark it as synchronized).
How do I "mark" the one remaining copy "as synchronized" as you referred to above in ()
When I try RemoveHAPartner, it gives "Error: 200 Failed: operation cannot be completed."
It won't work until the device is in maintenance mode.
So you're saying RemoveHAPartner doesn't work unless the device is in maintenance mode.
The LUN is showing in the web interface as "Down" and in "Maintenance" mode already.
I was trying to get it out of maintenance mode, so that it would go back to "limited availability" when it can't find the partner.....
My thought was then in that state, to remove the forever lost partner and then add a new HA partner.
Clearly there are some steps I have misunderstood.
P.S. StarWind VSAN features active-active replication. See more on priorities (i.e., what you refer to as primary/partner) here: https://forums.starwindsoftware.com/viewtopic.php?t=5731
I appreciate that article.
In my case, with one node completely missing -- and both having been in maintenance mode when the one was lost -- I apparently have a unique situation where I have a "trusted" duplicate of the data (because it was in maintenance mode and was not online when the failure occurred): "active-active" was "maintenance-maintenance", so neither node was "active" at the time of the failure.

-=-
As promised, I will say what I did to go from Image to HA.
In my case I had a server that was running out of space.
I had a second server where I could move the data to, but I didn't have the luxury of two nodes for my "greenfield" (aka "clean new installation").
I needed to set up StarWind in single-node "image mode" first; after I did the bare-metal-to-VM conversion (P2V) onto that single-node "image mode" storage, I could make sure I had a good VM backup, wipe the bare-metal source, and then, after adding capacity, bring it up as the partner node.

So I used the CreateImageFile.ps1 sample but had to modify a few of the script parameters because I'm using the CVT appliance, not the Windows-based StarWind:

Code: Select all

$filePath="mnt/sdb1/vSWsanVolume01",
...
	$targetAlias="targetimg2",
This resulted in "File: mnt/sdb1/vSWsanVolume01\targetimg2.img" according to EnumDevicesTargets.ps1

Once I had everything restored and functioning on the single image-based node, I followed the logic in the white papers for the Windows version of StarWind -- you create the storage, and then you add the HA -- but I did it with a script.
I couldn't use AddHAPartner.ps1, because that requires an HA pair to already exist and adds an additional node. I needed to go from the img as the starting point, use it as the "target"/primary, and create a "partner" node that would then sync from the first node.
Once I ran the script, under LUNs in the StarWind web console the availability went from "Standalone" to "Limited Availability" until the entire image was synced, then it changed to "High Availability" -- that is how I knew the process had completed successfully.
The beauty and simplicity was that the storage was available for production (except for the temporary outage when I ran the script) for the entire time it was being replicated.

I will show you the script I used, pieced together from things in this forum and the white-paper instructions on how to do the CVT deployment.
Please tell me if I did anything wrong -- this seemed to be a straightforward scripted way of following the videos and instructions for setting up StarWind storage on Windows: first you create the storage LUN, then you add the High Availability.

Code: Select all

#CreateHA_2from1Img.ps1
param($addr="192.168.0.1", $port=3261, $user="root", $password="starwind",
	$addr2="192.168.0.2", $port2=$port, $user2=$user, $password2=$password,
#common
    #If using this to match an existing flat-file image created by CreateImageFile.ps1 - as noted below, settings must match the original file created.
	$initMethod="SyncFromFirst", #//[Clear], NotSyncronize (only works if there is no data), SyncFromFirst, SyncFromSecond, SyncFromThird (All "SyncFrom"s run full sync. Use it for (re)creating replicas)
    $size=512000,#//set size for HA-device In MB 1024=1GB //must be same size as created flat-file image
	$sectorSize=512,#//set sector size for HA-Device [512] or 4096 //must be same size as created flat-file image
	$failover=0, #//set type failover strategy [0(Heartbeat)] or 1(Node Majority)
	$bmpType=1,#//set bitmap type [1(RAM)] or 2(Disk)
	$bmpStrategy=0,#//set journal strategy [0] 1(Best Performance (Failure)) 2(Fast Recovery (Continuous))
#primary node
	$imagePath="mnt/sdb1/vSWsanVolume01",#//set path to store the device file [My computer\C\starwind] or VSA Storage/mnt/mount_point
	$imageName="primaryha02",#//set name device [primaryImg21]
	$createImage=$true, #//set create image file [$true] $false //an attempt will be made to create with existing targets -targetAlias (must already be created)
    #$storagename=//is only used if you plan on adding the partner to the existing device 
	$storageName="imagefile2", #//flat-file to convert to HA -- run script enumDevicesTargets and look for the Device: Name (not the actual filename of the flat-file) 
	$targetAlias="targetha02",#//set alias for target [targetha21] #//flat-file to convert $targetAlias="targetimg2",
    $autoSync=$true, # this was missing from the script but in starwind documentation //  Make sure to specify automatic or manual synchronization after device creation by setting the value of this variable to either $true or $false
	$poolName="sdb1", # name of pool created in CVT
	$syncSessionCount=1,
	$aluaOptimized=$true,
	$cacheMode="none",#//set type for L1 cache (optional) none or [wb] or wt
	$cacheSize=0,#//set size for L1 cache in MB (optional) [128]	
    $syncInterface="#p2={0}:3260" -f "172.16.20.102", #//Synchronization interfaces. Enter the IP address(s) of the partner node interface(s) (the “second” StarWind node) which will be used as the synchronization channel
	$hbInterface="#p2=172.16.10.102:3260", #//Heartbeat interfaces. Enter the IP address(s) of the partner interface(s) (the “second” StarWind node) which will be used as the heartbeat channel;
    #NOTE: It is recommended to configure the Heartbeat and iSCSI channels on the same interfaces to avoid the split-brain issue. 
    #If the Synchronization and Heartbeat interfaces are located on the same network adapter,
    #it is recommended to assign one more Heartbeat interface to a separate adapter. -- I don't have a sample of what that code would look like in script form yet, though
	$createTarget=$true,
	$bmpFolderPath="",
#secondary node
	$imagePath2="mnt/sdb1/vSWsanVolume01",#//set path to store the device file [My computer\C\starwind] or VSA Storage/mnt/mount_point
	$imageName2="partnerha02",#//set name device [partnerImg22]
	$createImage2=$true, #//set create image file [$true] $false //an attempt will be made to create with existing targets -targetAlias (must already be created)
    $storagename2="",#//only used if you plan on adding the partner to the existing device
	$targetAlias2="partnerha02",#//set alias for target [partnerha22]
    $autoSync2=$true, # this was missing from the script but in starwind documentation
	$poolName2="sdb1",
	$syncSessionCount2=1,
	$aluaOptimized2=$false, # [$false] This was set false in the script but in other posts on forum and documentation it said it should be true on both nodes?
	$cacheMode2=$cacheMode,
	$cacheSize2=$cacheSize,
	$syncInterface2="#p2={0}:3260" -f "172.16.20.100",
	$hbInterface2="#p2=172.16.10.100:3260",
	$createTarget2=$true,
	$bmpFolderPath2=""
    )
...
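For completeness, a sketch of how a param-block script like this would be invoked, overriding only a few of the defaults shown above (every value here is taken from the param block itself; nothing new is introduced):

Code: Select all

.\CreateHA_2from1Img.ps1 -addr "192.168.0.1" -addr2 "192.168.0.2" `
	-size 512000 -storageName "imagefile2" -targetAlias "targetha02" -imageName2 "partnerha02"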
output from enumDevicesTargets.ps1 before I ran the script to make them into HA:

Code: Select all

Target:
Name                     : iqn.2008-08.com.starwindsoftware:192.168.0.1-targetimg2
Id                       : 0x00000000011952C0
Alias                    : targetimg2
IsClustered              : True
Devices                  : imagefile2

Device:
Name                     : imagefile2
Id                       : 0x0000000001194E80
TargetName               : iqn.2008-08.com.starwindsoftware:192.168.0.1-targetimg2
TargetId                 : 0x00000000011952C0
DeviceType               : Image file
Size                     : 536870912000
File                     : mnt/sdb1/vSWsanVolume01\img2.img
output from enumDevicesTargets.ps1 after I ran the script above CreateHA_2from1Img.ps1 to make them into HA:

Code: Select all

Target:
Name                     : iqn.2008-08.com.starwindsoftware:192.168.0.1-targetha02
Id                       : 0x0000000001274F00
Alias                    : targetha02
IsClustered              : True
Devices                  : HAImage1

Device:
Name                     : imagefile2
Id                       : 0x0000000001194E80
TargetName               : empty
TargetId                 : empty
DeviceType               : Image file
Size                     : 536870912000
File                     : mnt/sdb1/vSWsanVolume01\img2.img

Device:
Name                     : HAImage1
Id                       : 0x0000000001275640
TargetName               : iqn.2008-08.com.starwindsoftware:192.168.0.1-targetha02
TargetId                 : 0x0000000001274F00
DeviceType               : HA Image
Size                     : 536870912000
NodeType                 : 1
IsSnapshotsSupported     : 0
SectorSize               : 512
Priority                 : 0
SyncStatus               : 1
SyncTrafficShare         : 50
DeviceLUN                : 0
Header                   : Headers\primaryha02\primaryha02_HA.swdsk
IsWaitingAutosync        : False
FailoverStrategy         : 0
MaintenanceMode          : 0
BitmapStrategy           : 0

Partner:
TargetName               : iqn.2008-08.com.starwindsoftware:192.168.0.2-partnerha02
Priority                 : 1
Type                     : 1
SyncStatus               : 2
BitmapType               : 1
SyncChannels             : 172.16.20.102
HearthbeatChannels       : 172.16.10.102

Target:
Name                     : iqn.2008-08.com.starwindsoftware:192.168.0.1-targetimg2
Id                       : 0x00000000011952C0
Alias                    : targetimg2
IsClustered              : True
Devices                  : 
All of those enumDevicesTargets.ps1 outputs above were from the primary.
The output I'm seeing from enumDevicesTargets.ps1 on the secondary shows a few differences.
I noted that under the "Device:" section for the image file, it shows a line "Parent: HAImage1" that doesn't appear in the output from the primary above; I also noticed that the "Target:" section on the primary (above) has a blank "Devices:" line, whereas on the partner the "Target:" section's "Devices:" line shows the device name HAImage1.
I see the priority differences you mentioned in your previous response, so that makes a bit more sense to me now. Interesting that the "Device:" section for HAImage1 on the partner (below) points to the local partner HA.swdsk with the lower priority.

current output from enumDevicesTargets.ps1 on the partner (even though it is stuck in maintenance mode, and down):

Code: Select all

Target:
Name                     : iqn.2008-08.com.starwindsoftware:192.168.0.2-partnerha02
Id                       : 0x0000000001186F00
Alias                    : partnerha02
IsClustered              : True
Devices                  : HAImage1

Device:
Name                     : imagefile1
Id                       : 0x0000000001181840
TargetName               : empty
TargetId                 : empty
DeviceType               : Image file
Size                     : 536870912000
Parent                   : HAImage1
File                     : mnt/sdb1/vSWsanVolume01\partnerha02.img

Device:
Name                     : HAImage1
Id                       : 0x000000000118A880
TargetName               : iqn.2008-08.com.starwindsoftware:192.168.0.2-partnerha02
TargetId                 : 0x0000000001186F00
DeviceType               : HA Image
Size                     : 536870912000
NodeType                 : 1
IsSnapshotsSupported     : 0
SectorSize               : 512
Priority                 : 1
SyncStatus               : 3
SyncTrafficShare         : 50
DeviceLUN                : 0
Header                   : Headers\partnerha02\partnerha02_HA.swdsk
IsWaitingAutosync        : False
FailoverStrategy         : 0
MaintenanceMode          : 1
BitmapStrategy           : 0

Partner:
TargetName               : iqn.2008-08.com.starwindsoftware:192.168.0.1-targetha02
Priority                 : 0
Type                     : 1
SyncStatus               : 3
BitmapType               : 1
SyncChannels             : 172.16.20.100
HearthbeatChannels       : 172.16.10.100
yaroslav (staff)
Staff
Posts: 3175
Joined: Mon Nov 18, 2019 11:11 am

Sat Jan 04, 2025 9:02 pm

After the drive was converted to HA it was in sync before the trouble.
Nice to read that.
Primary was Raid 0 - 4 drives, 2 failed -- there is no way to recover the side that failed.
RAID 0 is not supported because it has no redundancy. See https://knowledgebase.starwindsoftware. ... ssd-disks/ for more details.
So the premise is we only have one of the two members in maintenance mode, and the other partner is not recoverable.
Thanks for more details. Stick with the steps I provided in my previous post.
Are you saying that when I restart the service it would then see "both" as out of maintenance mode, but quickly realize that the missing one is not available... and fall back to limited availability?
It is expected to be in limited availability, as it cannot find the replication partner.
How do I "mark" the one remaining copy "as synchronized" as you referred to above in ()
This might be not needed. First, check if Starwind HA devices are accessible over iSCSI.
If they are not accessible, install the Windows-based Management Console (Should be available for download in the gear-shaped menu). See if the device is synchornized (it might complain about the partner).
You can paste the screenshot here.
If you need to mark the device as synchronized, set x to 1 in <sync_status>x</sync_status> in the *_HA.swdsk. To edit the file, follow the same approach as outlined in my previous post.
I was trying to get it out of maintenance mode, so that it would go back to "limited availability" when it can't find the partner.....
Limited availability is expected.
You need to exit maintenance mode and then you can remove the replication partner.
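To make the recovery sequence concrete, a rough sketch of the follow-up calls once the device has left maintenance mode. The script names (RemoveHAPartner, AddHAPartner) are the StarWind samples already discussed in this thread, but the parameter names are assumptions -- check each sample's param() block before use:

Code: Select all

# 1. Remove the dead partner from the surviving node's HA device
#    (hypothetical parameter names; the partner target name is taken from the enum output earlier in this thread)
.\RemoveHAPartner.ps1 -addr "192.168.0.2" -port 3261 -user "root" -password "starwind" `
	-deviceName "HAImage1" -partnerTargetName "iqn.2008-08.com.starwindsoftware:192.168.0.1-targetha02"
# 2. Add the rebuilt node back as a new partner and let it run a full synchronization
.\AddHAPartner.ps1 -addr "192.168.0.2" -port 3261 -user "root" -password "starwind" `
	-deviceName "HAImage1" -addr2 "192.168.0.1" -initMethod "SyncFromFirst"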

Let me know if you have more questions.
gfdos.sys
Posts: 9
Joined: Thu Jan 02, 2025 3:56 pm

Mon Jan 06, 2025 8:46 pm

As I mentioned earlier, I'm using the CVT, not the Windows version, so I found the files you recommended, but I may need some clarification on how to do the process from Linux.
You need both partners to exit the maintenance mode.
You can edit the headers
1. Stop StarWindService on one node
In the CVT I didn't see a single "StarWindService" -- there are 3 pages of services.
The "Virtual SAN Service" cannot be stopped from the web interface; if you try, the stop button says "The system service is not allowed to stop."

Which service(s) do I need to stop on the CVT version, and do they need to be stopped from the command line, or can I stop them from the web interface?
(I tried stopping all the services it would let me stop, except the ones related to the TUI console or Nginx, thinking that stopping those would prevent me from administering the appliance -- but I'm not sure if there was something I was missing.)
2. Go to C:\Program Files\StarWind Software\StarWind\headers and copy the corresponding *_HA.swdsk
3. Edit that file.
4. Find the <maintenance_mode/> entry and set it to false for both the local and partner nodes.
I found this file in /opt/starwind/starwind-virtual-san/drive_c/starwind/headers/partnerha02.
I tried editing it using sudo nano, with the services it would let me stop stopped. The <maintenance_mode/> was in the node id="1" section, which seemed to refer to the primary node. I set it to false and, after saving, rebooted the node from the web interface -- not sure if I had stopped the right services.
5. Start StarWindService and see if the target is synchronized and available over iSCSI (You might still need to mark it as synchronized).
As I mentioned above, since I was not sure whether I had stopped the right service or whether that was even possible from the CVT web interface, I rebooted the host rather than restarting services.
The server came back up, and the LUNs section in the web interface shows the LUN is still in maintenance mode.

I did see <sync_status> under both node id="1" and node id="2" (the name= attribute calling out the partner node). The sync status was 1 for the missing node and 0 for the partner node.
[Attachment: sync status.PNG]
yaroslav (staff)
Staff
Posts: 3175
Joined: Mon Nov 18, 2019 11:11 am

Mon Jan 06, 2025 10:52 pm

In the CVM, the service is called starwind-virtual-san.service; you need sudo privileges to edit the files and stop the service.
No need to change the sync status; it should be good.
gfdos.sys
Posts: 9
Joined: Thu Jan 02, 2025 3:56 pm

Tue Jan 07, 2025 6:19 pm

In the CVM, the service is called starwind-virtual-san.service; you need sudo privileges to edit the files and stop the service.
No need to change the sync status; it should be good.
For anyone else who has to follow these steps, I'll be complete in my documentation:

I was able to use the command line from the CVT:

Code: Select all

sudo systemctl stop starwind-virtual-san.service
cd /opt/starwind/starwind-virtual-san/drive_c/starwind/headers/partnerha02
sudo nano partnerha02_HA.swdsk
Change <maintenance_mode/> to false and then save with Ctrl+X.

Code: Select all

sudo systemctl start starwind-virtual-san.service
In the web interface the LUN is now showing Limited Availability.
In ESXi, if I add it to static discovery, then dynamic discovery can see the target and shows the right target name, but under paths and devices it's still not finding it (even after rescanning storage).
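For reference, the equivalent checks from the ESXi host's shell look roughly like this; the adapter name vmhba64 is an assumption -- substitute the software iSCSI adapter on your host:

Code: Select all

# List iSCSI adapters and the send targets configured for dynamic discovery
esxcli iscsi adapter list
esxcli iscsi adapter discovery sendtarget list --adapter=vmhba64
# Re-run discovery and rescan for new paths/devices
esxcli iscsi adapter discovery rediscover --adapter=vmhba64
esxcli storage core adapter rescan --adapter=vmhba64
# Check whether an iSCSI session was actually established with the target
esxcli iscsi session list --adapter=vmhba64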

I tried the above process again to change the <sync_status> from 0 to 1 for the partner. After restarting the service, it had no effect (even after rescanning storage).

So here is the screenshot from the Windows Management Console at this point:
[Attachment: vsanNotSyncronized.png]
yaroslav (staff)
Staff
Posts: 3175
Joined: Mon Nov 18, 2019 11:11 am

Tue Jan 07, 2025 10:50 pm

Please share what the *_HA.swdsk looks like.
gfdos.sys
Posts: 9
Joined: Thu Jan 02, 2025 3:56 pm

Wed Jan 08, 2025 6:08 pm

[Attachments: _HAswdska.PNG, _HAswdskb.PNG, _HAswdskc.PNG]
yaroslav (staff)
Staff
Posts: 3175
Joined: Mon Nov 18, 2019 11:11 am

Thu Jan 09, 2025 12:16 am

Can you please also check what the header looks like on the /sdb/* path?
gfdos.sys
Posts: 9
Joined: Thu Jan 02, 2025 3:56 pm

Thu Jan 09, 2025 4:37 pm

I don't think I understand what you are asking for....

Inside /opt/starwind/starwind-virtual-san/drive_c/starwind/headers there is only one folder, partnerha02.
In that folder there are only partnerha02.swdsk and partnerha02_HA.swdsk.

If you mean /mnt/sdb1, there is only one folder, vSWsanVolume01, and in there are some *.swdsk.bak files:
partnerha02.img, partnerha02.swdsk.bak, partnerha02_HA.swdsk.bak, starwind-volume-config.json

The partnerha02_HA.swdsk.bak looks identical to the images I sent from partnerha02_HA.swdsk
with only the following differences:
<node id="1" name="HAImage" shut="false" active="true" flags="0">
shut is false in the .bak instead of true in the screen shots I sent

maintenance mode is true in the .bak file instead of false in the screen shots I sent

If you meant something other than the .swdsk.bak files when you said "header on the /sdb/* path", please explain what you are looking for so I can get it for you to look at.
yaroslav (staff)
Staff
Posts: 3175
Joined: Mon Nov 18, 2019 11:11 am

Thu Jan 09, 2025 5:07 pm

Please set maintenance mode to false for the *.bak file too.
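For the record, the same edit on the CVM against the .bak copy under the data volume would look roughly like this (the service name and paths are the ones already identified earlier in this thread):

Code: Select all

sudo systemctl stop starwind-virtual-san.service
# set maintenance mode to false in the backup header as well
sudo nano /mnt/sdb1/vSWsanVolume01/partnerha02_HA.swdsk.bak
sudo systemctl start starwind-virtual-san.service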
gfdos.sys
Posts: 9
Joined: Thu Jan 02, 2025 3:56 pm

Fri Jan 10, 2025 2:02 pm

I repeated the process for the .bak file:
I stopped the service.
I changed the .bak file's maintenance mode to false.
I started the service.

I tried rescanning storage; still nothing.

Just to verify the networking (which hasn't changed):
From the CVT I pinged the ESXi host and one of the other ESXi hosts, and they respond (so the management NIC is working).
I pinged the data and replica VMkernel IPs and they responded (so the data and heartbeat NICs are working).

If I remove the dynamic and static IPs, disable the software iSCSI on VMware, reboot to reset it, then re-enable the software iSCSI and give it the static IP, it finds dynamic discovery and shows the right iSCSI target. But then rescanning storage doesn't find the path or the device.

Checking in the CVT, it does show the LUN as Limited Availability and online, but with no connections. It does show the same iSCSI target name that dynamic discovery is finding.

Let me just double-check that the whole data path has jumbo frames, to be sure (it hasn't changed). Verified -- although if that weren't right, I imagine the ping would have failed.
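A quick way to verify jumbo frames end to end from the ESXi side is a don't-fragment ping at near-9000-byte size; vmk1 and 172.16.10.102 are assumptions -- use your iSCSI/data VMkernel port and the CVT's data IP:

Code: Select all

# 8972 bytes = 9000-byte MTU minus IP/ICMP headers; -d sets the don't-fragment bit
vmkping -I vmk1 -d -s 8972 172.16.10.102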

There is no heartbeat, of course, because it's only one node in limited availability.

I also checked under Appliances -- it of course only shows that one of the nodes is online,
and under Network, "Configure HA networking" is greyed out because only one of the nodes is online.

Not sure what else to try here.
yaroslav (staff)
Staff
Posts: 3175
Joined: Mon Nov 18, 2019 11:11 am

Fri Jan 10, 2025 3:28 pm

Please share the screenshot from the Windows-based StarWind Management console. Try running SyncHaDeviceAdvanced from the affected node. It might mark the healthy node as synchronized (see the comment in the script).
gfdos.sys
Posts: 9
Joined: Thu Jan 02, 2025 3:56 pm

Tue Jan 14, 2025 10:44 pm

Attaching the response from SyncHaDeviceAdvanced:
[Attachment: SyncHADevAdv.PNG]
I have very few screens to show in the Windows-based StarWind console, so I hope this is what you were looking for:
[Attachments: HAimg.png, Partnerha.png]