2 nodes Hyper-V, Windows Server 2019 Standard

Software-based VM-centric and flash-friendly VM storage + free version


yaroslav (staff)
Staff
Posts: 3111
Joined: Mon Nov 18, 2019 11:11 am

Mon Oct 07, 2024 8:33 pm

Please disable VMQ and SR-IOV and try again.
Pinging is a very high-level test. StarWind VSAN is a very latency-sensitive application. Anything above 5 ms causes it to drop synchronization.
Are these HPE servers? If so, please disable Smart Caching.
Try a different link for synchronization as a test (you can try creating a smaller device).
Could you please also share the support bundles for both nodes and note the approximate timestamp of the test?
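For reference, a minimal sketch of disabling VMQ and SR-IOV with the in-box NetAdapter cmdlets on each Hyper-V host; 'Ethernet 1' is a placeholder for the actual adapter names:

Code:

# Sketch only: disable VMQ and SR-IOV on the adapters carrying StarWind traffic.
# 'Ethernet 1' is a placeholder - check the real adapter names with Get-NetAdapter first.
Disable-NetAdapterVmq   -Name 'Ethernet 1'
Disable-NetAdapterSriov -Name 'Ethernet 1'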
mic77
Posts: 27
Joined: Fri Oct 04, 2024 9:58 pm

Mon Oct 07, 2024 8:45 pm

OK - VMQ switched off, and currently it looks like it works :O
yaroslav (staff)
Staff
Posts: 3111
Joined: Mon Nov 18, 2019 11:11 am

Mon Oct 07, 2024 10:18 pm

Great news :)
Thanks for your update.
mic77
Posts: 27
Joined: Fri Oct 04, 2024 9:58 pm

Mon Oct 07, 2024 11:32 pm

OK - next question :D haha
I did some malfunction testing - can I check the progress of a LUN resync somehow? The GUI just shows basic info - but my question is about the real progress.
Of course I see IOPS/disk/network throughput, but as mentioned, what is more interesting is %/time to finish...
I can calculate it... 6 TB is 3+ hours... over a single 10 Gbit link... with MTU 1500.
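For the record, a rough back-of-the-envelope estimate (the 50% effective link utilization is only an assumption; real speed also depends on storage and iSCSI):

Code:

# Rough estimate only - assumes the sync link is the bottleneck at ~50% utilization.
$deviceSizeTB = 6      # HA device size in TB
$linkGbps     = 10     # synchronization link speed in Gbit/s
$efficiency   = 0.5    # assumed effective utilization (protocol/storage overhead)

$seconds = ($deviceSizeTB * 8 * 1000) / ($linkGbps * $efficiency)
"{0:N1} hours" -f ($seconds / 3600)   # ~2.7 hours with these assumptions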
mic77
Posts: 27
Joined: Fri Oct 04, 2024 9:58 pm

Tue Oct 08, 2024 6:30 am

this even works ....not bad
cool
thx
yaroslav (staff)
Staff
Posts: 3111
Joined: Mon Nov 18, 2019 11:11 am

Tue Oct 08, 2024 6:53 am

Yes, the GUI roughly shows synchronization duration.
Synchronization speed is not only about the network but also storage and iSCSI performance. The latter is a glass ceiling of some sort for fast storage (that's why everyone is waiting for NVMe-oF implementation).
mic77
Posts: 27
Joined: Fri Oct 04, 2024 9:58 pm

Wed Oct 09, 2024 8:33 am

Another question.
I did a total shutdown this way:
stop the clustered machine
stop the cluster
(at this point the quorum disk, the CSV volume, and the clustered machine were managed by the 1st Hyper-V node = hv-1)
stop the StarWind VM on the second node (after checking there was no more mirroring data flow)
stop the StarWind VM on the primary node...
shut down the secondary Hyper-V node, then the primary...
Everything was powered back on in reverse order = and now I have a full resync of the CSV... How do I avoid this?
How do I do this properly for transport to the customer site without getting a full resync again?
I think this shutdown order is correct, or am I wrong?

The StarWind VMs were powered off via the console - so F12 shutdown, yes... and started manually (last powered off = first started) after starting both Hyper-V hosts.
Is there any recipe for doing this so as to avoid a full resync??? ;( I'm sure there was no more traffic between the nodes, and the cluster was switched off as a service/instance beforehand.
yaroslav (staff)
Staff
Posts: 3111
Joined: Mon Nov 18, 2019 11:11 am

Wed Oct 09, 2024 9:09 am

Greetings,

No. You need to enable StarWind Maintenance mode right after you shut down the VMs. See more on Maintenance mode https://www.starwindsoftware.com/help/M ... eMode.html and the restart and production shutdown procedures https://knowledgebase.starwindsoftware. ... installed/
See the reasons why a full synchronization may start https://knowledgebase.starwindsoftware. ... may-start/
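For illustration only, a very rough sketch of that order on the Hyper-V hosts (not the official procedure - follow the KB articles above; all VM and host names below are placeholders):

Code:

# Rough sketch of a planned shutdown, assuming 2 Hyper-V hosts (hv-1, hv-2) with one StarWind VM each.
# 1. Shut down the clustered VMs, then stop the cluster.
Stop-Cluster
# 2. Enable StarWind maintenance mode on the HA devices (StarWindX script - see further down this thread).
# 3. Shut down the StarWind appliance VMs, then the Hyper-V hosts themselves.
Stop-VM -Name 'SW-VSAN-02'
Stop-VM -Name 'SW-VSAN-01'
Stop-Computer -ComputerName 'hv-2'
Stop-Computer -ComputerName 'hv-1'
# Power everything back on in reverse order and disable maintenance mode once both StarWind VMs are up.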

P.S. Both nodes are kind of primary, as StarWind VSAN features active-active mirroring. See more on the nodes' priorities viewtopic.php?t=5731.
mic77
Posts: 27
Joined: Fri Oct 04, 2024 9:58 pm

Wed Oct 09, 2024 11:15 am

I use the StarWind VM solution - not the native install = how can I put it into maintenance mode then?
This function is not available from the web console.
mic77
Posts: 27
Joined: Fri Oct 04, 2024 9:58 pm

Wed Oct 09, 2024 12:27 pm

And one more question.
I always cleaned the /mnt/... and ./headers/ directories,
but after installing the native console on Hyper-V and connecting to both virtual VSAN appliances, I see some artifacts on one of them (LUNs which no longer exist, but the console still shows some info about them = ones I created earlier for testing and then deleted). They still exist in StarWind.cfg on one node - how can I clean that up?
Can I simply delete StarWind.cfg - or is there another way?

OK, this I fixed by comparing the files and removing the 3 lines about the artifact LUNs.

But I still have a question - how do I put the nodes into maintenance mode via the command line?
mic77
Posts: 27
Joined: Fri Oct 04, 2024 9:58 pm

Wed Oct 09, 2024 4:14 pm

The cluster is OFF - in Windows (from the cluster management console) I replaced the IP with my 1st VSAN appliance node, then:

Code:

PS C:\Users\Administrator.MV> C:\Users\Administrator.MV\Desktop\MaintenanceModeON.ps1
C:\Users\Administrator.MV\Desktop\MaintenanceModeON.ps1 : Operation cannot be completed. Device has client connections.
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,MaintenanceModeON.ps1
So I have to detach all iSCSI connections before each planned shutdown?
Confused again...
On both Windows Hyper-V hosts these disks are offline after switching the Hyper-V cluster off from the cluster console.
Is there any easy way to just shut down the storage providers properly with the free release?
yaroslav (staff)
Staff
Posts: 3111
Joined: Mon Nov 18, 2019 11:11 am

Wed Oct 09, 2024 5:26 pm

how can I put it into maintenance mode then?
There is a script for that. The Free Windows-based installation also comes with a GUI solely for monitoring purposes; you need to use StarWindX scripts there too.
(LUNs which no longer exist, but the console still shows some info about them = ones I created earlier for testing and then deleted) - they still exist in StarWind.cfg on one node - how can I clean that up?
Make sure that the restart precautions are followed (HA devices are synchronized and connected over iSCSI in the MS iSCSI initiator):
1. Stop the starwind-virtual-san.service on one node once synchronization is over.
2. Make a copy of StarWind.cfg.
3. Remove the old entries from the *.cfg under <devices> and <targets>.
4. Start the service.
5. Wait for fast synchronization to complete.
6. Check that the HA devices are connected over iSCSI from both nodes.
7. Repeat for the remaining VM.
So I have to detach all iSCSI connections before each planned shutdown?
Yes, but you can try this:

Code:

$StarWind = 'SW-HCA-VM-01' # specify your StarWind VM (or host) IP address

# Getting StarWind devices into "Maintenance Mode"
Import-Module StarWindX

try
{
    $server = New-SWServer -host $StarWind -port 3261 -user root -password starwind
    $server.Connect()
    Write-Host "Devices:" -foreground yellow

    foreach ($device in $server.Devices) {
        if (!$device) {
            Write-Host "No device found" -foreground red
            return
        } else {
            $device.Name
            $disk = $device.Name
            if ($device.Name -like "HAimage*") {
                $device.SwitchMaintenanceMode($true, $true)
                Write-Host "$disk entered maintenance mode"
            } else {
                Write-Host "$disk is not an HA device, maintenance mode is not supported"
            }
        }
    }
}
catch
{
    Write-Host $_ -foreground red
}
finally
{
    $server.Disconnect()
}
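To take the devices back out of maintenance mode once both StarWind VMs are up again, the same loop can presumably be reused with the first argument of SwitchMaintenanceMode set to $false.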
 
 
On both Windows Hyper-V hosts these disks are offline after switching the Hyper-V cluster off from the cluster console.
Is there any easy way to just shut down the storage providers properly with the free release?
As a side note, cluster disks have their own thresholds for coming online. Please bring them up manually in Failover Cluster Manager.
Make sure they are connected from the synchronized node.
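If you prefer the command line, a minimal sketch with the FailoverClusters PowerShell module (assuming it is available on the Hyper-V hosts; this is just an illustration, not StarWind tooling):

Code:

# List cluster disks that are still offline after the hosts come back up
Import-Module FailoverClusters
Get-ClusterResource | Where-Object { $_.ResourceType -eq 'Physical Disk' -and $_.State -eq 'Offline' }

# Bring them online once the StarWind devices are synchronized and connected over iSCSI
Get-ClusterResource | Where-Object { $_.ResourceType -eq 'Physical Disk' -and $_.State -eq 'Offline' } |
    Start-ClusterResource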

P.S. You can always edit your post so it is easier to reply to the request.
mic77
Posts: 27
Joined: Fri Oct 04, 2024 9:58 pm

Wed Oct 09, 2024 5:49 pm

The artifacts I cleaned manually.

Then maintenance mode off/on - I had already found that topic and it looks like it works (yes, before rebooting/switching everything off, the first thing I do is:
take all disks in the cluster manager offline
stop the cluster itself)
then this:
viewtopic.php?f=5&t=5664
looks like it works (I have not rebooted everything yet - will try ASAP).

And it works after a reboot, apart from the fact that before I had 4 connections per LUN and now have 8 & 12 :O (without any reconfiguration)... but no full resync and everything works - looks like it works :)


Device  | Type              | Connections | Status | Size | Protocol | Targets
csv1    | High availability | 12          | Online | 6 TB | iSCSI    | iqn.2008-08.com.starwindsoftware:172.16.0.21-csv1, iqn.2008-08.com.starwindsoftware:172.16.0.22-csv1
witness | High availability | 8           | Online | 5 GB | iSCSI    | iqn.2008-08.com.starwindsoftware:172.16.0.21-witness, iqn.2008-08.com.starwindsoftware:172.16.0.22-witness

Why multiple iSCSI connections? (After the normal procedure - meaning attaching a new LUN to both Hyper-V hosts - I normally have 4 per LUN???)
yaroslav (staff)
Staff
Posts: 3111
Joined: Mon Nov 18, 2019 11:11 am

Wed Oct 09, 2024 8:23 pm

Great news!
StarWind VSAN establishes its own connections: 3x connections per synchronization link (1x heartbeat, 1x "data" (synchronization), and 1x control for internal commands (connectivity)) and 2x connections per heartbeat link (1x heartbeat and 1x control).
mic77
Posts: 27
Joined: Fri Oct 04, 2024 9:58 pm

Thu Oct 10, 2024 11:09 pm

Does the StarWind VSAN free version, as an Ubuntu virtual appliance, support RDMA over Ethernet and iSER?