Access Rights for Cluster

qwertz
Posts: 36
Joined: Wed Dec 12, 2012 3:47 pm

Sun Feb 25, 2018 2:17 pm

Hi there!
First off: thanks again for this great software! I love it and what it enables me to squeeze out of my hardware!

I have a "question" (maybe it's more of a long story with a couple of things I didn't fully understand):
I am wondering about the access rights for my 2-node hyperconverged cluster at home.
It's based on Windows Server 2012 R2, tiered Storage Spaces, and StarWind Virtual SAN v8.0.0 (Build 11456, [SwSAN], Win64).

The cluster nodes each have 2x 10G connections in an LACP trunk that carries a VLAN trunk.
Each node has several vNICs that connect it into the respective VLANs. The ones of importance for this scenario (see the sketch after the list):
- Node1 gets 1 in the last octet of each subnet; Node2: you guessed it, 2.
- Server network: VLAN 1 / subnet 172.17.1.0/24 / purpose: client connections for the cluster, heartbeat
- Storage network: VLAN 2 / subnet 172.17.2.0/24 / purpose: iSCSI
- Cluster network: VLAN 3 / subnet 172.17.3.0/24 / purpose: cluster traffic, like live migration
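
To make the layout concrete, here is a minimal Python sketch of the addressing convention (the subnet names and the node-numbering rule come straight from the list above; nothing in it is StarWind-specific):

```python
# Network plan from the post: node N gets host address N in every subnet.
import ipaddress

VLANS = {
    "Server":  {"vlan": 1, "subnet": "172.17.1.0/24"},  # client connections, heartbeat
    "Storage": {"vlan": 2, "subnet": "172.17.2.0/24"},  # iSCSI
    "Cluster": {"vlan": 3, "subnet": "172.17.3.0/24"},  # live migration etc.
}

def node_ip(network: str, node: int) -> str:
    """Node1 gets .1 in the last octet, Node2 gets .2, and so on."""
    net = ipaddress.ip_network(VLANS[network]["subnet"])
    return str(net.network_address + node)

assert node_ip("Storage", 1) == "172.17.2.1"
assert node_ip("Storage", 2) == "172.17.2.2"
```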

The cluster has been running for... I dunno, perhaps 3-4 years, and I've upgraded through several StarWind releases.
But lately I had a look at the "Access Rights" tab and figured out there wasn't much left of what I initially configured there:
at least 10 rules had been added to it automatically, some of them allowing things I don't want.
So I decided it was time to clean up there.

First I wrote a rule on node1 that allows everything for the cluster and moved it to the top:
Source: 127.0.0.1 as well as 172.17.2.2; Destinations: quorum, virtual-machines, data-store1 and datastore2; Interface: 127.0.0.1 as well as 172.17.2.1

Then I made the same rule on node2, changing the node IPs accordingly, and also moved it to the top:
Source: 127.0.0.1 as well as 172.17.2.1; Destinations: quorum, virtual-machines, data-store1 and datastore2; Interface: 127.0.0.1 as well as 172.17.2.2

Then I had the idea to put one of the nodes into "maintenance mode" in Failover Clustering, so that everything happens on one node, just in case I end up with a split brain. (Maybe this should have been the first step, but anyway.)

Then I began to remove every other rule one by one, monitoring the status of the replication as well as the heartbeat status: everything cool.
BUT: I was wondering about the heartbeat connection: there is no rule that allows connections to the targets through the Server network.
- Question 1: Do I need to write a rule that allows connections via the Server network for heartbeat purposes?

I kept wondering but continued to remove all other rules except the DefaultAccessPolicy (DENY ALL) and the one listed above. Still, everything continued to run smoothly.
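
My working assumption about how the list is evaluated (an assumption based on the fact that the console lets you move rules up and down, not documented behavior) is plain top-down, first-match-wins, with the DefaultAccessPolicy catching everything else:

```python
# Toy first-match evaluation of an access-rule list with a DENY ALL fallback.
def evaluate(rules, source, destination, interface):
    for r in rules:
        if (source in r["sources"]
                and destination in r["destinations"]
                and interface in r["interfaces"]):
            return r["action"]
    return "deny"  # DefaultAccessPolicy (DENY ALL)

rules = [{
    "action": "allow",
    "sources": ["127.0.0.1", "172.17.2.2"],
    "destinations": ["quorum", "virtual-machines", "data-store1", "datastore2"],
    "interfaces": ["127.0.0.1", "172.17.2.1"],
}]
print(evaluate(rules, "172.17.2.2", "quorum", "172.17.2.1"))  # allow (partner)
print(evaluate(rules, "172.17.1.9", "quorum", "172.17.1.1"))  # deny (falls through)
```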

Since everything kept running smoothly while I expected it to blow up (as there is no rule that allows the heartbeat connections), I even rebooted the "paused" cluster node.
- Question 2: My thought was: "Maybe the access rights don't affect already established connections..." Is this correct?
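
To spell out that hypothesis (again, an assumption on my side, not confirmed StarWind behavior): rules would only be consulted when a session logs in, so a live session survives a rule change and is only re-checked once it has to reconnect:

```python
# Toy model of "rules are checked at login time only".
class Session:
    def __init__(self, allowed_at_login: bool):
        self.alive = allowed_at_login   # evaluated once, at login

    def rules_changed(self) -> None:
        pass                            # no effect on an established session

    def reconnect(self, allowed_now: bool) -> None:
        self.alive = allowed_now        # re-evaluated against the current rules

s = Session(allowed_at_login=True)  # established before the cleanup
s.rules_changed()                   # rules removed; session stays up
s.reconnect(allowed_now=True)       # after a reboot it must pass the current rules
```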

Still, everything kept running and it synced up again like normal. BUT: I discovered that after the node reboot, 4 rules had been added again automatically?! Why?
They basically contain what I wrote into my single rule, but again they allow connections through any interface. How do I address this? Do you see mistakes in the rules I created?
Other thoughts?

Thanks in advance... and have a great Sunday afternoon ;)
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Mon Feb 26, 2018 8:15 pm

1. Access rules are configured to allow/deny client connections via iSCSI. They have nothing to do with the purposes that Heartbeat serves. Access rights affect targets, while Heartbeat is used by the StarWind VSAN service itself.
2. What rules reappeared in your case?
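
To illustrate point 1: the Access Rights tab gates iSCSI target logins, while the heartbeat only needs plain IP reachability between the StarWind services. A rough Python sketch (the port numbers and the plain-TCP probe are assumptions for illustration, not documented StarWind internals):

```python
import socket

def tcp_reachable(ip: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to ip:port can be established at all."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# An iSCSI login to a target is reachability PLUS passing the access rules:
portal_up = tcp_reachable("172.17.2.2", 3260)   # 3260 = standard iSCSI port
# The heartbeat never performs a target login, so no access rule applies;
# it just needs the partner to be reachable on the Server network
# (modeling it as TCP on the same port is an assumption for this sketch):
heartbeat_path_up = tcp_reachable("172.17.1.2", 3260)
print(portal_up, heartbeat_path_up)
```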

P.S. LACP is a bad practice for any iSCSI connections, including synchronization/replication and client access links. I would suggest avoiding it, as it introduces increased latency.
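
The underlying reason (general Ethernet behavior, not StarWind-specific): LACP hashes each flow onto a single member link, so one iSCSI TCP session can never use more than one 10Gbit link, while the usual alternative, MPIO over separate subnets, opens multiple sessions that genuinely spread across the links. A toy illustration:

```python
import hashlib

def lacp_member_link(src_ip, dst_ip, src_port, dst_port, n_links=2):
    """Toy L3/L4 hash policy: every packet of a flow lands on the same link."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n_links

# One iSCSI session = one flow = always the same member link:
print(lacp_member_link("172.17.2.1", "172.17.2.2", 51000, 3260))
print(lacp_member_link("172.17.2.1", "172.17.2.2", 51000, 3260))  # identical

# Several MPIO sessions (distinct source ports) can hash onto different links:
print({lacp_member_link("172.17.2.1", "172.17.2.2", p, 3260) for p in (51000, 51002, 51007)})
```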

P.P.S. Do I get it right that all three VLANs are configured to run through the same 2x10Gbit LACP trunk?