Do I set iSCSI Initiator - Devices button - MPIO to localhost on all discovered targets?

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Anatoly (staff), Max (staff)

dcadler
Posts: 10
Joined: Wed Jun 03, 2015 9:12 pm

Tue Aug 11, 2015 10:37 pm

In the StarWind technical paper titled "StarWind Virtual SAN® Installation and Configuration of Hyper-Converged 2 Nodes with Hyper-V Cluster
DATE: JANUARY 2015", on page 35, step 13, it states "After both StarWind nodes are restarted and the MPIO support is enabled you have to configure the MPIO policy for each device specifying localhost (127.0.0.1) as the active path."

Do I do this for all discovered targets on the iSCSI Initiator Targets tab, in the Discovered targets section? I have the 3 discovered targets for the local Witness, CSV1 and CSV2 targets, plus the three replicated targets on the 2nd node. So, do I click on all 6 discovered targets, one at a time, and with each target selected, click the Devices button, then the MPIO button, then make the localhost path the active path? Even for the 3 targets on the other node?

And then, on the other node with the replicated devices, do I set all 6 of their MPIO paths to make the localhost path active on them as well?

It seems strange to make the targets on the remote server have an MPIO path of localhost active.

Thanks in advance for any help.

Dave
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Wed Aug 12, 2015 12:13 pm

Hi Dave,

I just connect the local ones locally via loopback and the remote ones remotely using the corresponding initiator and target portal IPs. And a Round Robin policy. It's just faster :)
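For reference, the same connections can be scripted with the built-in Windows iSCSI PowerShell cmdlets. This is only a sketch: the portal IPs below (10.1.1.1 for HV1, 10.1.1.2 for HV2) are hypothetical examples, and the IQNs are the ones from this thread.

```shell
# Sketch only - assumed example addresses: HV1 iSCSI IP 10.1.1.1, HV2 iSCSI IP 10.1.1.2.
# Run on HV1.

# Local targets: connect through the loopback portal, as suggested above.
New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:hv1.company.lan-csv1" `
    -TargetPortalAddress 127.0.0.1 -InitiatorPortalAddress 127.0.0.1 -IsPersistent $true

# Remote (partner) targets: connect through the 10 GbE iSCSI addresses.
New-IscsiTargetPortal -TargetPortalAddress 10.1.1.2
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:hv2.company.lan-csv1" `
    -TargetPortalAddress 10.1.1.2 -InitiatorPortalAddress 10.1.1.1 -IsPersistent $true
```

Repeat the `Connect-IscsiTarget` pair for each remaining device; `-IsPersistent $true` makes the sessions survive a reboot.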
dcadler
Posts: 10
Joined: Wed Jun 03, 2015 9:12 pm

Wed Aug 12, 2015 2:33 pm

OK, setting the MPIO policy device path for each local iSCSI Initiator target to the 127.0.0.1 loopback and the MPIO policy device path for each remote iSCSI Initiator target to their respective remote IP addresses makes sense.

As another note: I am setting up a 2-node Hyper-V failover cluster. The StarWind document that I mentioned in my opening post says to set the MPIO load balance policy to "Fail Over Only". Are you saying I should use "Round Robin" instead?

Thanks,

Dave
Vladislav (Staff)
Staff
Posts: 180
Joined: Fri Feb 27, 2015 4:31 pm

Thu Aug 13, 2015 11:18 am

Hi Dave,

Round Robin is a good choice if the sync channel is extremely fast, or at least faster than the storage where the image files are located. The RR policy carefully balances the workload between the two servers.

If the sync channel is not super fast, then Fail Over Only is definitely the choice, because this policy excludes the slow network from the I/O path.

Hope it makes sense.
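Either policy can also be applied from the command line. A sketch, assuming the stock Windows MPIO tooling (disk numbers come from your own `mpclaim -s -d` output):

```shell
# Set the MSDSM default so newly claimed LUNs come up with the chosen policy:
#   RR  = Round Robin (fast sync channel)
#   FOO = Fail Over Only (slower sync channel)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO

# Existing MPIO disks can be switched per device with mpclaim
# (last argument: 1 = Fail Over Only, 2 = Round Robin).
mpclaim -l -d 0 1
```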
dcadler
Posts: 10
Joined: Wed Jun 03, 2015 9:12 pm

Thu Aug 13, 2015 2:58 pm

I have two 10GbE NICs in each of the two Hyper-V servers; each has a VLAN for SYNC and a VLAN for iSCSI. Does that qualify as fast?
Vladislav (Staff)
Staff
Posts: 180
Joined: Fri Feb 27, 2015 4:31 pm

Fri Aug 14, 2015 10:15 am

OK, two 10 Gbps NICs are fast enough if the storage is slower. BUT there is another concern: VLANs (sync and iSCSI assigned to the same physical NIC) may cause instability; at least, it was confirmed a few times by our techs. Therefore, you are welcome to use VLANs, but in that case Fail Over Only is the option for you.
dcadler
Posts: 10
Joined: Wed Jun 03, 2015 9:12 pm

Fri Aug 14, 2015 4:03 pm

OK, I have switched the MPIO policy to Fail Over Only. One thing I noticed while stepping through the iSCSI Initiator targets is as follows:

I have two servers, which will participate in a Hyper-V Failover Cluster
1. HV1
2. HV2
I have 4 devices defined in Starwind
1. Witness - 1GB Virtual Hard Drive (Defined on HV1, Replication Partner is on HV2)
2. CSV1 - 1.99TB Virtual Hard Drive (Defined on HV1, Replication Partner is on HV2)
3. CSV2 - 1.99TB Virtual Hard Drive (Defined on HV1, Replication Partner is on HV2)
4. ClusValidation - 1GB Virtual Hard Drive (Defined on HV1, Replication Partner is on HV2)

So, in my iSCSI Initiator, on the Targets tab, on both servers, I see the following targets:
iqn.2008-08.com.starwindsoftware:hv1.company.lan-witness
iqn.2008-08.com.starwindsoftware:hv1.company.lan-csv1
iqn.2008-08.com.starwindsoftware:hv1.company.lan-csv2
iqn.2008-08.com.starwindsoftware:hv1.company.lan-clusvalidation
iqn.2008-08.com.starwindsoftware:hv2.company.lan-witness
iqn.2008-08.com.starwindsoftware:hv2.company.lan-csv1
iqn.2008-08.com.starwindsoftware:hv2.company.lan-csv2
iqn.2008-08.com.starwindsoftware:hv2.company.lan-clusvalidation


So, when I am configuring MPIO on the HV1 server, I first select each hv1.company.lan-... target, set the MPIO policy to Fail Over Only, and make sure that the 127.0.0.1 connection is active, since those targets are local to that computer.
However, once I have done that and I look at the MPIO policies on the hv2.company.lan-... targets on server HV1 (which are the remote targets), they already have the policy set to Fail Over Only and the 127.0.0.1 connection active. If I try to change one of them to use the local IP of either of the two 10GbE adapters, that also becomes the active connection on the corresponding HV1 target.

The same thing happens on HV2.

So, in effect, I can really only set the MPIO policy on the local targets of each server.

Is that the way it is supposed to work?

Please note that I pass the cluster validation test 100% when configured this way.
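The active-path behavior described above can be double-checked from the command line. A sketch using standard Windows tooling (disk numbers will differ on your system):

```shell
# List MPIO disks with their load-balance policies, then show per-path
# detail for one disk; with Fail Over Only, the Active/Optimized path
# should be the 127.0.0.1 session on each node.
mpclaim -s -d
mpclaim -s -d 0

# Cross-check which portal each session actually uses:
Get-IscsiSession | Get-IscsiConnection |
    Select-Object TargetAddress, InitiatorAddress
```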

Thanks,

Dave
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Mon Aug 17, 2015 2:39 pm

Hi Dave,
Since the local connection is preferred for Fail Over Only, yes, it is supposed to work this way.
Vladislav (Staff)
Staff
Posts: 180
Joined: Fri Feb 27, 2015 4:31 pm

Wed Sep 16, 2015 8:48 pm

Thank you for helping out, darklight :)

Is there anything else we can help you with, Dave?