MPIO usage on Teamed 10GB Interfaces

Posted: Tue Dec 18, 2018 6:24 am
by ColdSweat25
I'm putting together a 2-node hyper-converged Hyper-V cluster using StarWind VSAN Free.

I have two 1GB Management/Access interfaces and two 10GB interfaces on QLogic cards.

I have used Windows Server 2016's Switch Embedded Teaming (SET) on both sets of interfaces. The 10GB interfaces are patched directly between the servers over SM fiber, with no devices in between. From this vSwitch I created three virtual adapters on the ManagementOS side (Heartbeat, "Storage" Sync Channel, VM Live Migration), each on a separate subnet. I created two HA vDisks using the Heartbeat/Sync interfaces/subnets.
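For reference, the SET vSwitch and ManagementOS adapters were created along these lines (the switch/adapter names and IP addressing below are placeholders, not my exact values):

# Create the SET team from the two 10GB ports
New-VMSwitch -Name "SETvSwitch10G" -NetAdapterName "10G-Port1","10G-Port2" -EnableEmbeddedTeaming $true -AllowManagementOS $false
# Add the ManagementOS virtual adapters
Add-VMNetworkAdapter -ManagementOS -Name "Heartbeat" -SwitchName "SETvSwitch10G"
Add-VMNetworkAdapter -ManagementOS -Name "Sync" -SwitchName "SETvSwitch10G"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "SETvSwitch10G"
# Give each virtual adapter an address on its own subnet, e.g.
New-NetIPAddress -InterfaceAlias "vEthernet (Heartbeat)" -IPAddress 172.16.10.1 -PrefixLength 24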

Is it even possible or advisable to use MPIO with this setup?

-On a side note, does anyone know where the setting for jumbo packets is on QLogic cards?

-Thanks

Re: MPIO usage on Teamed 10GB Interfaces

Posted: Tue Dec 18, 2018 10:15 am
by Oleg(staff)
First of all, we do not recommend any form of NIC teaming on the iSCSI and Synchronization channels for StarWind VSAN. It is better to use two separate links instead of a team.
Yes, it is advisable to use MPIO, but without teaming.
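If you go with MPIO, the feature and the automatic claiming of iSCSI devices can be enabled with PowerShell, for example (a minimal sketch; check StarWind documentation for the load-balance policy recommended for your configuration):

# Install the MPIO feature (a reboot is usually required)
Install-WindowsFeature -Name Multipath-IO
# Automatically claim iSCSI-attached devices with the Microsoft DSM
Enable-MSDSMAutomaticClaim -BusType iSCSI
# Set the default load-balance policy (FOO = Fail Over Only, RR = Round Robin, LQD = Least Queue Depth)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO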
-On a side note, does anyone know where the setting for jumbo packets is on QLogic cards?
You can set it via the adapter's advanced properties in network settings or with PowerShell.
But QLogic cards have problems with Jumbo frames on some firmware versions, so I would recommend keeping Jumbo frames disabled.
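For example, with PowerShell (the adapter name below is just an example; the supported values, such as 9014, depend on the driver):

# Show the current jumbo packet setting on all adapters
Get-NetAdapterAdvancedProperty -RegistryKeyword "*JumboPacket"
# Enable 9014-byte jumbo frames on one adapter
Set-NetAdapterAdvancedProperty -Name "10G-Port1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014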

Re: MPIO usage on Teamed 10GB Interfaces

Posted: Tue Dec 18, 2018 10:58 pm
by ColdSweat25
Thanks for the quick reply.

I'll keep the links un-teamed then. Is it still OK to use virtual adapters to create separate Heartbeat/Sync channels over both NICs?

My QLogic cards are brand new, so I'll test out jumbo packets and see if I have any issues. Is there a PowerShell script to check sync performance?

Re: MPIO usage on Teamed 10GB Interfaces

Posted: Wed Dec 19, 2018 9:43 am
by Oleg(staff)
There is no need for additional virtual adapters. You can use one 10GB link for synchronization and the second one for iSCSI/Heartbeat, assigning the IPs directly on the physical adapters. Also, you can assign an additional Heartbeat via the Management network.
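For example, assigning the addresses directly to the physical ports could look like this (interface names and subnets are only placeholders):

# Synchronization link (first direct-attached 10GB port)
New-NetIPAddress -InterfaceAlias "10G-Port1" -IPAddress 172.16.20.1 -PrefixLength 24
# iSCSI/Heartbeat link (second 10GB port)
New-NetIPAddress -InterfaceAlias "10G-Port2" -IPAddress 172.16.30.1 -PrefixLength 24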
Please be aware that Jumbo frames sometimes do not work properly with QLogic cards.
You can check synchronization channel performance with the help of iPerf, for example. Also, the speed of synchronization depends on the underlying storage performance on both nodes.
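A simple iPerf check between the nodes could look like this (iperf3 is assumed to be available on both nodes; the IP is a placeholder for the first node's synchronization address):

# On node 1, start the iPerf server
iperf3.exe -s
# On node 2, run several parallel streams against node 1 for 30 seconds
iperf3.exe -c 172.16.20.1 -P 4 -t 30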