Considering switching from FreeNAS

Software-based VM-centric and flash-friendly VM storage + free version


buzzzo
Posts: 13
Joined: Wed Jan 20, 2016 6:39 pm

Mon Mar 14, 2016 3:32 pm

Hi

FYI: I've set up a 2-node XenServer cluster with 2 iSCSI interfaces on each node, and by running iftop on the iSCSI interfaces I can assure you
that iSCSI traffic flows correctly over both interfaces.

As a side note: XenServer is just Linux, so its multipath is standard Linux multipath.
I've never heard of Linux multipath failing to do its job: splitting traffic across multiple paths.
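
For reference, a minimal way to check this on a XenServer host is with the standard Linux open-iscsi and dm-multipath tools (the interface names eth2/eth3 below are just examples, not the ones from my cluster):

# list the active iSCSI sessions - there should be one per iSCSI interface
iscsiadm -m session

# show the multipath topology and confirm both paths are active
multipath -ll

# watch per-interface traffic while running I/O against the storage
iftop -i eth2
iftop -i eth3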

If you want, I can provide details on my XenServer cluster setup.
Dmitry (staff)
Staff
Posts: 82
Joined: Fri Mar 18, 2016 11:46 am

Mon Mar 21, 2016 4:05 pm

@darklight and @buzzzo, thank you very much for your help, we really appreciate it.
@cabuzzi, any updates for us, please?
webguyz
Posts: 23
Joined: Sat Jan 26, 2013 12:34 am

Sun Jan 22, 2017 8:19 pm

Sorry for getting into an old thread, but I am puzzled by something. If I have a single SAN or NAS, am using MPIO, and have it set to Round Robin, I will only ever see the maximum speed of one of the interfaces, correct? Let's say I'm connected to a single SAN via MPIO with 2 Gigabit Ethernet connections using RR. First one Ethernet link and then the other will carry the connection, so the most I will see is 1 gigabit on any interface. If I have 4 gigabit connections and bond them, the most I will see on any one connection is still 1 Gb, but I will have a bigger pipe so more VMs can connect simultaneously; still, the maximum speed on any one MPIO connection is 1 Gb.

Can StarWind connect two gigabit connections using MPIO and have them send SIMULTANEOUSLY, so that I get 2 Gb of throughput on a single server?

From what I can tell, the use of MPIO in StarWind's case is for HA, where you have 2 SANs or NASes and you connect one MPIO path to the first SAN and the second MPIO path to the second SAN. You will still get only 1 Gb of throughput, but you get HA by replicating the data.

On a single-server SAN you don't gain much with MPIO other than switch failover, if each of your MPIO connections goes through a different switch.

This thread seemed to be about getting better performance with RR, but I didn't think that was possible. If it is, please enlighten me.

Thanks
Michael (staff)
Staff
Posts: 317
Joined: Thu Jul 21, 2016 10:16 am

Wed Jan 25, 2017 12:54 pm

Hello webguyz,

StarWind VSAN just creates the targets that can be connected in the iSCSI Initiator. With MPIO enabled and the Round Robin policy selected in the iSCSI Initiator, the system uses all available paths in a balanced way.
This means that if you have several 1 Gbps network interfaces and you configure them to connect to the storage, you will, first of all, get fault-tolerant connectivity to the storage; you will also get a "bigger pipe" and should see increased performance, as long as it is not limited by the storage.
StarWind HA performance can also be limited by the underlying storage performance on each node and by the synchronization channel throughput.
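
As a rough illustration, on a Windows host the MPIO/Round Robin setup described above can be done with the built-in MPIO and iSCSI PowerShell cmdlets; the IQN and the portal/initiator addresses below are placeholders, not values from this thread:

# claim iSCSI devices with the Microsoft DSM (the MPIO feature must be installed)
Enable-MSDSMAutomaticClaim -BusType iSCSI

# make Round Robin the default load-balance policy for newly claimed MPIO devices
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# discover the target portals (one per target-side NIC)
New-IscsiTargetPortal -TargetPortalAddress 10.0.10.1
New-IscsiTargetPortal -TargetPortalAddress 10.0.20.1

# log in to the same target once per initiator NIC so MPIO gets two paths
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:sw1-target1" -InitiatorPortalAddress 10.0.10.11 -TargetPortalAddress 10.0.10.1 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:sw1-target1" -InitiatorPortalAddress 10.0.20.11 -TargetPortalAddress 10.0.20.1 -IsMultipathEnabled $true -IsPersistent $true

With both sessions established, mpclaim -s -d should show the disk with two paths, and Round Robin will alternate I/Os between them.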