Best network setup for a 3-node HA virtualisation cluster. X540/X520?

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

upgraders
Posts: 16
Joined: Mon Mar 24, 2014 12:22 pm

Thu Jul 31, 2014 4:10 pm

LOL, I made the software RAID mistake in another situation this year (not clustering), and it's amazing how poorly it performs. After replacing it with hardware RAID, poof, all my problems were fixed. So good luck to you; let's hope that was the problem.
robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Thu Jul 31, 2014 4:32 pm

It's recommended to avoid switches as they add operational latency, cost and failure points. However with a 3-node setup it's very difficult to have a fully interconnected mesh so going with a pair of 10 GbE NICs per host and a pair of 10 GbE switches is a proper way to go.
Don't quite get this? With 3 nodes and 2 x 10GbE per node, you can set up a full sync mesh. Sure, you need more NICs to handle other traffic such as the normal LAN and possibly MPIO iSCSI from other servers.

Right?
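For what it's worth, the port arithmetic behind a full mesh is easy to sketch; here's a minimal example (node and per-link port counts are illustrative, nothing StarWind-specific):

```python
# Back-of-envelope port math for a fully interconnected sync mesh.
def mesh_requirements(nodes: int, ports_per_link: int = 1) -> dict:
    """Direct links and per-node ports needed for a full mesh of `nodes` hosts."""
    links = nodes * (nodes - 1) // 2                # one direct link per node pair
    ports_per_node = (nodes - 1) * ports_per_link
    return {"direct_links": links, "ports_per_node": ports_per_node}

# 3 nodes, one 10GbE port per link: 3 links total, 2 ports per node,
# so a dual-port 10GbE NIC per host covers the sync mesh on its own.
print(mesh_requirements(nodes=3))
```

That still leaves the normal LAN and any MPIO iSCSI traffic to other ports, as mentioned above.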

Cheers, Rob.
Slingshotz
Posts: 26
Joined: Sat Apr 12, 2014 6:52 am

Thu Jul 31, 2014 6:29 pm

That is exactly the design diagram I was looking for as a reference over a month ago on how to set up our system! If I'd had it at the start, I wouldn't have had to try out so many different iterations of network design.

In your case, do you have eight network ports in each server? I was thinking of just buying another two 10GbE port cards for each server to physically separate the iSCSI channel from the sync channel. I am running three 10K SAS 1.2TB drives in RAID 0 in each server for the data, as well as separate mirrors for the OS, but I have not tested the time for full syncs of all the CSVs. It only takes maybe 1-2 minutes to sync a 200GB and a 1GB CSV when I test a failure by disabling the network cards on one server.

How do you measure the IOPS in your system?
upgraders
Posts: 16
Joined: Mon Mar 24, 2014 12:22 pm

Thu Jul 31, 2014 8:54 pm

Well, I have Dell R710s, which have four 1GbE NICs built in, but I also have a four-port 1GbE NIC (and another two-port card for a Synology SAN I use for backups). In use, per the diagram, I have only been using two of the 1GbE ports as direct links to the other two nodes for the StarWind iSCSI channel (and the dual 10GbE for the sync channel). I have two ports still available to add more initiators and performance, but I have not done that yet, as I just ordered the hard drives and the new NIC cards. I am also doing a cluster upgrade from 2012 to 2012 R2, so I will be busy over the next week or two doing all this.

I agree it would be nice to have had one of these diagram scenarios on the StarWind site to visualize the paths, perhaps a nice "Getting Started basics" guide. Without the great support at StarWind, I don't think I could have figured this out on my own.

Jason
upgraders
Posts: 16
Joined: Mon Mar 24, 2014 12:22 pm

Thu Jul 31, 2014 8:57 pm

Oops, missed the IOPS question.

There are a few ways, but the easiest is to install the free version of Veeam ONE; it has a great set of real-time monitoring and reporting tools (and a GREAT Hyper-V monitoring tool set). The other way is IOMeter, but you have to assign a drive to your CSV and push data through it while it already has a load... so it's not entirely accurate unless you have some downtime.
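If you just want a rough sanity check rather than a proper tool, something along these lines does a timed run of small random reads (a minimal sketch only; the CSV path, file size and duration are placeholders, and OS caching will inflate the numbers compared to IOMeter):

```python
# Crude read-IOPS sanity check: timed 4 KiB random reads against a test file.
# Not a substitute for IOMeter / Veeam ONE; OS caching skews the result.
import os
import random
import time

TEST_FILE = r"C:\ClusterStorage\Volume1\iops_test.bin"  # placeholder path
FILE_SIZE = 1 * 1024**3   # 1 GiB test file
BLOCK = 4096              # 4 KiB per read
DURATION = 10             # seconds to sample

# Create the test file once, filled with random data so it isn't sparse.
if not os.path.exists(TEST_FILE):
    with open(TEST_FILE, "wb") as f:
        chunk = os.urandom(1024 * 1024)            # 1 MiB of random data
        for _ in range(FILE_SIZE // len(chunk)):
            f.write(chunk)

ops = 0
deadline = time.time() + DURATION
with open(TEST_FILE, "rb", buffering=0) as f:
    while time.time() < deadline:
        f.seek(random.randrange(0, FILE_SIZE - BLOCK))
        f.read(BLOCK)
        ops += 1

print(f"~{ops / DURATION:.0f} read IOPS over {DURATION}s")
```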
robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Thu Jul 31, 2014 9:46 pm

In your case, do you have eight network ports in each server? I was thinking of just buying another two 10GbE port cards for each server to physically separate the iSCSI channel from the sync channel.
That's the design I was leaning towards: 10GbE for sync, 10GbE for iSCSI and the remaining 1GbE for LAN.

Cheers, Rob.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Sun Aug 03, 2014 8:02 pm

10 GbE for sync and 1 GbE uplinks, or 40/56 GbE for sync and 10 GbE uplinks, are both proper configs. Going symmetric will of course also work, but it's very difficult to make it run at full uplink wire speed.
robnicholson wrote:
In your case, do you have eight network ports in each server? I was thinking of just buying another two 10GbE port cards for each server to physically separate the iSCSI channel from the sync channel.
That's the design I was leaning towards: 10GbE for sync, 10GbE for iSCSI and the remaining 1GbE for LAN.

Cheers, Rob.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Sun Aug 03, 2014 8:03 pm

SQLIO is another good one. Some people use SolarWinds products (funny, people tend to confuse them with StarWind and vice versa) for storage IOPS monitoring.
upgraders wrote:Oops, missed the IOPS question.

There are a few ways, but the easiest is to install the free version of Veeam ONE; it has a great set of real-time monitoring and reporting tools (and a GREAT Hyper-V monitoring tool set). The other way is IOMeter, but you have to assign a drive to your CSV and push data through it while it already has a load... so it's not entirely accurate unless you have some downtime.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Sun Aug 03, 2014 8:04 pm

"Getting Started" is already re-published using V8 as a core and we're working on a Reference Design Guide. Good news: first one will be Dell R720/730 based :)
upgraders wrote:Well, I have Dell R710s, which have four 1GbE NICs built in, but I also have a four-port 1GbE NIC (and another two-port card for a Synology SAN I use for backups). In use, per the diagram, I have only been using two of the 1GbE ports as direct links to the other two nodes for the StarWind iSCSI channel (and the dual 10GbE for the sync channel). I have two ports still available to add more initiators and performance, but I have not done that yet, as I just ordered the hard drives and the new NIC cards. I am also doing a cluster upgrade from 2012 to 2012 R2, so I will be busy over the next week or two doing all this.

I agree it would be nice to have had one of these diagram scenarios on the StarWind site to visualize the paths, perhaps a nice "Getting Started basics" guide. Without the great support at StarWind, I don't think I could have figured this out on my own.

Jason
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

awedio
Posts: 89
Joined: Sat Sep 04, 2010 5:49 pm

Sun Aug 03, 2014 11:39 pm

anton (staff) wrote:"Getting Started" is already re-published using V8 as a core and we're working on a Reference Design Guide. Good news: first one will be Dell R720/730 based :)
When should we expect the release of the Reference Design Guide?
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Aug 04, 2014 8:16 am

No ETA; we just started.
awedio wrote:
anton (staff) wrote:"Getting Started" is already re-published using V8 as a core and we're working on a Reference Design Guide. Good news: first one will be Dell R720/730 based :)
When should we expect the release of the Reference Design Guide?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Mon Aug 04, 2014 10:50 am

Anton - you have a PM. Cheers, Rob.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue Aug 05, 2014 11:42 am

I don't have anything, either here or in my Outlook inbox :(
robnicholson wrote:Anton - you have a PM. Cheers, Rob.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

monetconsulting
Posts: 5
Joined: Wed Jul 30, 2014 1:42 pm

Tue Aug 05, 2014 7:23 pm

Well, I installed an LSI 9240-8i in each server; the RAID is set to RAID 0. I have crossover cables connecting the three servers at 10GbE (two adapters per server). Sync is running on 10GbE and the heartbeat on 1GbE. I have set up a 1TB disk device and set up replicas to the other servers. The initial sync time for 1TB shows over 9 hours. Am I missing something?
robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Wed Aug 06, 2014 9:50 am

There have been other reports on here about slow synchronisation over 10GbE. My lab is only using 2 x 1GbE, but the sync times do seem slow there as well. A ballpark figure for raw network copy times can be found here:

http://techinternets.com/copy_calc

1TB at 10Gbit/s is just ~15 minutes, but your bottleneck here won't be the 10GbE network; it'll be the disks. Let's assume your RAID-0 array can write sequentially at 150MB/s (does that sound about right?); then transferring 1TB will take roughly 2 hours.
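A quick sketch of that arithmetic, assuming the 150MB/s sequential write figure above (swap in whatever your array actually manages):

```python
# Estimated full-sync time: size divided by the slower of network and disk.
def sync_hours(size_tb: float, link_gbit_s: float, disk_mb_s: float) -> float:
    size_mb = size_tb * 1_000_000        # decimal TB -> MB
    net_mb_s = link_gbit_s * 1000 / 8    # Gbit/s -> MB/s
    return size_mb / min(net_mb_s, disk_mb_s) / 3600

print(f"10GbE wire speed only: {sync_hours(1, 10, 1e9):.2f} h")  # ~0.22 h (~13 min)
print(f"150MB/s disk-bound:    {sync_hours(1, 10, 150):.2f} h")  # ~1.85 h
```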

So no matter what's going on here, a 9-hour sync time seems too slow. I've noted before that the time calculations on sync need some work. Even the elapsed time is inaccurate, and whilst I accept all the caveats about estimated copy times, given the amount of data being copied, I'm sure the estimate could be better. The worrying thing is that I think it underestimates, so your 9 hours could turn out to be a lot longer.

I'll do a test in my lab with a 1TB file. What format is it BTW? Flat or LSFS?

Cheers, Rob.