Mon Jun 30, 2014 10:16 am
Hi StarWind guys,
I'm setting up a 3-node HA cluster that will become the main SAN for my VDI project. I have some questions about the networking side and how it affects performance.
I need to decide between the following:
Is there any performance difference between using 10 GbE SFP+ direct-attach (Intel X520-DA2) and 10GBASE-T (RJ45!) (Intel X540-T2) for the sync and data channels?
The only real difference between SFP+ and RJ45 should be latency: SFP+ adds roughly 0.3 microseconds per link, while 10GBASE-T adds roughly 2.6 microseconds per link. The HA networking guidelines state that connections between nodes in the same building should be under 5 ms (MILLIseconds!). Both adapters/standards should easily stay well below a millisecond between nodes, even with multiple switches in each link.
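To put rough numbers on that, here is a quick back-of-envelope check. The 2.6 µs per-link figure is the 10GBASE-T PHY latency mentioned above; the 5 µs per-switch figure and the three-switch path are just my assumptions for a pessimistic case:

```python
# Back-of-envelope latency check: even the "slow" 10GBASE-T PHY
# stays far below the 5 ms HA guideline, even across several switches.

PHY_LATENCY_US = 2.6      # per-link latency of 10GBASE-T, in microseconds
SWITCH_LATENCY_US = 5.0   # assumed per-switch forwarding latency (hypothetical)
HOPS = 3                  # assumed worst case: three switches in the path

# Each hop adds one switch traversal plus a PHY link on each side.
total_us = HOPS * (SWITCH_LATENCY_US + 2 * PHY_LATENCY_US)
print(f"worst-case one-way latency: {total_us:.1f} us")  # 30.6 us, vs. the 5000 us limit
```

So even a deliberately pessimistic path comes in around 30 µs, more than two orders of magnitude under the 5 ms guideline.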
The question for me is: will there be any difference in real-world performance at all, or will both perform the same? (This is for VDI/high-performance virtualization.)
The next question is about using a switch for the sync channels vs. a direct connection between all three nodes.
Each node has 2 NICs, so with direct connections there will be a 10 GbE link between every pair of nodes (node 1 gets a direct 10 GbE connection to both node 2 and node 3).
That should work, but with (redundant, so 2) switches I could instead give each node one 10 GbE connection to each switch. In theory that looks like the far better choice: 2x10 GbE into the switches should offer much better performance than a single 10 GbE link to each node, right? And second, when one node fails, the other two nodes still have 2x10 GbE for sync, while with direct connections only a single 10 GbE sync link would remain after a node failure. (By the way, with all three nodes up, the only way to get from node A to node B without using the direct link is through node C, but can the adapters even do that kind of forwarding? Using a switch should be much better, right?)
Last questions: the sync channels should have dedicated NICs, I understand that, but what about the switches used for syncing? Is it fine to use the same 2 switches that carry the data/heartbeat channels and general network traffic, or would I need extra switches just for syncing?
I understand that I need 2 switches for redundancy and MPIO. So assuming the 2 switches have enough ports for the 3 nodes and the clients connecting to them, is that all I need in terms of switching for my SAN network?
Any tips on what to look for in a switch? I know jumbo frames are important, but my adapters even offer 11 or 12 KB frames instead of the usual 9 KB jumbo frames. Should I get a switch that supports those for optimal performance? Does the switch need to be fully managed, or is a smart switch okay? Any special iSCSI features to look for?
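As a side note on jumbo frames: once the gear is in place, the end-to-end path can be verified with a non-fragmenting ping. A minimal sketch of the arithmetic, assuming a 9000-byte MTU (the target IP in the comment is hypothetical):

```python
# Largest ICMP echo payload that fits a jumbo frame without fragmentation:
# MTU minus the IP header (20 bytes) minus the ICMP header (8 bytes).
MTU = 9000
IP_HEADER = 20
ICMP_HEADER = 8

payload = MTU - IP_HEADER - ICMP_HEADER
print(payload)  # 8972

# On a Windows host the path could then be tested with
# (10.0.0.2 being a hypothetical sync-partner IP):
#   ping -f -l 8972 10.0.0.2
# -f sets the Don't Fragment bit, so if any hop in the path
# has a smaller MTU, the ping fails instead of fragmenting.
```

If that ping succeeds on every sync and data path, every NIC and switch port in between is passing jumbo frames.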
So, to sum it up:
Is using two dual-port 10GBASE-T Intel X540 adapters per node, connected to 2 redundant 10GBASE-T switches, recommended in terms of performance?
Would there be any reason to prefer SFP+ or direct connections between the hosts?
CloudBuilder33