VLAN CONFIGURATION

dmansfield
Posts: 14
Joined: Sun Mar 14, 2010 8:34 pm

Sat Nov 13, 2010 4:15 pm

I would like to confirm that I have set up my StarWind servers correctly for iSCSI traffic to maximize performance.

Current configuration is:

3 hosts running VMware vSphere 4.1 (ESXi), each host having two 1Gb iSCSI connections: one to VLAN 1 on switch 1 and the other to VLAN 2 on switch 2.

2 stacked switches with 1Gb ports, currently set up so that all VLAN 1 traffic is on switch 1 and all VLAN 2 traffic is on switch 2.

2 StarWind servers running Windows Server 2008 R2 with the most recent general-release version of StarWind Enterprise. I am primarily running HA devices, and the HA sync channel is a direct connection over teamed 10Gb NICs. Each StarWind server has two 1Gb NICs for iSCSI: one connected to VLAN 1 on switch 1, the other to VLAN 2 on switch 2.

My questions are:

(1) Is my current iSCSI network set up correctly for the StarWind servers?
Or should StarWind Server 1 be running teamed NICs only to VLAN 1, and StarWind Server 2 teamed NICs only to VLAN 2?
Or should/can I have two sets of teamed NICs in each StarWind server, with one set going to VLAN 1 and the second going to VLAN 2?

(2) Should I add additional 1Gb NICs to each VMware host for iSCSI traffic, and if so, what is the recommended way to do so?

(3) Should the switches be set up so that VLAN traffic is split between switches for redundancy? For example, teamed iSCSI traffic from StarWind Server 1 goes to port X on switch 1 and port X on switch 2, with both ports assigned to the same VLAN.

(4) Should ports assigned to VLANs be tagged (802.1Q) or untagged? And should the NICs/teams on the servers that connect to a VLAN be tagged for that VLAN?

Thank you for any assistance
Constantin (staff)

Mon Nov 15, 2010 10:40 am

1. Both HA nodes should have a NIC in each VLAN; otherwise failover cannot be handled.
2. Dedicated NICs for iSCSI are the recommended solution, but with two VLANs you'll need two NICs per server.
3. For redundancy, of course you should do it!
4. Well, if all your hardware supports 802.1Q, enable it.
dmansfield
Posts: 14
Joined: Sun Mar 14, 2010 8:34 pm

Mon Nov 15, 2010 1:02 pm

As a follow-up to the answer to question 1 ("Both HA nodes should have a NIC in each VLAN"): can I have teamed NICs to each VLAN?
Constantin (staff)

Mon Nov 15, 2010 3:12 pm

You cannot create a team with NIC1 in VLAN 1 and NIC2 in VLAN 2. Sorry :)
hixont
Posts: 25
Joined: Fri Jun 25, 2010 9:12 pm

Mon Nov 15, 2010 6:06 pm

My StarWind/virtual host farm configuration has two NICs per virtual host dedicated to iSCSI and two NICs on each of my HA StarWind SAN servers.

NIC1 on each of the hosts and SANs is in VLAN 1. All of the virtual hosts have load balancing/failover configured for both StarWind servers on VLAN 1 via the iSCSI initiator.

NIC2 on all of the servers is in VLAN 2, again load balanced/fault tolerant via the iSCSI initiator.

The only NIC teaming I have employed is on the StarWind SAN synchronization channel, which is a dedicated pair of 10Gb NICs directly connected to each other via copper.

In this configuration I can lose a SAN server, a switch, or a NIC without any loss of performance, connectivity, or data.
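
A quick way to sanity-check that claim is to enumerate single-component failures against the path map. Here is a minimal Python sketch (hypothetical names; it assumes two paths per host with each SAN reachable on both VLANs - a model only, not anything StarWind ships):

Code:
from itertools import product

# Model of the fabric described above: each host has NIC1 on switch1/VLAN1
# and NIC2 on switch2/VLAN2; each SAN has a NIC in both VLANs, so every
# host NIC can reach either SAN over its own switch.
HOSTS = ["host1", "host2", "host3"]
SANS = ["san1", "san2"]
SWITCHES = ["switch1", "switch2"]

def paths(host):
    # Every path is (host NIC, switch, SAN server).
    return [(f"{host}-nic{i + 1}", SWITCHES[i], san)
            for i, san in product(range(2), SANS)]

def survives(failed):
    # True if every host still reaches at least one SAN with 'failed' down.
    return all(any(failed not in p for p in paths(h)) for h in HOSTS)

components = SANS + SWITCHES + [f"{h}-nic{i}" for h in HOSTS for i in (1, 2)]
for c in components:
    print(f"lose {c}: {'OK' if survives(c) else 'OUTAGE'}")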
dmansfield
Posts: 14
Joined: Sun Mar 14, 2010 8:34 pm

Mon Nov 15, 2010 7:33 pm

hixont, thank you for your reply

Constantin, I am confused as to why you have the technical paper "StarWind iSCSI SAN Software: Implementing NIC Teaming on a StarWind Machine". When would NIC teaming be used?
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Tue Nov 16, 2010 1:19 pm

dmansfield wrote: hixont, thank you for your reply

Constantin, I am confused as to why you have the technical paper "StarWind iSCSI SAN Software: Implementing NIC Teaming on a StarWind Machine". When would NIC teaming be used?
AFAIK, the advice to team NICs was for the sync channel only, and I'm not sure if it still applies to StarWind 5.5.

For your iSCSI clients and the network between them and your StarWind servers, you should not use teaming, at least on Windows, as MS specifically recommends against it; use MPIO instead. Because StarWind 5.5 uses the MS iSCSI initiator for sync, I'm unsure about the teaming advice. Unfortunately, 5.5 doesn't yet support MPIO for sync, in which case teaming is necessary for redundancy, even though it goes against the MS recommendation.

Unless you intend to keep your StarWind boxes far apart, there's no need for a switch between them for the sync network.

You only need to worry about VLANs if you aren't going to have switches dedicated to iSCSI, or if you have another need that requires VLANs, like stretching multiple networks across multiple switches. Also, you don't need to worry about defining the VLAN at the NIC level (i.e. in the Windows / VMware NIC settings for that NIC) unless you are short of NICs and need to share them. So configuration of your VLANs can be an entirely switch-based affair, unless you have to run multiple VLANs on one NIC (which you might be doing to keep your VMs segregated from each other).

If you don't have dedicated iSCSI switches, then you should definitely put the iSCSI networks into their own VLAN(s). You could have a separate VLAN for each path (so if you have two switches, that means two NICs in the clients and two NICs in the StarWind boxes, plus x NICs for sync) or put them in the same VLAN - just don't share that VLAN with non-iSCSI traffic unless that other traffic is very lightweight. On the IP side, put your iSCSI stuff into different subnets, as sketched below.
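
To make the subnet side concrete, here is a small sketch using Python's ipaddress module (the addresses and the one-subnet-per-path layout are illustrative assumptions, not anything StarWind prescribes):

Code:
import ipaddress

# One subnet per iSCSI path/VLAN, both separate from the general LAN.
path_a = ipaddress.ip_network("10.0.101.0/24")   # e.g. VLAN 1 / switch 1
path_b = ipaddress.ip_network("10.0.102.0/24")   # e.g. VLAN 2 / switch 2
lan    = ipaddress.ip_network("192.168.0.0/24")  # everything else

# The paths must not overlap each other or the LAN, or traffic can end
# up on the wrong NIC via the routing table.
for a, b in [(path_a, path_b), (path_a, lan), (path_b, lan)]:
    assert not a.overlaps(b), f"{a} overlaps {b}"

print(list(path_a.hosts())[:2])  # first addresses, e.g. for the two SAN NICs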
Constantin (staff)

Tue Nov 16, 2010 5:17 pm

dmansfield, sorry, that was a misunderstanding on my side. :)
Aitor, thanks for the help! :)
dmansfield
Posts: 14
Joined: Sun Mar 14, 2010 8:34 pm

Fri Nov 19, 2010 5:02 pm

Aitor - Thank you for the reply; you are always a wealth of knowledge. As a follow-up to your post: I am using dedicated iSCSI switches. To clarify your comment, I don't need to use VLAN tagging at the StarWind and VM NICs but should establish VLANs at the switch? Or do I just rely on the IP addressing to take care of the separation, such as xxx.xxx.101.xxx for VLAN 1 and xxx.xxx.102.xxx for VLAN 2? Also, do you recommend not having a connection to the LAN switch from the iSCSI switch?

Anton - Can you comment on using teaming on the sync channel?
Constantin (staff)

Mon Nov 22, 2010 2:08 pm

We strongly recommend teaming the sync channel, since it's the only way to harden the sync channel.
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Mon Nov 22, 2010 5:26 pm

dmansfield wrote: As a follow-up to your post: I am using dedicated iSCSI switches. To clarify your comment, I don't need to use VLAN tagging at the StarWind and VM NICs but should establish VLANs at the switch? Or do I just rely on the IP addressing to take care of the separation, such as xxx.xxx.101.xxx for VLAN 1 and xxx.xxx.102.xxx for VLAN 2? Also, do you recommend not having a connection to the LAN switch from the iSCSI switch?
If you have dedicated switch(es) for iSCSI, you don't need VLANs at all. An example setup:

1 switch dedicated to iSCSI
x iSCSI initiators (clients) connected to the iSCSI switch; any other network connections they have go to other switches
2 StarWind servers connected to the iSCSI switch (and connected to each other using connections that don't go via the switch).

You give each network connection its own IP address for iSCSI. These should be in a separate subnet from your other networking. Imagine that you use 10.0.1.0/24 for iSCSI - you could make your first StarWind box 10.0.1.1 and the second 10.0.1.2, and your initiators could use IPs in the same subnet.

The sync network should be a different subnet - maybe 10.0.2.0/24.
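
A minimal check of that example plan with Python's ipaddress module (the 10.0.2.0/24 sync subnet is an assumed example):

Code:
import ipaddress

iscsi = ipaddress.ip_network("10.0.1.0/24")  # data path, via the iSCSI switch
sync  = ipaddress.ip_network("10.0.2.0/24")  # direct StarWind-to-StarWind link

starwind1 = ipaddress.ip_address("10.0.1.1")
starwind2 = ipaddress.ip_address("10.0.1.2")

# Both StarWind boxes and all initiators sit in the iSCSI subnet...
assert starwind1 in iscsi and starwind2 in iscsi
# ...while sync lives in its own subnet, so the two kinds of traffic
# never share a routing decision.
assert not iscsi.overlaps(sync)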

Now, assuming these are managed switches, you will need to give the switch an IP address in the 10.0.1.0 network or you won't be able to manage it. You will only be able to manage it from one of your StarWind boxes or from one of the iSCSI initiator machines - so no connection to your LAN switch. If that's not acceptable, you will need a more complicated setup with VLANs somewhere.

For instance, you could set up VLAN 10 on your LAN switch as your iSCSI VLAN. Make one port of your LAN switch an untagged member of that VLAN, make VLAN 10 the default VLAN for that port, and make sure that port is not a member of any other VLANs; this way the iSCSI switch still doesn't have to have VLANs configured. You will then have to make another port on the LAN switch a member of the same VLAN 10, but it might be a member of other VLANs too, and it might need to be a tagged member, depending on what you have connected to it - that port would be connected to a management server, or whatever needs to talk to your iSCSI switch and your iSCSI network via the LAN switch. Your iSCSI traffic would still be segregated, and you wouldn't have to worry about VLANs on the iSCSI switch.

It's a lot easier to do this than explain it clearly, sorry!
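
One way to see the intent is to model the port memberships. A hypothetical Python sketch (port names and VLAN numbers are invented, and real switch CLIs express this differently):

Code:
ISCSI_VLAN = 10

# LAN-switch ports as described above: one untagged (access) VLAN plus
# any tagged (trunk) memberships.
ports = {
    "uplink-to-iscsi-switch": {"untagged": ISCSI_VLAN, "tagged": set()},
    "management-server":      {"untagged": 1, "tagged": {ISCSI_VLAN}},
    "ordinary-lan-port":      {"untagged": 1, "tagged": set()},
}

def carries(name, vlan):
    cfg = ports[name]
    return cfg["untagged"] == vlan or vlan in cfg["tagged"]

# The uplink carries VLAN 10 untagged and nothing else, so the iSCSI
# switch never needs to understand 802.1Q tags.
assert carries("uplink-to-iscsi-switch", ISCSI_VLAN)
assert not ports["uplink-to-iscsi-switch"]["tagged"]

# The management server reaches the iSCSI network (tagged), while an
# ordinary LAN port stays out of it entirely.
assert carries("management-server", ISCSI_VLAN)
assert not carries("ordinary-lan-port", ISCSI_VLAN)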

As for teaming on the sync connection: in Windows, teaming is a function of the NIC driver, not Windows itself. So go with a well-supported supplier; I guess these days that means Intel or Broadcom. Thoroughly test the team with sync running over it, as there are quite a lot of different teaming options. You'll want the same NIC model and driver version on both nodes for sanity's sake!
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sat Dec 04, 2010 4:48 pm

VLANs are great. But having dedicated hardware and wiring for iSCSI traffic is a better idea. YES, iSCSI can co-exist with any other TCP traffic on the cable, but it's a poor man's solution really... A dedicated physical network is faster, safer, and more secure to manage and run, and not that expensive to implement (hey, we're not an FC shop!). Win-win solution. At least to me :-)
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Tue Dec 07, 2010 1:16 pm

There are some things that can only be done with VLANs. E.g., say you are connecting two physical locations together over just one connection (or two for diversity): you can run multiple, logically separate networks over that same connection using VLANs.

Performance: in my experience there has been no impact on performance, but that's going to come down to the switch performance and how many VLANs you have defined. At the Ethernet level, a VLAN tag means 4 bytes on every frame, which is almost nothing (only 12 bits of those 4 bytes are the VLAN ID). So the performance impact is more about the extra processing the switch has to do than the bandwidth consumed. Certainly on my switches, which have maybe 20 VLANs defined, I have no trouble getting wire-speed performance.
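
The arithmetic behind that claim, as a quick Python check using the standard 802.1Q numbers:

Code:
# An 802.1Q tag adds 4 bytes to every frame; 12 of those 32 bits are the
# VLAN ID, giving 4096 possible VLANs.
TAG_BYTES = 4
VID_BITS = 12
UNTAGGED_FRAME = 1518  # max standard Ethernet frame, headers + FCS included

overhead = TAG_BYTES / (UNTAGGED_FRAME + TAG_BYTES)
print(f"VLAN IDs available: {2 ** VID_BITS}")        # 4096
print(f"worst-case bandwidth cost: {overhead:.2%}")  # ~0.26% on full frames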

Security: there should be no difference between a switch divided up into VLANs and separate switches, but there is indeed a greater risk: a security bug in the firmware could allow one VLAN's packets to cross into another without you explicitly permitting it, and, more likely, you could make a mistake in configuration.

Reliability: here's where you start to get into the drawbacks. Let's say that a switch running 2 VLANs fails. You've got to replace it with an identical switch and restore the config from a backup. Or, if you can't get an identical switch, you have to configure your new switch in exactly the same way. If you had a dedicated switch for each network, you could swap a failed switch much more easily.

The bottom line: if your switch is a proper managed switch (e.g. pretty much every 10G-capable switch on the market), you shouldn't see a performance hit; the price, however, is a more complex setup. As always, make sure you understand what you are doing, test your configuration to confirm that it does what you expect, and keep a backup of the configuration as well as documentation.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Wed Dec 15, 2010 6:12 pm

I did not say VLANs are bad! Better VLANs than mixed traffic on the same physical and logical network. I just said it's better to have a dedicated wire to run iSCSI over (pretty much obvious...). But that's not always possible, unfortunately :(
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
