LAN setup with a 2-node Storage Cluster

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

Van Rue
Posts: 18
Joined: Fri Feb 05, 2016 6:04 pm

Tue May 24, 2016 7:38 pm

Pre-Installation questions:
Pre-installation questions:
I have two DL380s intended as VSAN storage nodes, both running Windows Server 2012 R2, with a total of 8x Broadcom 1 GbE NICs in each box.
Each server has one SAS RAID 10 array, one SATA RAID 10 array, and an attached SATA array in RAID 6.

I have two Layer 2 switches: a 48-port and an under-utilized 24-port (both with VLAN capabilities).

I plan to use Windows SMB and the built-in network teaming for client and internet access.
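For the client/internet side I'm thinking of something along these lines with the built-in LBFO teaming (the team and adapter names here are just placeholders, not my final config):

Code:

# Example only: team two of the 1 GbE ports for client/internet traffic (adapter names are placeholders)
New-NetLbfoTeam -Name "ClientTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic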

What is the best way to assign VLANs and the 8x LAN ports for StarWind VSAN in a 2-node storage cluster? Most of my I/O is very frequent but very small. Eventually I will have only 8 VMs, on a separate 2-node compute cluster.
PoSaP
Posts: 49
Joined: Mon Feb 29, 2016 10:42 am

Thu May 26, 2016 8:57 am

Generally, your synchronization channel bandwidth should be equal to the iSCSI channel (client connections) bandwidth.
A single 1 GbE link is not enough for the Sync/iSCSI channels, so you will need to dedicate several 1 GbE ports to them.
For the iSCSI channel you can use your two switches; for the synchronization channel, direct connections between the nodes would be better.
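For example, something like this on each node (the interface names and IPs below are only placeholders, adjust them to your own non-routed subnets):

Code:

# Example only: dedicated non-routed subnets for StarWind traffic, no default gateway on these links
# iSCSI link (goes through one of your switches)
New-NetIPAddress -InterfaceAlias "iSCSI1" -IPAddress 172.16.10.1 -PrefixLength 24
# Synchronization link (direct cable between the two nodes)
New-NetIPAddress -InterfaceAlias "Sync1" -IPAddress 172.16.20.1 -PrefixLength 24

On the second node you would use, e.g., 172.16.10.2 and 172.16.20.2 on the same subnets.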
A few documents that can help you:
https://www.starwindsoftware.com/techni ... tarted.pdf
If you have a Hyper-converged scenario:
https://www.starwindsoftware.com/techni ... manual.pdf
If you have a separated (compute and storage) scenario:
https://www.starwindsoftware.com/techni ... luster.pdf
Dmitry (staff)
Staff
Posts: 82
Joined: Fri Mar 18, 2016 11:46 am

Mon May 30, 2016 9:51 am

@PoSaP thank you for your answer.
@Van Rue, do you have any additional questions for us?
VladR
Posts: 13
Joined: Thu Aug 18, 2016 7:41 pm

Mon Aug 22, 2016 1:07 pm

I have opened a new thread here https://forums.starwindsoftware.com/vie ... f=5&t=4552 for this, so please disregard this post for answers.

Hello, I have a similar setup in mind as the OP, but I need some specific info on setting things up.
If I need to create my own thread, please let me know.

My current situation is this:

I have a Dell R730xd server: dual 6-core CPUs, 32 GB RAM.
Windows Server 2012 R2 with Hyper-V is set up on 2x 200 GB SSDs in RAID 1, plus 4x 2 TB HDDs in RAID 10.
The future plan is to add a second server in the same config to have an HA failover cluster.

What I am trying to do right now is build out a single-node Hyper-V cluster using the server above plus a StarWind NFR license, so that when the second server comes in
I can just configure it and add it to the cluster hot-plug style.

This is my first cluster build, let alone my first build with StarWind, so I am getting a bit confused about things like how to configure my network and whether I configured StarWind properly for a cluster setup.
What makes it even more frustrating is that I only have 4x 1 Gb NIC interfaces on the server, and everything I read says that is not enough, but adding 10 Gb hardware is not in the cards for the near future. We just had a hardware upgrade to 1 Gb a few months ago: we rewired the site with Cat 6e with a proper layout, updated all endpoints to Cat 6 RJ-45 jacks, replaced all switches with 1 Gb managed switches from Netgear, etc.

My questions are as follows:

#1. Provided I must stick with 1 Gb interfaces, how essential is it to double up the NICs per server (i.e., get the new server with 8x 1 Gb and add a second 4-port card to the current server)?

#2. What would be the proper way to configure the network on the server for the 2-node HA cluster setup I have planned?
Please give me a config scenario using IP addresses I can actually use:
my primary network is 192.168.1.0/24 and my first server's IP is 192.168.1.8/24.

NOTE: I have tried a config based on a how-to I found, but got a bit confused in the middle of it, hence I am here.
What I have so far: I created a team on NIC1.
Using PowerShell I configured a switch and 3 vNICs on the team.

Code:

## Create a new switch on the team network
New-VMSwitch -Name ConvergedHVSwitch -NetAdapterName HVTeam -AllowManagementOS $False -MinimumBandwidthMode Weight


### Create Virtual Network adaptors
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedHVSwitch"

Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedHVSwitch"

Add-VMNetworkAdapter -ManagementOS -Name "CSV" -SwitchName "ConvergedHVSwitch"


### Set vNIC weight
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40

Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 5

Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 5



New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 192.168.1.8 -PrefixLength "24" -DefaultGateway 192.168.1.2

What would be my next step with this config?
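My guess (based on the same how-to, so please correct me if this is wrong) is that the other two vNICs should get addresses on their own subnets, something like:

Code:

# My guess only, not verified: separate non-routed subnets for the other vNICs (IPs are placeholders)
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 192.168.10.8 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (CSV)" -IPAddress 192.168.20.8 -PrefixLength 24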

#3. How do I configure the remaining NICs to be used by StarWind and the Hyper-V cluster?


My short-term goal is to have a single-node Hyper-V cluster up and running with StarWind vSAN as soon as possible;
this way I can push for the second server to be ordered sooner rather than later.

My long-term goal is to have the setup above running ASAP and to add the second server to the mix within a month, two at most.

Thanks
Michael (staff)
Staff
Posts: 317
Joined: Thu Jul 21, 2016 10:16 am

Fri Aug 26, 2016 6:51 pm

Dear community,
This thread was moved to https://forums.starwindsoftware.com/vie ... f=5&t=4552