Home/lab network build

Software-based VM-centric and flash-friendly VM storage + free version


solian
Posts: 3
Joined: Fri Jan 05, 2018 2:55 am

Fri Jan 05, 2018 8:32 pm

Hello,
We've been doing some decommissioning and clean-up at work and I've scavenged some CPU, RAM, and storage from the "get rid of it" pile. I bought a few things new and also some used parts off eBay to round out my gear and come up with 2 identical servers.

The cluster will support my home network needs as well as a home lab to try out new things and grow my knowledge. I travel from time to time, and when there is a problem with my home network my wife and kids are very upset with me that they have no internet (and now TV, since I cut the cord and built my own DVR solution).

Having HA at home has been a long-time dream of mine, so with the equipment below and stumbling across StarWind I think I can make it a reality.

I'll be using vSphere 6.0 for now as I have access to a full license. Over the last couple of days I've read the StarWind Best Practices Guide back to front, along with several of the other technical papers on hyper-converged two-node scenarios.

Hardware (everything is x2):
Intel Xeon E5-2650 (8c/16t @ 2.00 GHz)
Supermicro X9SRL-F
96GB DDR-1333 ECC RDIMM (16GBx6)
Intel i350-T4
Intel X520-DA2
HP P410 w/ 1GB FBWC
IBM M5015 w/ 512MB BBW
Cyberpower 1500VA UPS

Networking:
vSwitch0 vmnic0 onboard Intel 82574 port 1
vmkernel ESXi Management 192.168.1.20/24

vSwitch1 vmnic1 onboard Intel 82574 port 2
vm network VSA Heartbeat-1

vSwitch2 vmnic2 Intel X520-DA2 port 1
vm network VSA Sync-1 10.10.11.0/24
vm network VSA iSCSI-1 10.10.21.0/24
vmkernel iSCSI-1 10.10.11.10/24

vSwitch3 vmnic3 Intel X520-DA2 port 2
vm network VSA Sync-2 10.10.12.0/24
vm network VSA iSCSI-2 10.10.22.0/24
vmkernel iSCSI-2 10.10.12.10/24

vSwitch4 vmnic4 Intel i350-T4 port 1
vmkernel vMotion 10.10.1.1/24
vm network VSA Heartbeat-2

vSwitch5-7 vmnic5-7 Intel i350-T4 port 2-4
vm network guest network1-3 10.10.101-103.0/24
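In case it helps to see the same thing in script form, here is a minimal pyVmomi sketch of one leg of that layout (the host address, credentials, and exact port group names are placeholders, not my real values):

Code:

# Minimal pyVmomi sketch: build vSwitch2 on vmnic2 with the VSA Sync-1 /
# VSA iSCSI-1 VM port groups and a vmkernel port at 10.10.11.10/24,
# matching the layout above. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-01.lab.local", user="root", pwd="password", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
net = host.configManager.networkSystem

# vSwitch2 bound to the first X520 port
vss = vim.host.VirtualSwitch.Specification()
vss.numPorts = 128
vss.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2"])
net.AddVirtualSwitch(vswitchName="vSwitch2", spec=vss)

# VM port groups for the VSA traffic
for name in ("VSA Sync-1", "VSA iSCSI-1"):
    pg = vim.host.PortGroup.Specification(
        name=name, vlanId=0, vswitchName="vSwitch2", policy=vim.host.NetworkPolicy())
    net.AddPortGroup(portgrp=pg)

# Port group plus vmkernel adapter for the host's own iSCSI traffic
pg = vim.host.PortGroup.Specification(
    name="iSCSI-1", vlanId=0, vswitchName="vSwitch2", policy=vim.host.NetworkPolicy())
net.AddPortGroup(portgrp=pg)
vnic = vim.host.VirtualNic.Specification()
vnic.ip = vim.host.IpConfig(dhcp=False, ipAddress="10.10.11.10", subnetMask="255.255.255.0")
net.AddVirtualNic(portgroup="iSCSI-1", nic=vnic)

Disconnect(si)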

Storage:
ESXi boot from USB
HP P410
4 x Micron M500 120GB SSD in RAID10
IBM M5015
2 x Intel DC S3500 300GB SSD in RAID1
2 x Seagate Constellation.2 1TB HDD in RAID1
4 x Hitachi 2TB HDD in RAID10

The HP P410 would be local storage for the host to hold a Windows 2012R2 DC, a CyberPower UPS management appliance, and a StarWind VSA on Windows 2012R2.

The IBM M5015 would be PCI passthrough to the VSA. The ~1TB array would be configured as an LSFS virtual disk for guest OS vmdks, and the ~4TB array would be a standard flat disk for VMs that need extra storage. The S3500s would be used as L2 cache for the LSFS disk, set to write-back, since the S3500 has capacitor-backed power-loss protection.

The VMs on local storage would be the aforementioned ones that don't need the benefit of HA or vMotion. VMs on shared storage for HA would be a vCenter appliance, a Sophos UTM firewall, a Sophos XG firewall, various servers for my DIY whole-home DVR setup, a MySQL server for my Kodi clients, an OpenMediaVault NAS for basic file sharing duties, and some DMZ servers for a few things I host for friends and family like Minecraft, Teamspeak, etc.

Everything will be backed up on a separate bare-metal FreeNAS server and then to some cloud service to be determined.

For anyone who actually took the time to read all of that, what are your thoughts? My main area of uncertainty is the best use of the 10G links as it relates to iSCSI, VSA Sync, and vMotion. Also, being a FreeNAS person I automatically default to passthrough for storage hardware. Is there a compelling reason one way or the other? I do like the idea of being able to use the normal LSI MegaRAID tools from the VSA VM.

I'm very open to alternate advice and can't wait to dig in and learn StarWind.
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Tue Jan 09, 2018 3:48 pm

solian wrote:
The IBM M5015 would be PCI Passthrough to the VSA. The ~1TB array would be configured as an LSFS virtual disk for guest OS vmdks and the ~4TB array would be a standard flat disk for VMs that need extra storage.
Be sure to read the requirements for LSFS. In fact, on 1 TB of underlying storage you will be able to run no more than about 330 GB of LSFS devices because of the 2.5-3x storage overhead LSFS requires.
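A quick back-of-the-envelope check of that figure (a sketch only, assuming the 2.5-3x ratio above):

Code:

# Rough LSFS sizing, assuming the 2.5-3x space overhead mentioned above.
underlying_gb = 1000  # ~1 TB RAID1 array
for overhead in (2.5, 3.0):
    print(f"{overhead}x overhead -> max LSFS device size ~{underlying_gb / overhead:.0f} GB")
# 3.0x overhead gives ~333 GB, i.e. the ~330 GB figure above.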
solian wrote:
The S3500s would be used as L2 cache for the LSFS disk set to write-back since the S3500 has capacitor backed power loss protection.
StarWind VSAN offers only Write-Through mode for L2 cache.

As for 10Gbit cards, you'd better use them for iSCSI/Sync. You can also enable vMotion on the iSCSI network.
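If you prefer scripting it, something like the following sketch tags an existing vmkernel port for vMotion (the host address, credentials, and the vmk1 device name are assumptions for your environment, not values from your post):

Code:

# pyVmomi sketch: enable vMotion on an existing vmkernel adapter (assumed vmk1).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi-01.lab.local", user="root", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

# Tag the vmkernel adapter used for iSCSI so it also carries vMotion traffic
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", "vmk1")

Disconnect(si)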

For storage hardware, you may stick to whichever option is suitable for you.
yaroslav (staff)
Staff
Posts: 2355
Joined: Mon Nov 18, 2019 11:11 am

Fri Jun 25, 2021 10:40 am

Could you kindly specify which article exactly?