Hi All,
I am researching creating a hyperconverged 2-node scenario with Hyper-V Server 2016 for my company.
I was looking at S2D (Storage Spaces Direct) for Windows Server 2016, but Datacenter pricing is just too much for us at the moment. Now I am looking into StarWind Virtual SAN. I am reading the following guide:
https://www.starwindsoftware.com/resour ... erver-2016
The hardware we have available for this is:
2x DL380 Gen9
Each server has:
1x Xeon processor
128GB of memory
1x 240GB bootable NVMe SSD, which will be used for the OS
5x 240GB HPE enterprise SSDs on a Smart Array P440ar
1x HP Ethernet 1Gb 4-port 331i adapter
1x HP Ethernet 1Gb 4-port 331T adapter
1x HP Ethernet 10Gb 2-port 546FLR-SFP+ adapter (would be directly connected, because I don't have a 10Gb switch)
Other servers
2x HP ProLiant Gen8: one is a backup domain controller, and one is a Hyper-V server hosting an old database server and the primary domain controller (which, after this project succeeds, should be moved to the hyperconverged cluster).
Network
1x HP Procurve 2510-48 switch
2x HP Procurve 2530-48G switches
The 2510-48 switch I only use for clients that couldn't be connected to the 2530 switches; it's an older switch. The server NICs are currently divided over the two 2530 switches.
Extra storage
2x Synology RS814+ (each with 4x 2TB WD Reds), currently in Synology HA with a 1Gb heartbeat and 3 bonded NICs. I followed an older version of this document from Synology: https://www.synology.com/en-global/know ... nology_NAS
Currently the servers connect over iSCSI to the Synology HA IP address, and the Rackstations host the file-share VHDXs and no virtual machines. The 3x 1Gb NICs are divided over the two 2530 switches.
After reading this article https://www.starwindsoftware.com/blog/s ... irtual-san I might want to incorporate this solution as well.
We use Altaro for the backups; it backs up to two ReadyNAS devices, one local and one at another location for offsite backups.
I am having trouble finding the best networking and storage solution.
I am thinking of the following solutions:
a)
Disregard the Synology HA and the NIC bonding, and connect each RS814+ directly to the server with its 4 NICs
assign 1x 10Gb port for sync (direct)
assign 1x 10Gb port for iSCSI / Live Migration (direct)
assign 1x 1Gb NIC for heartbeat, directly or through a switch
That leaves me with 3x 1Gb NICs, which I would use for switch embedded teaming (management, cluster, VMs, backups)
b)
Disregard the Synology HA and the NIC bonding, and connect each RS814+ directly to the server with its 4 NICs
assign 1x 10Gb port for sync (direct)
assign 1x 10Gb port for iSCSI / Live Migration (direct) and heartbeat
assign the remaining 4x 1Gb NICs to switch embedded teaming (management, cluster, VMs, backups)
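To keep the NIC accounting straight between the two options, here is a quick sanity check (the 8x 1Gb total comes from the two 4-port adapters listed above, and 4 of those ports go directly to the RS814+ in both options):

```python
# NIC budget per server: two 4-port 1Gb adapters (331i + 331T) = 8 ports.
total_1gb = 8
rs814_direct = 4  # ports cabled directly to the RS814+ in both options

# Option a: one 1Gb port is reserved for heartbeat, the rest go into the SET team.
option_a_team = total_1gb - rs814_direct - 1

# Option b: heartbeat rides the 10Gb link, so no 1Gb port is reserved.
option_b_team = total_1gb - rs814_direct

print(f"option a: {option_a_team} NICs for SET")  # option a: 3 NICs for SET
print(f"option b: {option_b_team} NICs for SET")  # option b: 4 NICs for SET
```

So option b buys one extra 1Gb team member at the cost of sharing the heartbeat with iSCSI/Live Migration traffic on the 10Gb link.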
What would be the better option considering VM and backup performance, or is there another option that would be better?
The storage solution now is:
RAID 6 for the 5 SSDs (I need the space, and RAID 10 only works with even disk counts, I think)
and the Rackstations are RAID 5 with data protection (I think it is Synology's proprietary RAID).
I don't want to lose too much storage space on the SSDs, but I also want them to be fault tolerant, hence the RAID 6 configuration now.
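To put numbers on that trade-off, a quick back-of-envelope sketch of usable capacity versus fault tolerance for the 5x 240GB SSDs (simple parity math, ignoring filesystem and controller overhead):

```python
def usable_capacity(n_disks, disk_gb, level):
    """Rough usable capacity (GB) and guaranteed disk-failure tolerance per RAID level."""
    if level == "raid5":
        return (n_disks - 1) * disk_gb, 1   # one disk's worth of parity
    if level == "raid6":
        return (n_disks - 2) * disk_gb, 2   # two disks' worth of parity
    if level == "raid10":
        if n_disks % 2:
            raise ValueError("RAID 10 needs an even number of disks")
        return (n_disks // 2) * disk_gb, 1  # mirrors halve capacity; worst case 1 failure
    raise ValueError(f"unknown level: {level}")

for level in ("raid5", "raid6"):
    cap, tol = usable_capacity(5, 240, level)
    print(f"{level}: {cap} GB usable, survives {tol} disk failure(s)")
# raid5: 960 GB usable, survives 1 disk failure(s)
# raid6: 720 GB usable, survives 2 disk failure(s)
```

So RAID 6 costs one extra disk of capacity versus RAID 5 but survives a second failure, and RAID 10 is indeed ruled out here by the odd disk count.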
Is it possible to keep the current RAID configuration for both the SSDs and the Synology, or would it be better to change it?
Is it possible to create two tiers, one SSD and one HDD (say, the Rackstations), or should it be another configuration?
What about the L1/L2 cache?
I would love to hear your input on this
Thanks in advance
Ilano