Researching StarWind Virtual SAN and have some questions

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

ilano1972
Posts: 2
Joined: Sat Aug 11, 2018 5:19 pm

Sat Aug 11, 2018 10:33 pm

Hi All,

I am researching a hyperconverged 2-node scenario with Hyper-V Server 2016 for my company.
I was looking at S2D for Windows Server 2016, but the Datacenter licensing is just too much for us at the moment. Now I am looking into StarWind Virtual SAN. I am reading the following guide:
https://www.starwindsoftware.com/resour ... erver-2016
The hardware we have available for this is:
2x DL380 Gen9
Each server has:
1x Xeon processor
128 GB of memory
1x 240 GB bootable NVMe SSD (to be used for the OS)
5x HPE Enterprise 240 GB SSDs on a Smart Array P440ar
1x HP Ethernet 1Gb 4-port 331i adapter
1x HP Ethernet 1Gb 4-port 331T adapter
1x HP Ethernet 10Gb 2-port 546FLR-SFP+ adapter (would be directly connected because I don't have a 10Gb switch)

Other servers
2x HP ProLiant Gen8: one is a backup domain controller and one is a Hyper-V server hosting an old database server and the primary domain controller (which, once this project succeeds, should be moved to the hyperconverged cluster).

Network
1x HP Procurve 2510-48 switch
2x HP Procurve 2530-48G switches

The 2510-48 is an older switch that I only use for clients that couldn't be connected to the 2530 switches. The server NICs are currently divided over the two 2530 switches.

Extra storage
2x Synology RS814+ (each with 4x 2 TB WD Reds), currently in Synology HA with a 1 Gb heartbeat and 3 bonded NICs. I followed an older version of this document from Synology: https://www.synology.com/en-global/know ... nology_NAS
Currently the servers connect over iSCSI to the Synology HA IP address; the rackstations host the file-share VHDXs and no virtual machines. The 3x 1 Gb NICs are divided over the two 2530 switches.

After reading this article https://www.starwindsoftware.com/blog/s ... irtual-san, I might want to incorporate that solution as well.

We use Altaro for backups; it backs up to two ReadyNAS devices, one local and one at another location for offsite backups.

I am having trouble finding the best networking and storage solution.

I am thinking of the following solutions:
a)
Disregard the Synology HA and the NIC bonding, and connect each RS814+ directly to a server with its 4 NICs.
Assign 1x 10 Gb port for Sync (direct).
Assign 1x 10 Gb port for iSCSI/Live Migration (direct).
Assign 1x 1 Gb NIC for heartbeat, directly or through a switch.

That leaves me with 3x 1 Gb NICs, which I would use for switch embedded teaming (management, cluster, VMs, backups).

b)
Disregard the Synology HA and the NIC bonding, and connect each RS814+ directly to a server with its 4 NICs.
Assign 1x 10 Gb port for Sync (direct).
Assign 1x 10 Gb port for iSCSI/Live Migration (direct) plus heartbeat.
Assign the 4x 1 Gb NICs to switch embedded teaming (management, cluster, VMs, backups).

Which would be the better option considering VM and backup performance, or is there another option that would be better?
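
To make the networking part concrete, here is roughly what I have in mind in PowerShell. The adapter names ("Ethernet 5", "NIC1", etc.) and the IP addresses are placeholders for my own values, and I haven't tested any of this yet:

# Direct-connected 10 Gb links (node 1 shown; node 2 would use the .2 addresses)
Rename-NetAdapter -Name "Ethernet 5" -NewName "Sync"        # placeholder adapter name
Rename-NetAdapter -Name "Ethernet 6" -NewName "iSCSI-LM"    # placeholder adapter name
New-NetIPAddress -InterfaceAlias "Sync" -IPAddress 172.16.10.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI-LM" -IPAddress 172.16.20.1 -PrefixLength 24

# Switch Embedded Teaming over the 1 Gb ports (three members for option a, four for option b)
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2","NIC3","NIC4" -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Extra management-OS vNICs on the SET switch for cluster and backup traffic
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "SETswitch"
Add-VMNetworkAdapter -ManagementOS -Name "Backup" -SwitchName "SETswitch"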

The storage configuration now is:
RAID 6 for the 5 SSDs (I need the space, and RAID 10 only works with an even number of drives, I think).
The rackstations are RAID 5 with data protection (I think that is Synology's own specific RAID type).

I don't want to lose too much storage space on the SSDs, but I also want them to be fault tolerant, hence the current RAID 6 configuration.

Is it possible to leave the RAID configuration as it is for both the SSDs and the Synology, or would it be better to change it?

Is it possible to create two tiers, one SSD and one rackstation, or should it be a different configuration?
What about the L1/L2 cache?

I would love to hear your input on this.

Thanks in advance
Ilano
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Tue Aug 14, 2018 10:10 am

Hello Ilano,
Thank you for your question. Here is what we came up with:
Both DL380 servers should be connected directly via the 10 Gb iSCSI and 10 Gb Sync ports.
You can go with option b. You can also put both switches into the cluster to avoid a single point of failure and achieve network redundancy.
As for the 2x Synology RS814+, try connecting them directly to the 2x DL380 servers over iSCSI, as described in the StarWind blog article you've already mentioned.
With such a configuration, you can create two separate storage pools for your SSDs and the Synology to get the most out of both storage types.
As for the cache settings, you can go with an L1 cache for the Synology drives and no cache for the SSDs. You can find more information about cache there.
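
If it helps, connecting the Synology directly through the Microsoft iSCSI initiator from PowerShell looks roughly like this; the portal address and the "synology" name filter below are just placeholders for your environment:

# Make sure the Microsoft iSCSI initiator service is running and starts automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the Synology data portal (placeholder address) and connect its target persistently
New-IscsiTargetPortal -TargetPortalAddress 172.16.30.10
Get-IscsiTarget | Where-Object { $_.NodeAddress -like "*synology*" } | Connect-IscsiTarget -IsPersistent $true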

Additionally, consider rebuilding the SSD array from RAID 6 to RAID 5, as this will free up one drive's worth of capacity and give you higher write performance. You will still have single-disk redundancy in the array, plus replication at the StarWind layer, which means you can tolerate an entire server failure.
ilano1972
Posts: 2
Joined: Sat Aug 11, 2018 5:19 pm

Tue Aug 14, 2018 5:56 pm

Hi Oleg,

Thank you for your answer. I have been reading about StarWind Virtual SAN almost every evening. When we bought the servers we went with the HP StoreVirtual solution, but it was not really reliable. As for the Synology, I always thought I could get more out of it; the article about the flaky performance of Synology HA was the first thing that got me to StarWind.
I will study your links and apply your suggestions to my plan. When done, I'll post it here, including my PowerShell script, so maybe someone else can benefit from it, and to see if I did it correctly, of course.

Regards,

Ilano
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Tue Aug 14, 2018 6:22 pm

Great, keep the community updated. Your experience may be valuable to other members.