Synology + StarWind + Hyper-V
Posted: Fri Oct 13, 2017 10:53 am
Hi,
We currently have a 3-node Hyper-V Failover Cluster. Each node is connected with a 1Gb LAN to the office network, and with a second 1Gb LAN to an isolated iSCSI network that reaches a Synology DS1813+ with 6x4TB drives in RAID10 and 2x256GB SSDs as SSD cache. This setup is working fine, and the performance is not bad, but I know it could be better.
Now I want to add a second Synology, same model and disks, to add storage redundancy in case of a hardware failure. I was almost convinced to create a Synology HA Cluster, although I don't like the idea of having a passive server sitting idle until something bad happens to the active one.
Then I discovered the new Microsoft Storage Spaces Direct, only to find out that every node would need a Datacenter license.
Finally I discovered StarWind Virtual SAN. Lots of good reviews. And I found this series of articles:
https://www.starwindsoftware.com/blog/s ... irtual-san
https://www.starwindsoftware.com/blog/s ... ile-system
https://www.starwindsoftware.com/blog/s ... r-duration
These articles are more or less what I want to do, but before investing the (fairly large) amount of time needed to rebuild our setup, I want to ask whether my idea is good enough, or whether I should change anything:
1. Set up both Synologys as they are now: 6x4TB HDD in RAID10 + 2x256GB SSD as SSD cache, with 2 block-level iSCSI LUNs, one for data and one for quorum.
2. Connect each of them to one of the nodes, using as many direct connections (no switches) as I can, 4 if possible. I'll need to buy a couple of quad-port 1GbE network cards.
3. Connect both storage nodes with a single 10GbE link. I'll need to buy a 10GbE network card and a direct cable between them; I don't know yet whether to go RJ45 or SFP+.
4. Enable iSCSI Multipath I/O, so I can reach a "theoretical" 8x1Gb connection between each node and the iSCSI data (see the quick calculation after this list). The Synologys aren't going to be able to serve data at that speed anyway.
5. Mount the StarWind HA device over iSCSI, using the loopback for the local connection and the 10GbE link for the remote node.
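For context, this is the back-of-envelope calculation behind points 2 and 4 (and behind my doubt below about 3 vs. 4 connections per Synology). The per-disk and overhead figures are just my own assumptions, so please correct me if they are way off:

Code:
# Rough numbers only - disk throughput and overhead figures are my assumptions.
GBE_USABLE_MB_S = 1000 / 8 * 0.9      # ~112 MB/s usable per 1GbE link after iSCSI/TCP overhead
HDD_SEQ_MB_S = 150                    # assumed sequential throughput of one 4TB SATA drive
RAID10_STRIPES = 3                    # 6 drives in RAID10 = 3 mirrored pairs (write side)
SSD_CACHE_MB_S = 400                  # assumed throughput of the 2x256GB SSD cache

array_mb_s = HDD_SEQ_MB_S * RAID10_STRIPES   # ~450 MB/s from the HDDs, best case

for paths in (3, 4):                  # the 3-NIC vs 4-NIC options I mention below
    wire_mb_s = GBE_USABLE_MB_S * paths
    print(f"{paths} x 1GbE: ~{wire_mb_s:.0f} MB/s on the wire, "
          f"array ~{array_mb_s:.0f} MB/s -> ceiling ~{min(wire_mb_s, array_mb_s):.0f} MB/s")

print(f"SSD cache hits (~{SSD_CACHE_MB_S} MB/s) still fit within 4 x 1GbE "
      f"(~{GBE_USABLE_MB_S * 4:.0f} MB/s)")

If those numbers are close to reality, with 3 paths the network would be the (slight) bottleneck, while 4 paths would roughly match what the array can deliver.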
My doubts are:
- Is it better to keep the Synology SSD cache, or to use the SSDs for something else, like StarWind L2 cache, or put them in some old workstation with regular HDDs?
- Do you think 4 connections between the Synologys and the nodes is too much? I think 3 connections, using one of the integrated NICs on the server plus a dual-port NIC, could be enough (and cheaper).
- Could it be better to mount the StarWind HA device over SMB3? I think this is also supported, but I may be wrong.
- Will I be able to use the third Hyper-V node if it has no iSCSI connection?
The Hyper-V Failover Cluster currently hosts about 10 VMs, used mostly for development, with about 4GB of required storage data: some MySQL, Subversion, a build server, web servers, a CRM, etc. I know I don't need a super-fast setup, but I want to make the best possible use of what I already have.
Thanks,
Enric