Win2019 Hyper-V Cluster on VSAN on iSCSI physical storage

Software-based VM-centric and flash-friendly VM storage + free version

mrtomasz
Posts: 6
Joined: Wed Jan 27, 2021 1:59 am

Wed Jan 27, 2021 3:13 am

Hello,

Currently, there is a Windows Server 2019 Hyper-V cluster that consists of 3 nodes connected to 2 storage appliances containing HDDs plus SSDs for cache; the appliances are exposed over iSCSI via 2x10GbE connections.

Moving forward, to optimize the current solution we'd like to use StarWind VSAN Free to build a 3-node VSAN cluster on top of 3 storage appliances (+1), which will serve clustered storage to the HV cluster.

Basically, we'd like to re-use the existing storage appliances so that they serve as physical storage for each VSAN node, instead of direct-attached disks in the VSAN nodes.
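
For illustration, this is roughly how we picture a VSAN node consuming an appliance LUN over iSCSI (stock Windows cmdlets only; the portal address is a placeholder):

# Run on a VSAN node; 10.0.10.21 stands in for an appliance portal IP
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic
New-IscsiTargetPortal -TargetPortalAddress "10.0.10.21"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true
# The appliance LUNs then show up as local disks for StarWind to consume
Get-Disk | Where-Object BusType -Eq "iSCSI"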

Notes:
  • there are 2 locations (different buildings at the same site) connected via Nx10GbE, so bandwidth and latency are not an issue
  • the network runs on redundant 10GbE connections
  • server NICs support RDMA and have full TCP/iSCSI offloads (a quick verification sketch follows this list)
  • each storage appliance has 10x8TB HDDs + 2x900GB SSDs (for caching purposes), exposes itself over iSCSI/SMB, and can do ODX
  • VSAN nodes would run Win2019 Std
  • HV nodes are running Win2019 Dcntr
  • VSAN cluster shall use synchronous replication
  • ultimate goal is performance, robustness, and high availability (especially during node/storage failures)
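
The verification sketch mentioned in the notes above (stock Windows cmdlets, nothing StarWind-specific; output names will differ per NIC vendor):

# Confirm RDMA capability and offload state on each node
Get-NetAdapterRdma | Format-Table Name, Enabled
Get-NetAdapterChecksumOffload
# Vendor-specific offloads can be inspected per adapter as well
Get-NetAdapterAdvancedProperty | Where-Object DisplayName -Match "Offload"
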
High-level overview architecture:
[architecture diagram attached]

Now, the questions:
  1. what should be the method of exposing clustered storage from VSAN to HV?
    • 1a) shall the VSAN cluster serve SOFS SMB3 clustered storage to the HV nodes, as per your comment "microsoft-to-microsoft"?
    • 1b) or shall the HV cluster be connected via old-school iSCSI to the VSAN cluster?
  2. is the VSAN Free license fully capable of serving this scenario (except for the GUI management panel)?
  3. could you confirm the network bandwidth requirements in this scenario (especially the inter-VSAN ones)?
  4. could you confirm network connections between VSAN nodes for HA/sync?
  5. is 32GB of RAM enough on VSAN nodes for this configuration (not planning to use LSFS)?
  6. what would be the best configuration of the physical storage appliances for exposing their storage to the VSAN nodes?
    • 6a) each HDD and SSD separately to VSAN
    • 6b) each HDD separately to VSAN, but leave the SSDs for caching on the storage side
    • 6c) RAID-10, single big LUN (40TB) to VSAN
    • 6d) RAID-10, multiple smaller LUNs to VSAN

Best Regards,
Tomasz
yaroslav (staff)
Staff
Posts: 2350
Joined: Mon Nov 18, 2019 11:11 am

Fri Jan 29, 2021 10:39 am

Hi,

Thank you for your question. First, I would recommend just putting an HA device on the drive intended for the SSD cache. The thing is that the L2 cache works in Write-Through mode and boosts only reads. To boost both reads and writes, create an HA device there. See how to remove the L2 cache at https://knowledgebase.starwindsoftware. ... dance/661/ and more on StarWind VSAN caching principles at https://knowledgebase.starwindsoftware. ... rinciples/.
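If scripting is more convenient than the Management Console, here is a minimal sketch using the StarWindX PowerShell module that ships with VSAN (host, port, and credentials are the defaults from our sample scripts; adjust them to your installation):

Import-Module StarWindX
$server = New-SWServer -host 127.0.0.1 -port 3261 -user root -password starwind
$server.Connect()
try {
    # enumerate devices so you can identify the L2 cache device before removing it
    foreach ($device in $server.Devices) { $device.Name }
}
finally {
    $server.Disconnect()
}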
what should be the method of exposing clustered storage from VSAN to HV?
1a) shall the VSAN cluster serve SOFS SMB3 clustered storage to the HV nodes, as per your comment "microsoft-to-microsoft"?
1b) or shall the HV cluster be connected via old-school iSCSI to the VSAN cluster?
SOFS introduces an additional virtualization layer that makes the system more complicated.
Just present StarWind VSAN HA devices as CSVs to the cluster. See more at https://www.starwindsoftware.com/resour ... rver-2016/
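On the Hyper-V side, once the StarWind targets are connected over iSCSI on every node, promoting an HA device to a CSV is a few lines (a sketch; the disk filter and the cluster disk name will differ in your setup):

# Run on one HV node after the StarWind targets are connected everywhere
$disk = Get-Disk | Where-Object { $_.BusType -eq "iSCSI" -and $_.PartitionStyle -eq "RAW" }
$disk | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize | Format-Volume -FileSystem NTFS
# Clusterize the disk and promote it to a Cluster Shared Volume
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"   # use the name Get-ClusterResource reports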
is the VSAN Free license fully capable of serving this scenario (except for the GUI management panel)?
Yes, it is. It provides a GUI for managing StarWind VSAN, which is pretty useful for 80% of tasks.
Please contact StarWind Support by filling in this form https://www.starwindsoftware.com/support-form for a demo. Use this forum thread and 449849 as a reference.
could you confirm the network bandwidth requirements in this scenario (especially the inter-VSAN ones)?
The network bandwidth seems fine as long as latency is <5 ms. See more in the StarWind system requirements https://www.starwindsoftware.com/system-requirements. Please make sure to connect the storage to all 3 compute nodes. You can use a switch for that. See how to connect the storage and compute boxes here: https://www.starwindsoftware.com/resour ... rver-2016/.
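For the redundant paths, make sure MPIO claims the iSCSI disks on every compute node, and it never hurts to sanity-check latency; a sketch (the VSAN node address is a placeholder):

# Enable MPIO and claim iSCSI-attached disks (a reboot is required afterwards)
Install-WindowsFeature -Name Multipath-IO
New-MSDSMSupportedHW -VendorId "MSFT2005" -ProductId "iSCSIBusType_0x9"
# Rough latency check toward a VSAN node; it should stay well under 5 ms
# (Windows PowerShell 5.1; in PowerShell 7 the property is called Latency)
Test-Connection -ComputerName "10.0.20.11" -Count 10 |
    Measure-Object -Property ResponseTime -Average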
could you confirm network connections between VSAN nodes for HA/sync?
You need at least 2x physical links between the locations. If you have those, that's great. Still, I would recommend direct links for sync.
is 32GB of RAM enough on VSAN nodes for this configuration (not planning to use LSFS)?
It purely depends on the LSFS storage capacity, but it may not be enough. See more at https://knowledgebase.starwindsoftware. ... scription/.
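As a back-of-envelope illustration only (every constant below is an assumption; verify the per-TB LSFS figure against the KB article above):

# RAM estimate sketch; all numbers are assumptions for illustration
$osAndServiceGB = 8               # assumed baseline for OS + StarWind service
$l1CacheGB      = 8               # whatever L1 cache you assign per node
$lsfsTB         = 0               # no LSFS planned in this setup
$lsfsRamGB      = 7.6 * $lsfsTB   # per-TB LSFS figure -- check the KB article
"Estimated RAM per node: {0} GB" -f ($osAndServiceGB + $l1CacheGB + $lsfsRamGB)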
what would be the best configuration of the physical storage appliances for exposing their storage to the VSAN nodes?
6a) each HDD and SSD separately to VSAN
6b) each HDD separately to VSAN, but leave the SSDs for caching on the storage side
6c) RAID-10, single big LUN (40TB) to VSAN
6d) RAID-10, multiple smaller LUNs to VSAN
Create RAIDs out of the SSDs and HDDs separately and put HA devices on top of each. Please note that we recommend creating HA devices of up to 6 TB each (if possible). I/O will go through the controller anyway, so one 40 TB LUN is still OK.
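If you go with several smaller LUNs, preparing them on the VSAN nodes for StarWind image files is a one-liner per node (a sketch; the exact volume layout is up to you):

# Run on each VSAN node: initialize and format the appliance LUNs
Get-Disk | Where-Object { $_.BusType -eq "iSCSI" -and $_.PartitionStyle -eq "RAW" } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS
# StarWind image files (and HA devices of up to 6 TB each) then live on these volumes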