2-node all-NVMe design

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

frufru
Posts: 19
Joined: Thu Aug 06, 2015 12:53 pm

Tue Aug 17, 2021 11:38 am

Hello guys,

it's time to upgrade our old 2-node cluster, and I'm looking for advice from a present-day perspective.

2x node HCI
Each node:
2x CPU AMD EPYC
4x 7.6TB NVMe PCIe Gen4 in RAID10
Direct connection without switches (Storage + Sync)
Hyper-V Cluster (200x VMs)
StarWind two-way synchronization

Q:
Which adapters should we use for the direct connection? 100Gb speed? Which chipset/vendor?
Two single-port cards, one dual-port card, or two dual-port cards?
iSCSI, iSER, or NVMe-oF?

Storage latency is very important for us.

Thanks for ideas, frufru
jdeshin
Posts: 63
Joined: Tue Sep 08, 2020 11:34 am

Wed Aug 18, 2021 3:39 am

Hello, frufru.
I think you can use the same network hardware as in the following example:
https://www.starwindsoftware.com/nvme-o ... me-cluster

Best regards,
Yury
frufru
Posts: 19
Joined: Thu Aug 06, 2015 12:53 pm

Wed Aug 18, 2021 10:44 am

Hello,

I picked a Mellanox ConnectX-6 dual-port 100Gb card. Is iSER the best option for production according to https://www.starwindsoftware.com/resour ... wind-iser/ ? Does it work with a direct cable connection between servers (Sync + Storage)?

What about NVMe-oF (https://www.starwindsoftware.com/starwi ... -of-target)? Does it work with a direct cable connection between servers (Sync + Storage)?

Best Regards,
frufru
yaroslav (staff)
Staff
Posts: 2333
Joined: Mon Nov 18, 2019 11:11 am

Wed Aug 18, 2021 11:08 am

Thanks for your help, Yury.

Welcome to the forum, frufru.
Please note that we cannot advise on hardware-related questions. Designing setups for end users' needs and tailoring the specs is only available to StarWind HCA users.
I would recommend redundant NICs, as they help to avoid split-brain: https://www.starwindsoftware.com/blog/w ... o-avoid-it.
In all setups, we use iSCSI; iSER is still an experimental feature, as is NVMe-oF. iSCSI does take away a good part of the performance by design, since the protocol simply was not built for storage as fast as NVMe. As a workaround, you can try assigning 4 loopback (127.0.0.1) and 4 partner connections per iSCSI link.
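As a rough illustration only (the target IQNs and IP addresses below are placeholders, not your actual values), adding those sessions with the built-in Microsoft iSCSI initiator cmdlets could look like this:

Code:
# Placeholder values - substitute your own iSCSI link IPs and StarWind target IQNs
$LocalIqn    = "iqn.2008-08.com.starwindsoftware:node1-csv1"   # local target (example)
$PartnerIqn  = "iqn.2008-08.com.starwindsoftware:node2-csv1"   # partner target (example)
$PartnerIp   = "172.16.10.2"                                   # partner's iSCSI link IP (example)
$InitiatorIp = "172.16.10.1"                                   # local iSCSI link IP (example)

# Register the loopback and partner portals once
New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1
New-IscsiTargetPortal -TargetPortalAddress $PartnerIp -InitiatorPortalAddress $InitiatorIp

# 4 loopback + 4 partner sessions, persistent and claimed by MPIO
1..4 | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $LocalIqn -TargetPortalAddress 127.0.0.1 `
        -InitiatorPortalAddress 127.0.0.1 -IsMultipathEnabled $true -IsPersistent $true
    Connect-IscsiTarget -NodeAddress $PartnerIqn -TargetPortalAddress $PartnerIp `
        -InitiatorPortalAddress $InitiatorIp -IsMultipathEnabled $true -IsPersistent $true
}

The exact session count and MPIO policy for your devices should follow the best practices guide linked below.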
Also, we do not recommend teaming or mixing iSCSI and sync. See more at https://www.starwindsoftware.com/best-p ... practices/
frufru
Posts: 19
Joined: Thu Aug 06, 2015 12:53 pm

Wed Aug 18, 2021 1:52 pm

Hello Yaroslav,

thanks for replies.

Which option would be better for us in our scenario?
1)
1x100Gb - direct connect for SYNC
1x100Gb - direct connect for iSCSI

2)
2x25Gb - direct connect for SYNC
2x25Gb - direct connect for iSCSI

I think 100Gb will be overkill with iSCSI, but several slower links might perform better and also give more redundancy.
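If we go with two links per role, I assume MPIO would spread the I/O across them. Roughly this is what I have in mind (just a sketch; the load-balancing policy would still need to be confirmed against your best practices):

Code:
# Let MPIO claim iSCSI disks and spread I/O across the links
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD   # Least Queue Depth, as an example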
yaroslav (staff)
Staff
Posts: 2333
Joined: Mon Nov 18, 2019 11:11 am

Thu Aug 19, 2021 7:54 am

By 2x25 GbE, do you mean 2 separate non-teamed links?
In terms of redundancy, this scenario is better.
Performance-wise, though, 100 GbE seems a better choice.

Please note, though, that a 25 GbE link may be sufficient if you connect NVMe disks over iSCSI. Please test this in your lab to avoid underutilizing the network bandwidth.
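As an example of such a test (not an official procedure; the file path, size, and I/O pattern below are arbitrary), you could run DiskSpd against a volume on the HA device with each network layout and compare latency and throughput:

Code:
# Example DiskSpd run (path, size, and pattern are arbitrary examples)
# 64K blocks, 70/30 read/write, random, 8 threads x 8 outstanding I/Os, 60 s, latency stats
diskspd.exe -b64K -d60 -t8 -o8 -r -w30 -Sh -L -c50G C:\ClusterStorage\Volume1\test.dat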