NIC speed for all-flash array

Software-based VM-centric and flash-friendly VM storage + free version


pmaa
Posts: 7
Joined: Wed Nov 28, 2018 11:34 am

Wed Nov 28, 2018 11:59 am

I'm speccing a 2-node vSphere hyperconverged setup with all-flash storage. The SSDs will probably be 6x HGST SS530 in RAID-5. The drives should be able to reach at least 200,000–300,000 IOPS each with a mixed workload.

What NIC speed should I be looking at for the iSCSI/heartbeat and synchronization interfaces?
-Will 40 Gb be a bottleneck? What about with redundant interfaces and MPIO?
-Should I look at 100 Gb for the synchronization channel, or both iSCSI and sync?

From the best practices guide I read that the synchronization channel should be twice the speed of the iSCSI channel. Is it better to go straight to 100 Gb for both, or to have 40 Gb for iSCSI and 100 Gb for synchronization? What would you recommend in this case?
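
As a rough sanity check, here is the back-of-envelope math behind the question (only a sketch; the 8 KiB average I/O size and the 250,000 IOPS midpoint are my assumptions, real mixed workloads will vary):

# Back-of-envelope: can six drives saturate a given NIC?
DRIVES = 6
IOPS_PER_DRIVE = 250_000        # midpoint of the 200,000-300,000 mixed-workload figure
IO_SIZE_BYTES = 8 * 1024        # assumed average I/O size

throughput_gbps = DRIVES * IOPS_PER_DRIVE * IO_SIZE_BYTES * 8 / 1e9
print(f"Aggregate SSD throughput: ~{throughput_gbps:.0f} Gbps")   # ~98 Gbps

for nic_gbps in (40, 50, 100):
    verdict = "OK" if nic_gbps >= throughput_gbps else "potential bottleneck"
    print(f"{nic_gbps} Gb link: {verdict}")

With those assumptions, a single 40 Gb link sits well below the raw capability of the six drives, which is what prompted the question.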
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Wed Nov 28, 2018 7:20 pm

100 Gbps NICs would be the better choice for both links, i.e. iSCSI and Sync; otherwise there is a chance of getting bottlenecked by the 40 Gbps links.
At the same time, you might want to look at 50/56 Gbps NICs as an intermediate option between the two you outlined.
pmaa
Posts: 7
Joined: Wed Nov 28, 2018 11:34 am

Wed Nov 28, 2018 8:55 pm

Thank you, I will probably go with the 100 Gb Mellanox adapters that Supermicro offers for their servers. What are your thoughts on link redundancy? I'm thinking of using redundant 10 Gb links for the client network and the vSphere management network. If I use redundant links for iSCSI and sync too, that's eight links in total: four 10 Gb and four 100 Gb. That seems like a lot, but I guess that's just how it is if redundancy is wanted at all levels and iSCSI/sync are not run over a single link. Perhaps I could use the same NIC pair for the VM network and the vMotion network, but I'll have to evaluate whether there will be enough bandwidth in all situations.
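
To make the port count concrete, here is a quick tally of the layout I'm describing (a sketch only; the labels are mine, not any actual configuration):

# Per-node link layout being considered (labels are illustrative)
links = {
    "client_vm":  {"count": 2, "speed_gb": 10},
    "management": {"count": 2, "speed_gb": 10},
    "iscsi":      {"count": 2, "speed_gb": 100},
    "sync":       {"count": 2, "speed_gb": 100},
}

total_ports = sum(l["count"] for l in links.values())
total_gb = sum(l["count"] * l["speed_gb"] for l in links.values())
print(f"{total_ports} ports per node, {total_gb} Gb aggregate")   # 8 ports per node, 440 Gb aggregate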
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Wed Nov 28, 2018 9:22 pm

Redundant links for the VM network and the management network are what I would do in a similar environment. You might also want to consider redundancy for iSCSI traffic at the level of physical NICs, but as far as the Sync link is concerned, you can operate with a single link for that purpose. If that link breaks, you only lose the paths from the server that goes into the non-synchronized state, while the hosts keep their paths to storage through the node that stays synchronized.
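
Purely as an illustration of that failure behaviour (a toy model, not StarWind or VMware logic; it just assumes each ESXi host has iSCSI paths to both storage nodes):

# Toy model: which nodes still serve paths after the sync link breaks
nodes = {"node-a": {"synchronized": True}, "node-b": {"synchronized": True}}

def serving_nodes(nodes):
    # only nodes in the synchronized state keep serving their iSCSI paths
    return [name for name, state in nodes.items() if state["synchronized"]]

print(serving_nodes(nodes))                 # ['node-a', 'node-b'] - all paths available

nodes["node-b"]["synchronized"] = False     # sync link failure: node-b goes non-synced
print(serving_nodes(nodes))                 # ['node-a'] - hosts still reach storage via node-a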
pmaa
Posts: 7
Joined: Wed Nov 28, 2018 11:34 am

Wed Nov 28, 2018 10:36 pm

Thanks, that's something to think about. If I'm getting at least three 100 Gb links I might as well go for four, since a dual-port card doesn't cost that much more than a single-port one. For the VM and vMotion networks I could use dual 10 Gb links and, in normal use, separate them onto different interfaces. Then again, a dual-port 10 Gb card isn't that much more either when considering the whole cost of the server. :lol:
pmaa
Posts: 7
Joined: Wed Nov 28, 2018 11:34 am

Thu Nov 29, 2018 8:23 am

I realized that the server chassis I'm considering only has one PCIe x16 slot, which the 100 Gb card requires. Can I set up multipathing so that in normal operation iSCSI and sync use the two 100 Gb interfaces, possibly overflowing onto a 40 Gb link, and then switch to the 40 Gb link completely if the 100 Gb card fails? Are there any downsides to doing it this way?
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Thu Nov 29, 2018 9:28 pm

To the best of my knowledge, you cannot do this with VMware. In this situation, I would say that one 100 Gbps link for iSCSI and one 100 Gbps link for Sync is most likely your choice.