Questions about the current HCI industry's highest performance: 26M IOPS

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

jdeshin
Posts: 63
Joined: Tue Sep 08, 2020 11:34 am

Sat Jul 24, 2021 4:22 pm

Dear Yaroslav, please check whether I understand this correctly.
In the normal case we have the following situation:
[Attachment: normal.jpg]
node1 and node2 have local connections (127.0.0.1) and a partner iSCSI connection.
nodeN does not host the array, therefore it is connected only via iSCSI.
When the sync switch goes down, StarWind assigns one node as "main" and disables the iSCSI target on the node that is not "main".
So, we have the following result:
[Attachment: link.jpg]
After the sync switch comes back up, we will need to resync the volume from the "main" node.
When the iSCSI switch goes down, we will lose all iSCSI communications except the local connections on node1 and node2. The CSV on nodeN becomes unavailable, and WSFC has to move the VMs to a node that has a local link to the storage.
[Attachment: iscsi.png]
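To make the third case concrete, here is a minimal Python sketch of how I picture the storage paths (the node names, link labels, and path lists are only illustrative assumptions, not taken from the real setup):

# Minimal model of which storage paths survive each switch failure.
# Node names, link labels and the path lists below are assumptions made
# only to illustrate the scenarios described above.

# Every path a node can use to reach the HA storage.
PATHS = {
    "node1": [("loopback", "local 127.0.0.1"), ("iscsi-switch", "partner")],
    "node2": [("loopback", "local 127.0.0.1"), ("iscsi-switch", "partner")],
    "nodeN": [("iscsi-switch", "node1"), ("iscsi-switch", "node2")],
}

def surviving_paths(failed_link):
    """Return the storage paths each node keeps when one link fails."""
    return {node: [p for p in paths if p[0] != failed_link]
            for node, paths in PATHS.items()}

# Sync switch down: all iSCSI paths stay up, but the mirror needs a resync later.
print("sync switch down: ", surviving_paths("sync-switch"))

# iSCSI switch down: node1/node2 keep only their local (127.0.0.1) paths,
# nodeN loses every path, so WSFC has to move its VMs to node1 or node2.
print("iscsi switch down:", surviving_paths("iscsi-switch"))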
Is it correct?

Best regards,
Yury
yaroslav (staff)
Staff
Posts: 2338
Joined: Mon Nov 18, 2019 11:11 am

Thu Jul 29, 2021 7:43 am

Hi,

I learned that there are actually 2x PCIe 3.0 x16 slots, each attached to its own NUMA node.
Slots 1 and 5 are PCIe 3.0 x16.
You need to have redundant switches for iSCSI. If a switch fails, another takes over.
In small deployments, where you have a point-to-point connection for Sync, you can use the Sync network for cluster-only communication. As I mentioned earlier, that should prevent a cluster split-brain. Alternatively, connect the Sync ports to both redundant switches, preferably using the same subnet for them, and set that cluster network for cluster-only communication.
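As a rough illustration only, here is a small Python sketch of the kind of network-to-role mapping I mean (the subnets, names, and roles are placeholders, not a tested configuration):

# Rough sketch of the network-to-role mapping described above; the subnet
# values and names are placeholders, not a tested or recommended setup.
networks = {
    "Sync":       {"subnet": "172.16.10.0/24",   # same subnet across both redundant switches
                   "cluster_role": "cluster-only"},
    "iSCSI":      {"subnet": "172.16.20.0/24",
                   "cluster_role": "none"},       # storage traffic only
    "Management": {"subnet": "192.168.0.0/24",
                   "cluster_role": "cluster-and-client"},
}

for name, cfg in networks.items():
    print(f"{name:<11} {cfg['subnet']:<16} -> {cfg['cluster_role']}")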
[Attachment: MicrosoftTeams-image.png (server details)]
jdeshin
Posts: 63
Joined: Tue Sep 08, 2020 11:34 am

Sun Aug 01, 2021 3:43 pm

Hi Yaroslav,
So, one lane of PCIe 3.0 runs at 8 GT/s; an x8 link gives ~7.88 GB/s (~64 Gbit/s) and an x16 link gives ~15.8 GB/s (~128 Gbit/s). You used two dual-port 100 Gbit/s cards. Therefore, if you plug a dual-port card into an x16 connector, you can only get about 2x50 Gbit/s full duplex or 2x100 Gbit/s half duplex per card.
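To double-check that arithmetic, a quick Python sketch (raw per-direction bandwidth with PCIe 3.0 line coding only, ignoring other protocol overhead):

# Quick check of the arithmetic above: PCIe 3.0 runs at 8 GT/s per lane
# with 128b/130b line coding, compared against a dual-port 100 GbE NIC.
GT_PER_LANE = 8e9        # raw transfers per second per lane (1 bit per transfer)
ENCODING = 128 / 130     # PCIe 3.0 line-coding efficiency

def pcie3_gbit_per_direction(lanes):
    """Usable one-direction bandwidth of a PCIe 3.0 link, in Gbit/s."""
    return GT_PER_LANE * ENCODING * lanes / 1e9

print(f"x8 : {pcie3_gbit_per_direction(8):6.1f} Gbit/s (~{pcie3_gbit_per_direction(8)/8:.2f} GB/s)")
print(f"x16: {pcie3_gbit_per_direction(16):6.1f} Gbit/s (~{pcie3_gbit_per_direction(16)/8:.2f} GB/s)")
print("dual-port 100 GbE wants: 200.0 Gbit/s per direction")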
And my question was: why do you use 100 Gbit/s cards?
As far as I understand, you didn't have any network redundancy in the test. Is that true?

Best regards,
Yury
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Feb 27, 2024 10:39 am

1) You're absolutely correct: it's the PCIe bus that is the limiting factor. We use 100 GbE NICs just because we need to unify our BoMs and stick with the smallest possible number of varying hardware components. Software-only customers don't have this issue.

2) Nope, it's not what you should be pushing into production.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Feb 27, 2024 10:39 am

== thread closed ==
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
