StarWind vSAN: installation troubles
Posted: Fri Oct 06, 2017 3:19 pm
Two servers, connected via one 20 Gb InfiniBand link (10.100.100.4), one 10 Gb Ethernet link (172.16.9.4), and one 1 Gb Ethernet link (172.16.9.24).
Ideally, replication traffic will flow over the InfiniBand and 10 Gb Ethernet links, with heartbeat going over the gigabit link.
1x 256 GB NVMe drive for flash cache
1x 8 TB RAID 10 array (4 drives) behind an LSI 9271 with 1 GB of cache memory
Following the 2-node converged setup guide:
1. Managed to get 2 of the 4 volumes replicated; the larger volumes (3-5 TB) never replicated.
2. After re-configuring multiple times, the heartbeat networks (gigabit and 10 Gb) never work and always show a red X. I verified nothing else was using port 3260 (a quick way to re-check this is sketched right after this list).
3. Performance over a local loopback iSCSI connection shows decent read speed but horrible write speed.
4. Syncing an empty 2 TB volume seems to take forever: the software initially estimated 3 hours before I went to bed, and when I woke up this morning it still had not synced.
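A quick way to re-check point 2 outside of StarWind is plain TCP from Python, nothing vendor-specific: does anything already own port 3260 locally, and does the target answer on loopback at all? A minimal sketch:

import socket

# 1) Try to bind 3260 ourselves. If the StarWind service (or anything else)
#    already owns the port, this fails with "address already in use".
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    probe.bind(("0.0.0.0", 3260))
    print("nothing is listening on 3260 (is the StarWind target running?)")
except OSError as exc:
    print(f"port 3260 is already taken (expected while StarWind is running): {exc}")
finally:
    probe.close()

# 2) Can a plain TCP connect to the loopback portal complete?
try:
    with socket.create_connection(("127.0.0.1", 3260), timeout=5):
        print("loopback portal 127.0.0.1:3260 accepts connections")
except OSError as exc:
    print(f"loopback portal connect failed: {exc}")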
10/6 10:51:37.895 1a28 C[6b1], XPT_UP: T3.
10/6 10:51:40.981 20a4 iSERs: IND2Connector::Connect failed with c00000b5
10/6 10:51:40.981 20a4 iSERs: iSerDmSocket::Connect failed with c00000b5!
10/6 10:51:40.983 20a4 Sw: *** ConnectPortal: Unable to connect to the 172.16.9.6:3260 portal from the 0.0.0.0 interface.
10/6 10:51:40.983 20a4 HA: CNIXInitiator::MountTarget: unable to mount the target (1627)!
10/6 10:51:45.985 20a4 iSERs: Created QueuePair (recv 264, init 1056, cq 1320, group 0, affinity 0xf).
10/6 10:51:47.391 ea0 iSERs: IND2Connector::Connect failed with c00000b5
10/6 10:51:47.391 ea0 iSERs: iSerDmSocket::Connect failed with c00000b5!
10/6 10:51:47.393 ea0 Sw: *** ConnectPortal: Unable to connect to the 172.16.9.26:3260 portal from the 0.0.0.0 interface.
10/6 10:51:47.393 ea0 HA: CNIXInitiator::MountTarget: unable to mount the target (1627)!
10/6 10:51:52.394 ea0 iSERs: Created QueuePair (recv 264, init 1056, cq 1320, group 0, affinity 0xf).
10/6 10:53:16.209 1a28 iSERs: Created QueuePair (recv 264, init 1056, cq 1320, group 0, affinity 0xf).
10/6 10:53:16.210 1a28 iSERs: iSER: Accept: using Ird = 0, Ord = 16.
10/6 10:53:16.210 1a28 iSERs: IND2Connector::Accept failed with c000009a
10/6 10:53:16.210 1a28 iSERs: Socket::Accept failed with c000009a
10/6 10:53:16.210 1a28 Srv: Accepted iSER connection from 172.16.9.6:45057 to 172.16.9.4:3260. (Id = 0x6b2)
10/6 10:53:16.210 1a28 S[6b2]: Session (000002190AB9BB00)
10/6 10:53:16.210 1a28 C[6b2], FREE: Event - CONNECTED.
10/6 10:53:16.210 1a28 C[6b2], XPT_UP: T3.
10/6 10:53:21.715 20a4 iSERs: IND2Connector::Connect failed with c00000b5
10/6 10:53:21.715 20a4 iSERs: iSerDmSocket::Connect failed with c00000b5!
10/6 10:53:21.718 20a4 Sw: *** ConnectPortal: Unable to connect to the 172.16.9.6:3260 portal from the 0.0.0.0 interface.
10/6 10:53:21.718 20a4 HA: CNIXInitiator::MountTarget: unable to mount the target (1627)!
10/6 10:53:26.719 20a4 iSERs: Created QueuePair (recv 264, init 1056, cq 1320, group 0, affinity 0xf).
10/6 10:53:28.277 ea0 iSERs: IND2Connector::Connect failed with c00000b5
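If I am decoding the NTSTATUS values correctly, c00000b5 is STATUS_IO_TIMEOUT and c000009a is STATUS_INSUFFICIENT_RESOURCES, so the connects to the partner portals at 172.16.9.6 and 172.16.9.26 look like they are timing out rather than being refused. A throwaway Python sketch to see whether plain TCP even gets through on each path; the source-to-destination pairing below is my assumption about which NIC should carry which traffic:

import socket

CHECKS = [
    ("172.16.9.4",  "172.16.9.6"),    # local 10 GbE  -> partner portal from the log
    ("172.16.9.24", "172.16.9.26"),   # local 1 GbE   -> partner portal from the log
]

for src, dst in CHECKS:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(5)
    try:
        s.bind((src, 0))              # force the source address for this probe
        s.connect((dst, 3260))
        print(f"{src} -> {dst}:3260  OK")
    except OSError as exc:
        print(f"{src} -> {dst}:3260  FAILED: {exc}")
    finally:
        s.close()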
*** Update 1 ***
Created a 1 TB device with full provisioning and mounted it locally; the speeds are comparable, so it looks like thin provisioning with dedup is not a good choice if you care about write speed.
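For anyone who wants to repeat the comparison, here is a crude sequential-write sketch in Python. The test path is a placeholder for a volume mounted over the loopback iSCSI connection, it does not bypass the Windows file cache, and generating random data adds some CPU overhead, so treat the numbers as rough.

import os
import time

TEST_FILE = r"S:\throughput_test.bin"   # placeholder: volume backed by the StarWind device under test
BLOCK_SIZE = 1024 * 1024                # 1 MiB per write
BLOCK_COUNT = 2 * 1024                  # 2 GiB total

start = time.perf_counter()
with open(TEST_FILE, "wb", buffering=0) as f:
    for _ in range(BLOCK_COUNT):
        # fresh random data for every block so dedup cannot collapse identical blocks
        f.write(os.urandom(BLOCK_SIZE))
    os.fsync(f.fileno())                # push whatever is still in the OS cache to the target
elapsed = time.perf_counter() - start

total_mib = BLOCK_SIZE * BLOCK_COUNT / (1024 * 1024)
print(f"{total_mib:.0f} MiB written in {elapsed:.1f}s -> {total_mib / elapsed:.1f} MiB/s")
os.remove(TEST_FILE)

Run it once against a volume on the full-provisioned device and once against the thin+dedup device to get a like-for-like comparison.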