Thu Jun 25, 2015 3:41 pm
First let me start by saying this is my first experience with clustering, CSVs, vSANs, StarWind and iSCSI so if I am missing something obvious, please forgive my ignorance.
I am testing out StarWind v8 build 8116. I have two brand-new, identical Supermicro servers, each with the following configuration:
• Supermicro X9DRH-iF Motherboard
• Dual Intel Xeon E5-2650 v2 Processors
• 256GB RAM
• 16x Seagate ST600MM0026 SAS3 10K 600GB Hard Drives (Main Array)
• 2x Intel Pro 2500 SSD (for Hyper-V OS and some SSD L2 Cache)
• Direct Connect Backplane
• 2x LSI 9207-8i HBA
• 1x Intel X710DA2 10GbE Dual Port Network Adapter for Storage Network
• 2x Intel i350T Dual Port 1GbE Network Adapters (1 integrated, 1 option card) for Production and Management Network
• StarWind vSAN Standard
I have the 2x SSDs rear-mounted and mirrored, used for the W2K12R2 OS and for StarWind L2 cache.
I have the 16x SAS drives in a single Storage Pool, and I have created two Windows Storage Spaces virtual disks of about 2TB each, called CSV1 and CSV2.
I have direct-connect Twinax patch cables between the two servers, one for each 10GbE port.
I set up VLANs on each 10GbE port as follows:
• Server 1, Port 1 - SYNC1vLAN at 172.21.110.1 - Connects to Server 2, Port 1, IP 172.21.110.2
• Server 1, Port 1 - iSCSI1vLAN at 172.21.210.1 - Connects to Server 2, Port 1, IP 172.21.210.2
• Server 1, Port 2 - SYNC2vLAN at 172.21.111.1 - Connects to Server 2, Port 2, IP 172.21.111.2
• Server 1, Port 2 - iSCSI2vLAN at 172.21.211.1 - Connects to Server 2, Port 2, IP 172.21.211.2
I ran iperf on all 4 IP pairs on the 10GbE cards and got 9.7 Gb/s on Port 1 and 9.37 Gb/s on Port 2.
I then set up StarWind on Server 1 with a 1GB virtual hard drive for the Witness device and a 2TB virtual hard drive for the CSV1 device.
After setting up the 2TB virtual hard drive, I launched Replication Manager and added a replica of CSV1 on Server 2, specifying the VLAN IP addresses. This is where I noticed the first issue: it took 5 hours and 40 minutes to create the replica.
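For reference, assuming the full 2TB was copied across the sync links, that replication time works out to well under 1 Gbit/s, on links that benchmarked at nearly 10 Gbit/s each:

```python
# Rough replication-throughput check (assumes the full 2 TB was copied).
size_bytes = 2e12                  # 2 TB virtual disk
elapsed_s = 5 * 3600 + 40 * 60     # 5 h 40 m = 20,400 s

mb_per_s = size_bytes / elapsed_s / 1e6
gbit_per_s = size_bytes * 8 / elapsed_s / 1e9

print(f"{mb_per_s:.0f} MB/s ({gbit_per_s:.2f} Gbit/s)")  # ~98 MB/s (0.78 Gbit/s)
```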
I ran IOMeter on the Server1 CSV (Storage Spaces) from Server 1 and got around 5K IOPS at 100ms with 32 threads, 16 Outstanding I/Os, running tests to simulate SQL workload.
I ran IOMeter on the Server2 CSV (Storage Spaces) from Server 2 and got around 4.9K IOPS at 109ms with 32 threads, 16 Outstanding I/Os, running tests to simulate SQL workload.
Using StarWind, I set up a 75GB RAMDisk device on Server 1 and then ran IOMeter on Server 1 targeting the RAMDisk. I got about 125K IOPS at 4ms with the same tests and thread counts used above.
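As a sanity check on those numbers: IOPS, latency and concurrency are tied together by Little's Law (I/Os in flight = IOPS × latency), so with 32 threads × 16 outstanding I/Os the ceiling at a given average latency can be estimated:

```python
def littles_law_iops(threads, outstanding, latency_s):
    """Max achievable IOPS for a fixed concurrency and average latency."""
    return threads * outstanding / latency_s

# 32 threads x 16 outstanding I/Os = 512 in flight
print(littles_law_iops(32, 16, 0.100))  # ~5.1K IOPS, matching the local Storage Spaces test
print(littles_law_iops(32, 16, 0.004))  # ~128K IOPS, matching the local RAMDisk test
```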
I then used the Windows iSCSI Initiator on Server 2 to connect to the StarWind RAMDisk on Server 1, specifying the Storage LAN IP addresses in the Discovery Portal area of the Discovery tab. I ran IOMeter from Server 2 against the Server 1 RAMDisk with the same threads and tests as above, but this time I only got about 50K IOPS at 200ms. Watching Resource Monitor on Server 2, I saw traffic only on iSCSI1vLAN, and well below 50% utilization.
So, I seem to be having a problem getting the Intel X710DA2 10GbE adapters to use multipath with iSCSI and deliver the performance I am expecting. I am not sure whether this is an issue with StarWind or with the Intel 10GbE adapters.
I need help verifying that I have iSCSI set up properly for the Intel adapters so that both paths are used.
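In case it helps to diagnose, my understanding is that the initiator side (Server 2) should have MPIO enabled and one session connected per path, roughly like the sketch below; the IQN is a placeholder for my actual StarWind target name, so please correct me if I have any of this wrong:

```powershell
# Enable the MPIO feature and let the Microsoft DSM claim iSCSI devices (reboot required)
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register both of Server 1's iSCSI portals, binding each to the matching local interface
New-IscsiTargetPortal -TargetPortalAddress 172.21.210.1 -InitiatorPortalAddress 172.21.210.2
New-IscsiTargetPortal -TargetPortalAddress 172.21.211.1 -InitiatorPortalAddress 172.21.211.2

# Connect one session per path to the same target (IQN below is a placeholder)
$iqn = "iqn.2008-08.com.starwindsoftware:server1-ramdisk"
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 172.21.210.1 `
    -InitiatorPortalAddress 172.21.210.2 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 172.21.211.1 `
    -InitiatorPortalAddress 172.21.211.2 -IsMultipathEnabled $true -IsPersistent $true

# Verify: this should show one MPIO disk with two paths
mpclaim -s -d
```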
Any suggestions or ideas would be greatly appreciated.
Thanks in advance,
Dave