Software-based VM-centric and flash-friendly VM storage + free version
-
ashmite
- Posts: 2
- Joined: Wed May 03, 2017 2:34 pm
Wed May 03, 2017 8:56 pm
I have two new SuperMicro servers with Virtual SAN Free loaded (v8, latest build). Both servers have 12x 8TB HGST 7.2K RPM SATA drives (the vendor sent SATA instead of SAS, which I didn't catch until after delivery).
They have 2x Intel x710 10 Gig fiber NICs for client access and sync, and 1x 1 Gig copper NIC for heartbeat.
Data sync is only running at about 300-400 Mbps across the 10 Gig link. A full resync is in progress and is projected to take 15 days.
I have tested the storage system and get 500 MB/s, and I have tested the 10 Gig cards (jumbo frames enabled) and got 2.5 Gbps with "small" test files of 5 GB.
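As a rough sanity check (the replicated capacity below is an assumed figure for illustration, not something measured on these boxes), here is how the observed sync rate maps to the projected resync time, and what the NIC-test rate would give instead:

```python
# Back-of-the-envelope resync estimate; replicated_tb is an assumption.
replicated_tb = 50                          # assumed data to resync, in TB
sync_rate_mbps = 350                        # observed sync rate, ~300-400 Mbps
sync_rate_mb_per_s = sync_rate_mbps / 8     # megabits/s -> megabytes/s

seconds = replicated_tb * 1_000_000 / sync_rate_mb_per_s
print(f"at {sync_rate_mbps} Mbps: ~{seconds / 86_400:.1f} days")   # ~13 days

# At the ~2.5 Gbps measured in the NIC test, the same resync would take:
seconds_fast = replicated_tb * 1_000_000 / (2_500 / 8)
print(f"at 2.5 Gbps: ~{seconds_fast / 86_400:.1f} days")           # ~1.9 days
```

So the observed rate is consistent with a roughly two-week resync at this scale, which points at the sync path rather than the projection itself.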
Any thoughts on where to start looking for the bottleneck? I did discover that I didn't configure the loopback adapter for iSCSI, but would this cause the problem? If so, how best to get that into the mix without knocking the storage offline? I already had a power incident on one node while the second was down for maintenance, which is what caused the full resync.
I'm also thinking about rehoming the VMs off of the front-end Hyper-V boxes to local storage and rebuilding the file server cluster. Thoughts?
Thx!!
--Joel
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
-
Thu May 04, 2017 7:19 pm
Start with the network. You should be getting wire speed with the X(L)710 (a crappy card, BTW; we pulled them from all of our production workloads here) without saturating the CPU. Don't move on to any further tests until you see 900-950 MB/sec.
P.S. If you can get Mellanox CX4 cards, they are a) rock solid and b) we know how to use RDMA on them (StarWind does iSER for the backbone).
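For a quick raw-TCP check of the replication link before anything else, iperf3 works, or a small generic script along these lines (just an illustration, not a StarWind tool; the port and transfer sizes are arbitrary):

```python
#!/usr/bin/env python3
"""Minimal raw-TCP throughput check for a point-to-point link.

Run with "server" on one node and "client <server-ip>" on the other;
a healthy 10 GbE path should report on the order of 900-950 MB/s
minus TCP overhead.
"""
import socket
import sys
import time

PORT = 5201                  # arbitrary test port (assumption)
BLOCK = 1024 * 1024          # 1 MiB per send/recv
TOTAL = 10 * 1024 * BLOCK    # move ~10 GiB per run

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received, start = 0, time.time()
            while received < TOTAL:
                chunk = conn.recv(BLOCK)
                if not chunk:
                    break
                received += len(chunk)
            secs = time.time() - start
            print(f"{received / secs / 1e6:.0f} MB/s from {addr[0]}")

def client(host: str) -> None:
    payload = b"\0" * BLOCK
    with socket.create_connection((host, PORT)) as conn:
        sent, start = 0, time.time()
        while sent < TOTAL:
            conn.sendall(payload)
            sent += BLOCK
        print(f"{sent / (time.time() - start) / 1e6:.0f} MB/s to {host}")

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: nettest.py server | client <server-ip>")
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```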
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software

-
ashmite
- Posts: 2
- Joined: Wed May 03, 2017 2:34 pm
Wed May 17, 2017 6:47 pm
Hi Anton,
I was able to resolve it. It turned out we did not get battery-backed cache for our RAID cards, which was causing miserable write rates. Adding the BBU cache and swapping the drives to SAS seems to have addressed the issue.
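For anyone hitting the same symptom: without the BBU the controller typically falls back to write-through, so every write has to reach the disks before it is acknowledged. A rough filesystem-level sketch of that penalty (not a controller benchmark; the file name and sizes are arbitrary):

```python
import os
import time

PATH = "bbu_test.bin"        # throwaway test file (arbitrary name)
BLOCK = b"\0" * (64 * 1024)  # 64 KiB writes
COUNT = 2_000                # ~128 MiB total

def write_test(flush_each: bool) -> float:
    """Write COUNT blocks and return the achieved rate in MB/s."""
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(BLOCK)
            if flush_each:           # ~write-through: wait for stable media
                f.flush()
                os.fsync(f.fileno())
    os.remove(PATH)
    return len(BLOCK) * COUNT / (time.time() - start) / 1e6

print(f"flush every write : {write_test(True):7.1f} MB/s")
print(f"buffered (cached) : {write_test(False):7.1f} MB/s")
```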
I created a 1 TB virtual drive as a test, and the sync between Nodes 1 and 2 is averaging 7.0 Gbps.
I'm pretty happy now!! Thx!!
-
KPS
- Posts: 51
- Joined: Thu Apr 13, 2017 7:37 am
Thu May 25, 2017 4:58 am
anton (staff) wrote:Start with the network. You should be getting wire speed with the X(L)710 (a crappy card, BTW; we pulled them from all of our production workloads here) without saturating the CPU. Don't move on to any further tests until you see 900-950 MB/sec.
P.S. If you can get Mellanox CX4 cards, they are a) rock solid and b) we know how to use RDMA on them (StarWind does iSER for the backbone).
Hi Anton!
I am getting a bit nervous now. Next week I need to install my first StarWind cluster, and the hardware is already sitting next to me, with one Intel XL710 in each node as the replication link. What kind of problems did you have? Should I be worried about the installation? I have ordered "Premium Support", so I will install it with your team's assistance, but I do not want to run into any issues...
Regards,
KPS
-
Ivan (staff)
- Staff
- Posts: 172
- Joined: Thu Mar 09, 2017 6:30 pm
Fri May 26, 2017 2:07 pm
Hello KPS,
I believe there will be no issues with the hardware and StarWind configuration. During the assisted installation, we check all the settings thoroughly to make sure the configuration follows StarWind best practices.
Please do not hesitate to contact StarWind support once the hardware is available, so we can do the installation.