rchisholm wrote: I just finished upgrading my 5.6 HA to the 5.7 Beta. Everything went smoothly. Testing is going well so far: HA write speeds are a little more than twice those of 5.6, with no other changes tested yet.
kmax wrote: Thanks for the comments. Can you expand on the performance improvements of 5.7? As an aside, 23TB of targets in my mind is incredible. Can you describe your disk subsystem and network interfaces on that?

rchisholm wrote: The sync for an individual target was only about 10-15% of a 10GbE link, and now it is 25-30%. It also seems to scale linearly: two targets writing simultaneously run at 50-60% of the sync channel.
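For context, those utilization figures translate to roughly the following raw throughput. This is a back-of-the-envelope sketch that assumes the full 10 Gbit/s line rate and ignores TCP/iSCSI protocol overhead, so real payload rates run somewhat lower:

```python
# Rough conversion of 10GbE link utilization to throughput (MB/s).
# Assumes the theoretical 10 Gbit/s line rate and ignores protocol
# framing overhead, so actual payload numbers run a bit lower.

LINK_GBPS = 10  # 10GbE line rate

def utilization_to_mb_per_s(percent):
    """Convert a link-utilization percentage to approximate MB/s."""
    return LINK_GBPS * 1000 / 8 * percent / 100  # Gbit/s -> MB/s

for label, pct in [("5.6, one target (10-15%)", 12.5),
                   ("5.7, one target (25-30%)", 27.5),
                   ("5.7, two targets (50-60%)", 55.0)]:
    print(f"{label}: ~{utilization_to_mb_per_s(pct):.0f} MB/s")
```

By that math, a single target went from roughly 155 MB/s to roughly 345 MB/s of sync traffic, consistent with the "a little more than twice the speed of 5.6" observation above.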
Each of the boxes has dual quad-core 2.4GHz 5600-series Xeons with 72GB of DDR3 RAM. The OS lives on a 500GB SAS RAID 1. The storage in each StarWind server is currently 48 Seagate Constellation 2TB SAS drives connected to two Areca 1880 24-port cards: 24 are configured as a RAID 60 on one controller, and there are two sets of 12-drive RAID 6s on the other. There are also two LSI 9280-8e controllers in each box, each with a 24-drive LSI 620J JBOD loaded with 4 Intel X25-E SSDs for CacheCade and 20 146GB 15K Seagate SAS drives. Each box has two HP dual-port 10GbE SFP+ NICs: two ports are for iSCSI, each connected to an HP E6600 24-port SFP+ switch, and the other two are for sync channels connected directly.
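As a sanity check on those drive counts, the usable-capacity arithmetic works out like this (a sketch assuming nominal 2TB drives and RAID 60 built as two 12-drive RAID 6 spans, which matches the layout described):

```python
# Usable-capacity math for the 7.2K arrays described above
# (a sketch; assumes nominal 2TB drives and RAID 60 built as
# two 12-drive RAID 6 spans, matching the stated drive counts).

DRIVE_TB = 2  # Seagate Constellation 2TB

def raid6_usable(drives, drive_tb=DRIVE_TB):
    """RAID 6 loses two drives' worth of capacity to parity."""
    return (drives - 2) * drive_tb

def raid60_usable(spans, drives_per_span, drive_tb=DRIVE_TB):
    """RAID 60 stripes several RAID 6 spans together."""
    return spans * raid6_usable(drives_per_span, drive_tb)

raid60 = raid60_usable(spans=2, drives_per_span=12)  # 24-drive RAID 60
raid6s = 2 * raid6_usable(12)                        # two 12-drive RAID 6 sets
print(f"RAID 60: {raid60} TB, RAID 6 sets: {raid6s} TB, "
      f"total: {raid60 + raid6s} TB")                # lands on ~80 TB
```

That total lines up with the roughly 80TB of 7.2K storage per box quoted below.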
I tested a bunch of different RAID configurations (RAID 5, 10, 6, and 60), but the limitation on speed so far is the iSCSI. Running the faster-writing RAID levels really hasn't made any difference, since the RAID 6 and 60 arrays are nowhere near being the bottlenecks of the systems right now. There is about 80TB of 7.2K RAID in each box and 4.5TB of 15K RAID in each box. I'm actually going to move the four 2U LSI JBODs to be DAS storage for the next build of our main SQL server, since I'm really not getting a noticeable performance difference between them and the much cheaper 7.2K drives inside the 9U Chenbro cases. The speed I'm getting with version 5.7 with the cheaper drives in RAID 6 and 60 is plenty for the VMs that these SANs will be used for.
anton (staff) wrote: You should see 80-90% link-speed saturation even on a single target under heavy load. Could you please run some tests / simulate load? Also, latency for 5.6 vs. 5.7 is of great interest.
That's a confirmation of my theory... cheap components with maximum redundancy.
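For anyone wanting to answer anton's request, a dedicated load generator is the right tool for simulating heavy load, but as a minimal sketch of the latency side, something like the following times synchronous 4K writes against a mounted volume. The drive letter and file name are hypothetical placeholders:

```python
# Minimal latency probe for a mounted iSCSI-backed volume (a rough
# sketch, not a real benchmark -- a dedicated load tool drives far
# more realistic concurrent I/O). The path below is hypothetical.
import os
import statistics
import time

PATH = "E:\\latency_probe.bin"  # hypothetical iSCSI-backed volume
BLOCK = os.urandom(4096)        # 4K writes, a common worst case
SAMPLES = 1000

latencies_ms = []
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | getattr(os, "O_BINARY", 0))
try:
    for _ in range(SAMPLES):
        start = time.perf_counter()
        os.write(fd, BLOCK)
        os.fsync(fd)            # force the write through the OS cache
        latencies_ms.append((time.perf_counter() - start) * 1000)
finally:
    os.close(fd)
    os.remove(PATH)

latencies_ms.sort()
print(f"median: {statistics.median(latencies_ms):.2f} ms, "
      f"p99: {latencies_ms[int(SAMPLES * 0.99)]:.2f} ms")
```

Running this against the same target on 5.6 and then 5.7 would give the before/after latency comparison anton is asking about.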