
Sluggishness in version 3.5.3 with image files

Posted: Tue Jan 13, 2009 4:27 am
by rpcblast
I have a simple SAN server set up with an LSI 84016E RAID controller, a Tyan dual-Opteron board with a single dual-core 2 GHz Opteron, 4 GB of RAM, running Server 2003 R2. I have a few targets set up on different arrays, using image files. I am lucky to get 10 MB/s on any of the targets, whether from a VM hosted on the SAN or from a separate iSCSI connection from a physical Windows box. I am setting all the targets to async mode (the default) and enabling every write-back feature I can find in the controller and on the drives. I am running two arrays: 4 x 150 GB 10k SATA Raptors in RAID 5, and 4 x 1 TB 7200 RPM SATA drives in RAID 5. The OS is on a separate drive connected to the board's built-in controller. What can be done to bring performance closer to where it should be?

I have tested the speed of the drives locally (not connected via iSCSI) and am getting much closer to what I should (150+ MB/s, 400 MB/s burst).

Posted: Tue Jan 13, 2009 4:56 pm
by anton (staff)
You should be getting wire speed (100+ MB/s). Start by applying the TCP optimizations (check the sticky topic on this forum). Then make sure you've put the image files on separate volumes (isolated from the system page file and the Windows root). Let us know the results at that point.
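
For reference, here is a minimal sketch (in Python, purely for illustration) of the kind of TCP registry tuning usually applied on Server 2003. The key names are the standard Tcpip parameters, but the numeric values below are placeholders made up for the example, not the figures from the sticky topic, so take the actual values from the sticky:

# Illustrative only: typical Windows Server 2003 TCP tuning registry values.
# The numeric values below are placeholders; use the settings from the forum sticky.
import winreg

TCP_PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

settings = {
    "Tcp1323Opts": 3,                      # enable RFC 1323 window scaling + timestamps
    "GlobalMaxTcpWindowSize": 0x01000000,  # example 16 MB cap (placeholder)
    "TcpWindowSize": 0x00100000,           # example 1 MB receive window (placeholder)
}

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, TCP_PARAMS, 0,
                    winreg.KEY_SET_VALUE) as key:
    for name, value in settings.items():
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)

# A reboot is needed before the new values take effect.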

Posted: Tue Jan 13, 2009 6:02 pm
by rpcblast
I have created separate RAID arrays for the image files (not 100% sure how my controller handles this on the back end, but that's where it's done). I am running pure gigabit (Cisco 3550 gigabit switch). When I was trying disk bridge I was actually getting decent performance, but I had other issues, and it is not flexible enough anyway for what I am trying to do. I am pretty sure it is not a network problem at this point, as I have verified I can get near-gigabit speeds out of the network connection.

Posted: Tue Jan 13, 2009 10:08 pm
by anton (staff)
Image files should work the same way disk bridge does. Did you use pre-allocated images (not sparse)?
rpcblast wrote:I have created separate RAID arrays for the image files (not 100% sure how my controller handles this on the back end, but that's where it's done). I am running pure gigabit (Cisco 3550 gigabit switch). When I was trying disk bridge I was actually getting decent performance, but I had other issues, and it is not flexible enough anyway for what I am trying to do. I am pretty sure it is not a network problem at this point, as I have verified I can get near-gigabit speeds out of the network connection.

Posted: Tue Jan 13, 2009 10:31 pm
by rpcblast
Yes, I use pre-allocated images.

Posted: Wed Jan 14, 2009 1:48 am
by rpcblast
I sort of got it figured out. The VLAN my VMware management interface sits in had a route map applied on my Cisco 3550, which was killing the switch whenever that massive I/O came across. I moved the management plane to another VLAN and the problem went away.
I am still not getting the performance I would like, but it is an improvement. I am getting 35ish mb/sec so far, bursting to 70.

Posted: Wed Jan 14, 2009 4:22 pm
by anton (staff)
You should get wire speed. Apply the TCP stack optimization settings and report your nttcp and iperf results here. Then we'll continue.
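
If it helps, here is a minimal sketch (again Python, for illustration only) of driving a raw TCP throughput check with iperf from the initiator side. The address, window size, and duration are placeholders, and it assumes iperf (v2) is installed with "iperf -s" already running on the SAN box:

# Hypothetical helper: run an iperf client against the SAN and print the result.
# Assumes iperf v2 is installed and "iperf -s" is already running on the target box.
import subprocess

SAN_IP = "192.168.10.10"   # placeholder address for the iSCSI interface

# -w sets the TCP window size, -t the test length in seconds (example values)
result = subprocess.run(
    ["iperf", "-c", SAN_IP, "-w", "256k", "-t", "30"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)

On a clean gigabit path you would expect a sustained figure somewhere around 940 Mbit/s; anything well below that points at the network rather than the disks.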
rpcblast wrote:I sort of got it figured out. The VLAN my VMware management interface sits in had a route map applied on my Cisco 3550, which was killing the switch whenever that massive I/O came across. I moved the management plane to another VLAN and the problem went away.
I am still not getting the performance I would like, but it is an improvement. I am getting 35ish mb/sec so far, bursting to 70.

Posted: Wed Jan 14, 2009 4:37 pm
by rpcblast
I should have written MB, not mb. I am getting about 30-50% of wire speed. I have applied some of the optimizations; however, my switch will only go to 2K frames, not 9K, and I have not applied that change yet. I do have all offloading enabled, and I have a separate VLAN set up just for iSCSI, in which both my ESXi servers and my file server have an interface for connecting to the iSCSI box.

I am using HD Tach for disk performance and the built-in Windows Task Manager for network utilization.

Posted: Wed Jan 14, 2009 8:20 pm
by anton (staff)
Well, with 2K frames you'll have big problems getting wire speed. Close to impossible :)
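
To put rough numbers on that, here is a back-of-the-envelope sketch (Python, illustration only) of the raw per-frame protocol efficiency at different MTUs, counting only Ethernet framing plus the IP and TCP headers:

# Rough wire efficiency of a full-size TCP segment at different MTUs.
# 38 bytes of Ethernet overhead = preamble + header + FCS + inter-frame gap;
# 20 bytes IP header + 20 bytes TCP header sit inside the MTU.
ETH_OVERHEAD = 38
IP_TCP_HEADERS = 20 + 20

for mtu in (1500, 2000, 9000):
    payload = mtu - IP_TCP_HEADERS
    wire_bytes = mtu + ETH_OVERHEAD
    print(f"MTU {mtu}: payload {payload} bytes, efficiency {payload / wire_bytes:.1%}")

# MTU 1500: ~94.9%   MTU 2000: ~96.2%   MTU 9000: ~99.1%

The raw byte overhead is not the whole story, though: at a 2K MTU the hosts and switch have to handle roughly 4.5 times as many frames per second as at 9K for the same throughput, and that per-packet CPU and interrupt load is usually what keeps iSCSI short of wire speed.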

The tools you are using are hardly suitable for performance measurements. Use Intel's Iometer instead of HD Tach.
rpcblast wrote:I should have written MB, not mb. I am getting about 30-50% of wire speed. I have applied some of the optimizations; however, my switch will only go to 2K frames, not 9K, and I have not applied that change yet. I do have all offloading enabled, and I have a separate VLAN set up just for iSCSI, in which both my ESXi servers and my file server have an interface for connecting to the iSCSI box.

I am using HD Tach for disk performance and the built-in Windows Task Manager for network utilization.

Posted: Thu Jan 15, 2009 8:36 pm
by rpcblast
Since most of the communication will be with one ESXi server, I am thinking about just connecting the iSCSI box and the ESXi box directly, without bothering with a switch. I would use the second interface on the ESXi box for everything else. Is there any reason this wouldn't or shouldn't be done?

Posted: Thu Jan 15, 2009 8:43 pm
by anton (staff)
Connecting with a crossover cable should work just fine!
rpcblast wrote:Since most of the communication will be with one ESXi server, I am thinking about just connecting the iSCSI box and the ESXi box directly, without bothering with a switch. I would use the second interface on the ESXi box for everything else. Is there any reason this wouldn't or shouldn't be done?