Sluggishness in version 3.5.3 with image files

Software-based, VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

rpcblast
Posts: 8
Joined: Tue Jan 13, 2009 4:16 am

Tue Jan 13, 2009 4:27 am

I have a simple SAN server set up with an LSI 84016E RAID controller, a Tyan dual-Opteron board with a single dual-core 2 GHz Opteron, 4 GB of RAM, running Server 2003 R2. I have a few targets set up on different arrays, using image files. I am lucky to get 10 MB/s on any target, whether from a VM hosted on the SAN or a separate iSCSI connection from a real Windows box. I am setting all the targets to async mode (the default) and enabling all the write-back features I can find in the controller and on the drives. I am running two arrays: 4 x 150 GB 10k SATA Raptors in RAID 5, and 4 x 1 TB 7200 RPM SATA drives in RAID 5. The OS is on a separate drive connected to the board's built-in controller. What can be done to bring performance closer to where it should be?

I have tested the speed of the drives locally (not connected via iSCSI) and am getting much closer to what I should: 150+ MB/s sustained, 400 MB/s burst.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Jan 13, 2009 4:56 pm

You should be getting wire speed (100+ MB/s). Start by applying the TCP optimizations (check the sticky topic on this forum). Then make sure you've put the images on separate partitions, isolated from the system page file and the Windows root. Let us know the results at that point.
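
For reference, a sketch of the kind of Server 2003 registry tweaks those TCP optimizations involve; the values below are illustrative assumptions, so use the exact settings from the sticky topic:

    Windows Registry Editor Version 5.00

    ; RFC 1323 window scaling + timestamps, SACK, and a larger
    ; TCP receive window (0x40000 = 256 KB); reboot to apply
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
    "Tcp1323Opts"=dword:00000003
    "SackOpts"=dword:00000001
    "TcpWindowSize"=dword:00040000
    "GlobalMaxTcpWindowSize"=dword:00040000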
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

rpcblast
Posts: 8
Joined: Tue Jan 13, 2009 4:16 am

Tue Jan 13, 2009 6:02 pm

I have created separate RAID arrays for the image files (I'm not 100% sure how my controller handles this on the back end, but that's where it's done). I am running pure gigabit (Cisco 3550 gigabit switch). When I was trying disk bridge I was actually getting decent performance, but I had other issues, and it is not flexible enough anyway for what I am trying to do. I am pretty sure it is not a network problem at this point, as I have verified I can get near-gigabit speeds out of the network connection.
User avatar
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Jan 13, 2009 10:08 pm

Image files should work the same way disk bridge does. Did you use pre-allocated images (not sparse)?
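
If in doubt, a quick way to check on the Windows side whether an existing image file was created sparse (the path is a made-up example):

    fsutil sparse queryflag D:\images\target1.img

If it reports the sparse flag is set, recreate the image pre-allocated.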
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

rpcblast
Posts: 8
Joined: Tue Jan 13, 2009 4:16 am

Tue Jan 13, 2009 10:31 pm

Yes, I use pre-allocated images.
rpcblast
Posts: 8
Joined: Tue Jan 13, 2009 4:16 am

Wed Jan 14, 2009 1:48 am

Sort of got it figured out: the VLAN that my VMware management interface is in had a route map applied in my Cisco 3550, which was causing the switch to get killed whenever that massive I/O came across. I moved the management plane to another VLAN and the problem went away.
I am still not getting the performance I would like, but it is an improvement. I am getting 35ish mb/sec so far, bursting to 70.
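
For anyone hitting the same thing, the fix amounted to roughly the following on the 3550 (interface numbers and the route-map name are made up for illustration):

    ! policy routing on this SVI was punting the heavy iSCSI traffic to the CPU
    interface Vlan10
     no ip policy route-map MGMT-PBR
    !
    ! management interface moved to its own VLAN, away from the storage path
    interface Vlan99
     description management
     ip address 192.168.99.1 255.255.255.0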
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Wed Jan 14, 2009 4:22 pm

You should be getting wire speed. Apply the TCP stack optimization settings and report your nttcp and iperf results here. Then we'll continue.
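
A minimal raw-TCP check with iperf looks like this (the address and window size are placeholder assumptions; adjust to your iSCSI VLAN):

    iperf -s -w 256k                       (on the StarWind box)
    iperf -c 192.168.10.5 -w 256k -t 30    (on the initiator)

Anything well below roughly 110 MB/s here means the bottleneck is the network path, not the disks or StarWind.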
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

rpcblast
Posts: 8
Joined: Tue Jan 13, 2009 4:16 am

Wed Jan 14, 2009 4:37 pm

I should have put MB, not mb. I am getting about 30-50% of wire speed. I have applied some of the optimizations; however, my switch will only go up to 2k frames, not 9k, so I have not applied that change yet. I do have all offloading enabled, and I have a separate VLAN set up just for iSCSI, in which both my ESXi servers and my file server have an interface for connecting to the iSCSI box.

I am using HD Tach for disk performance, and the built-in Windows Task Manager for network utilization.
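
For reference, offload state on the Server 2003 box can be confirmed from the command line (output varies by NIC and driver):

    netsh interface ip show offload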
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Wed Jan 14, 2009 8:20 pm

Well, with 2K frames you'll have big problems getting wire speed. Close to impossible :)
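
A don't-fragment ping is a quick sanity check of what frame size actually survives the path end to end (the address is a placeholder; the -l payload excludes 28 bytes of IP/ICMP headers):

    ping -f -l 1472 192.168.10.5    (fits a standard 1500-byte MTU)
    ping -f -l 8972 192.168.10.5    (9k jumbo; will fail on a 2k-frame path)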

The tools you are using are hardly suitable for performance measurements. Use Intel's Iometer instead of HD Tach.
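
Once you have built and saved an access specification in the Iometer GUI, the run can be repeated unattended from the command line (file names here are made-up examples):

    iometer /c 4k-seq-read.icf /r results.csv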
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

rpcblast
Posts: 8
Joined: Tue Jan 13, 2009 4:16 am

Thu Jan 15, 2009 8:36 pm

Since most of the communication will be with one ESXi server, I am thinking about just connecting the iSCSI box and the ESXi box directly, without bothering with a switch, and using the second interface on the ESXi box for everything else. Is there any reason this wouldn't or shouldn't be done?
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Thu Jan 15, 2009 8:43 pm

Connecting with a cross-over cable should work just fine!
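
A minimal sketch of the point-to-point addressing, assuming a dedicated subnet with no gateway (interface name and addresses are made up):

    netsh interface ip set address "iSCSI Link" static 10.10.10.1 255.255.255.0

On the ESXi side, give the VMkernel port used for iSCSI a matching address (e.g. 10.10.10.2/24) on the vSwitch whose uplink is the directly attached NIC.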
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
