LTO9 Slow Speed and performance settings

Tape drive and auto-loader redirector over iSCSI


Jimbob
Posts: 2
Joined: Thu Jan 11, 2024 11:28 am

Thu Jan 11, 2024 1:36 pm

Hi there

I have an LTO9 tape drive on an MS Hyper-V host running Tape Redirector 8.0.15159.0.
It's installed and configured as per your technical papers, connected to the Hyper-V VM (MS Server), and physically working OK. Initial impressions: a really great product, and I'm very pleased.

In actual usage testing, though, I've hit what seems to be a common issue: very slow write speeds from Tape Redirector over iSCSI to the tape.
After days of wasted time finding dribs and drabs of information, it seems a shame that you don't provide a more user-friendly interface, or more information for more modern equipment and devices.

The server host is an HPE ProLiant Gen10 with fast, dedicated 12G RAID drives and multiple 10Gb NICs: one for local network access and one dedicated to the iSCSI connection, on both the server host and the Windows Server VM.
The iSCSI connection runs dedicated 10Gb NIC to 10Gb NIC with no other network devices in between.

I see a maximum backup-to-tape transfer speed of 250 Mbps (megabits).
I expect to see at least 300 MBps (megabytes) backup write speed.

It all works great, it's just slow.
A simple 1.5TB file backup-to-tape job without compression takes 18 hours to write, which isn't acceptable.


TROUBLESHOOTING steps:

I've not installed MPIO, on the assumption that a single backup session to tape over a single dedicated NIC is correct for this setup.
Turned off MS Defender.
Installed StarWind iSCSI Accelerator on both host and VM.
A manual test copy of the backup job data from VM to host over the iSCSI 10Gb NIC achieves 800 MBps, and a repeat copy confirms the RAID controller cache is working, at 1300 MBps.
Checked with HPE, who confirm the native write speed of the LTO9 drive is 350 MBps, so physical network bandwidth and HDD speeds are more than adequate.
The LTO9 is on a dedicated SAS controller.
All LTO firmware and drivers are genuine HPE and the latest versions.

Set the host and VM iSCSI 10Gb NICs for jumbo frames as per your vSAN recommendations.
Set and adjusted TCP/IP properties as per your vSAN recommendations.

netsh int ipv4 set subint "<Name of NIC>" mtu=9000 store=persistent
netsh int tcp set heuristics disabled
netsh int tcp set global autotuninglevel=normal
netsh int tcp set global congestionprovider=ctcp
netsh int tcp set global ecncapability=enabled
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=enabled
netsh int tcp set global dca=enabled
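
As a quick sanity check (my own addition, not from the StarWind papers), I also confirm jumbo frames actually pass end to end with a don't-fragment ping; 8972 bytes of ICMP payload plus 28 bytes of headers equals the 9000-byte MTU. The address below is a placeholder for the iSCSI NIC at the other end:

# Should get replies if a 9000 MTU is in effect along the whole path
# ("Packet needs to be fragmented but DF set" means it is not)
ping -f -l 8972 <iSCSI IP of the opposite NIC>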

StarWind log identifies LTO9 device as follows:

'\\?\scsi#sequential&ven_hpe&prod_ultrium_9-scsi#5&3a99fb66&0&020100#{53f5630b-b6bf-11d0-94f2-00a0c91efb8b}': iScsiSptiDevice::openDevice: 'HPE Ultrium 9-SCSI Q3F9': adapter 3, bus 2, target 1, LUN 0; maxTransferLength 1048576, alignmentMask 0x3

NOTING: maxTransferLength 1048576

Picking up on some other forum discussions, I've adjusted the StarWind config file as follows:

<MinBufferSize value="262144"/>
<iScsiFirstBurstLength value="262144"/>
<iScsiMaxBurstLength value="1048576"/>
<MaxPendingRequests value="1024"/>
<iScsiMaxReceiveDataSegmentLength value="1048576"/>
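
For reference, this is how I restart the service so it re-reads the edited config file (the service name StarWindService is my assumption; verify with Get-Service if yours differs):

# Restart the StarWind service to pick up the config file changes
Restart-Service -Name StarWindService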

After restarting the services, the settings are confirmed within the Console Advanced settings.
(NOTE: even though there is a handy option to modify these settings in the console, I find some settings are restricted and need to be set in the config file instead. Why make them modifiable and then not?)

I then adjusted the MS iSCSI Initiator registry settings on the VM, as per other forum discussions, to match:

"[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\<Instance Number>\Parameters]"

<FirstBurstLength = HEX 0x00040000 (262144)>
<MaxBurstLength = HEX 0x00100000 (1048576)>
<MaxPendingRequests = HEX 0x00000400 (1024)>
<MaxRecvDataSegmentLength = HEX 0x00100000 (1048576)>
<MaxTransferLength = HEX 0x00100000 (1048576)>
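
For anyone following along, a sketch of setting these from PowerShell; the <Instance Number> subkey varies per system, so the path is a placeholder, and all values are REG_DWORD:

# Substitute the actual instance subkey (e.g. 0001) for <Instance Number>
$params = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\<Instance Number>\Parameters'
Set-ItemProperty -Path $params -Name FirstBurstLength -Value 262144 -Type DWord
Set-ItemProperty -Path $params -Name MaxBurstLength -Value 1048576 -Type DWord
Set-ItemProperty -Path $params -Name MaxPendingRequests -Value 1024 -Type DWord
Set-ItemProperty -Path $params -Name MaxRecvDataSegmentLength -Value 1048576 -Type DWord
Set-ItemProperty -Path $params -Name MaxTransferLength -Value 1048576 -Type DWord
# Reboot (or restart the iSCSI initiator service and reconnect the sessions) for these to take effect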


LOG FILE RESULTS:

(on starting the host target service):

1aa0 conf: 'MinBufferSize' = '262144'
1aa0 conf: 'iScsiFirstBurstLength' = '262144'
1aa0 conf: 'iScsiMaxBurstLength' = '1048576'
1aa0 conf: 'MaxPendingRequests' = '1024'
1aa0 conf: 'iScsiMaxReceiveDataSegmentLength' = '1048576'

1aa0 conf: Variable 'MinBufferSize' is set to '262144'.
1aa0 conf: Variable 'iScsiFirstBurstLength' is set to '262144'.
1aa0 conf: Variable 'iScsiMaxBurstLength' is set to '1048576'.
1aa0 conf: Variable 'MaxPendingRequests' is set to '1024'.
1aa0 conf: Variable 'iScsiMaxReceiveDataSegmentLength' is set to '1048576'.

(on starting the VM host & initiator):


160 Params: iScsiParameter::update: <<< Numeric param 'MaxRecvDataSegmentLength': received 1048576, accepted 1048576
160 Params: iScsiParameter::update: <<< Numeric param 'MaxBurstLength': received 1048576, accepted 1048576
160 Params: iScsiParameter::update: <<< Numeric param 'FirstBurstLength': received 262144, accepted 262144

160 Params: iScsiParams::createKeys: >>> MaxRecvDataSegmentLength=1048576.
160 Params: iScsiParams::createKeys: >>> MaxBurstLength=1048576.
160 Params: iScsiParams::createKeys: >>> FirstBurstLength=262144.

All looks good, until immediately after these entries there is an additional one:

1164 Params: iScsiParams::createKeys: >>> MaxRecvDataSegmentLength=8192.


SUMMARY:
After days and days of trawling the internet and all your forums, and picking up on snippets of information, I've made the adjustments above.

The final transfer speed is STILL 250 Mbps :(
Nothing appears to have made any difference!

I'm thinking the last log entry (1164 Params: iScsiParams::createKeys: >>> MaxRecvDataSegmentLength=8192)
indicates a parameter mismatch that resets the setting to the default 8192, which is capping the transfer?


What am I missing, and why is your product so frustrating to use?

Thanks for any assistance; it's very much appreciated.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Jan 11, 2024 4:34 pm

Welcome to StarWind Forum!
Sorry to read that it was quite a quest.
Jimbob wrote:
After days of wasted time finding dribs and drabs of information, it seems a shame that you don't provide a more user-friendly interface, or more information for more modern equipment and devices.
StarWind Tape Redirector is an old free product that is not used much, unfortunately.

A single iSCSI session might not work well. You can test how performance changes with multiple sessions by creating a StarWind *.img file and connecting it over multiple sessions.
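A rough sketch of what that looks like from PowerShell, assuming the Microsoft iSCSI initiator with MPIO enabled (the IQN and portal address are placeholders for your setup):

# Connect the same target twice, MPIO-enabled; repeat for additional sessions
Connect-IscsiTarget -NodeAddress 'iqn.2008-08.com.starwindsoftware:<host>-<target>' -TargetPortalAddress '<portal IP>' -IsMultipathEnabled $true
Connect-IscsiTarget -NodeAddress 'iqn.2008-08.com.starwindsoftware:<host>-<target>' -TargetPortalAddress '<portal IP>' -IsMultipathEnabled $true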
Could you please let me know what the underlying storage performance is and how you measured it? I just want to mention that file copying is not a good test (a lot of factors impact the performance).
You can benchmark disk speed as described here: https://www.starwindsoftware.com/best-p ... practices/.
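For example, a large-block sequential write sketch with Microsoft's diskspd (the block size, duration, queue depth, thread count, and test-file path below are my assumptions to adapt, not values from the article):

# 512 KB sequential writes for 60 s, queue depth 8, 2 threads, caching off, 10 GB test file
diskspd.exe -b512K -d60 -o8 -t2 -w100 -si -Sh -c10G D:\iotest.dat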

Please also try aligning FirstBurstLength with MaxBurstLength (regedit -> HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\000X\Parameters).
Jimbob wrote:
Installed StarWind iSCSI Accelerator on both host and VM.
For a single iSCSI session, it does not improve performance. It helps when clients access storage over multiple iSCSI sessions (reconnecting the sessions is needed for the filter driver to start working).
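A sketch of forcing that reconnect from PowerShell (the IQN and portal IP are placeholders):

# Drop and re-establish the session so the Accelerator's filter driver attaches
Disconnect-IscsiTarget -NodeAddress '<target IQN>' -Confirm:$false
Connect-IscsiTarget -NodeAddress '<target IQN>' -TargetPortalAddress '<portal IP>'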
Jimbob wrote:
Set the host and VM iSCSI 10Gb NICs for jumbo frames as per your vSAN recommendations.
What NIC do you use? Also, what storage configuration do you have?
Jimbob wrote:
(NOTE: even though there is a handy option to modify these settings in the console, I find some settings are restricted and need to be set in the config file instead. Why make them modifiable and then not?)
Some settings require the service to be stopped before they can be modified; those need to be edited in the configuration file.
Jimbob
Posts: 2
Joined: Thu Jan 11, 2024 11:28 am

Tue Jan 16, 2024 12:59 pm

Thanks.
I've adjusted FirstBurstLength to match, as suggested.
No change: still only 250 Mbps (about 30 MBps) transfer speed.

I'm going to abandon this and find another solution.
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Tue Jan 16, 2024 2:04 pm

Can you please share the array performance?
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Mon Feb 26, 2024 8:24 pm

There's a performance limitation in SPTI (SCSI Pass-Through Interface), which is the method we use to access physical tapes on Windows from a user-mode app, which is what StarWind on Windows is. SPTI issues one command at a time (no queue!) synchronously, so the numbers you're seeing are actually pretty good FOR THIS PARTICULAR CASE.
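A back-of-the-envelope check (my rough numbers, using the maxTransferLength 1048576 from your log and the ~30 MBps you observe): with one synchronous command in flight at a time,

throughput = transfer size per command / round-trip time per command
1 MiB / 0.033 s ≈ 30 MiB/s

so your observed speed is consistent with roughly 33 ms spent per 1 MiB command, not with any bandwidth cap on the wire.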

If you find a solution that lets you share your LTO9 @ full speed *and* is free at the same time, please let me know! I'd redirect our similar support complaints there.

See, we could replace SPTI with a custom-written, queue-enabled driver to boost the performance, but... Would you be ready to pay for such a solution? Or do you see it as a "free only" tool?

Thanks!

Anton
Jimbob wrote:
Tue Jan 16, 2024 12:59 pm
Thanks.
I've adjusted FirstBurstLength to match, as suggested.
No change: still only 250 Mbps (about 30 MBps) transfer speed.

I'm going to abandon this and find another solution.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
