NVMe initiator tuning

Initiator (iSCSI, FCoE, AoE, iSER and NVMe over Fabrics), iSCSI accelerator and RAM disk

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

Xavier
Posts: 8
Joined: Thu Aug 31, 2023 4:24 pm

Fri Sep 01, 2023 12:07 pm

On the iSCSI initiator it's possible to modify:

"MaxTransferLength"=dword:00040000
"MaxBurstLength"=dword:00040000
"FirstBurstLength"=dword:00040000
"MaxRecvDataSegmentLength"=dword:00040000

Is there a way to do the same for the NVMe initiator?
yaroslav (staff)
Staff
Posts: 3103
Joined: Mon Nov 18, 2019 11:11 am

Fri Sep 01, 2023 7:31 pm

Hi,

Check under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\StarNVMeoF\Parameters
Xavier
Posts: 8
Joined: Thu Aug 31, 2023 4:24 pm

Tue Sep 05, 2023 7:32 am

Hello

It contains:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\StarNVMeoF\Parameters]
"Auto"=dword:00000000
"BusType"=dword:0000000a
"IoLatencyCap"=dword:00000000
"IoFlags"=dword:00000003

So I just need to add the other keys at the same level?
yaroslav (staff)
Staff
Posts: 3103
Joined: Mon Nov 18, 2019 11:11 am

Tue Sep 05, 2023 7:48 am

Greetings,

No, you only need to set MaxTransferLength to the optimal value (e.g., 8K or 4K).
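For example, a minimal .reg sketch (assuming the value is interpreted in bytes, like the iSCSI parameters above, so 8K would be hex 2000; worth verifying on a test system first):

Windows Registry Editor Version 5.00

; Sketch only: MaxTransferLength under the StarNVMeoF Parameters key
; Assumes the value is in bytes (00002000 hex = 8K, 00001000 hex = 4K)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\StarNVMeoF\Parameters]
"MaxTransferLength"=dword:00002000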
Xavier
Posts: 8
Joined: Thu Aug 31, 2023 4:24 pm

Tue Sep 05, 2023 8:08 am

I'm evaluating the NVMe initiator against iSCSI.

In my case I have ESXi using Mellanox with SR-IOV; the VM uses a VF to get direct Mellanox access.

The iSCSI configuration uses large blocks, as my application does large I/O (roughly 128k on read, 256k on write).

I get the best result on iSCSI with the transfer length at 256k.

I just tried 4k with StarWind and the results are not good. What is the max size I can use? By default it seems to be 64k (that's what I see on the storage frontend).
yaroslav (staff)
Staff
Posts: 3103
Joined: Mon Nov 18, 2019 11:11 am

Tue Sep 05, 2023 8:32 am

Thanks for your update. Feel free to play around with the parameters, as all of them need careful alignment for each system.
An 8K block size has been noticed to be the best fit for certain targets. You can try any other value in your tests.
Xavier
Posts: 8
Joined: Thu Aug 31, 2023 4:24 pm

Tue Sep 05, 2023 9:11 am

OK, thanks.

I already tested 4k, 8k, and 16k; I see a large improvement over the default value with 16k, so you're right, I need to dig into this ;)
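For reference, assuming the parameter is interpreted in bytes like the iSCSI keys, the dword values I tried map as:

"MaxTransferLength"=dword:00001000   (4k)
"MaxTransferLength"=dword:00002000   (8k)
"MaxTransferLength"=dword:00004000   (16k)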
yaroslav (staff)
Staff
Posts: 3103
Joined: Mon Nov 18, 2019 11:11 am

Tue Sep 05, 2023 9:28 am

Greetings,

Please keep me posted. Thanks!
Xavier
Posts: 8
Joined: Thu Aug 31, 2023 4:24 pm

Tue Sep 05, 2023 4:00 pm

Feedback on my side:

Comparing iSCSI vs NVMe in my particular case, I see a 4% increase in bandwidth for NVMe with 16k (the value that gives me the best result) against iSCSI configured at 256k.
Increasing beyond 16k with the NVMe initiator is counterproductive from what I see.
But there is also a 10% increase in CPU usage.
Not enough for me to switch to NVMe for my particular use case.

Thanks for your support
yaroslav (staff)
Staff
Posts: 3103
Joined: Mon Nov 18, 2019 11:11 am

Tue Sep 05, 2023 10:34 pm

Thanks for your update. You are always welcome.
Were you using NVMe-oF over TCP or RDMA? Also, could you please tell me more about the network topology of the system and the underlying storage?
Xavier
Posts: 8
Joined: Thu Aug 31, 2023 4:24 pm

Thu Sep 07, 2023 8:04 am

We use NVMe over RoCE.
The storage is a Huawei 5510 configured with a hybrid pool (SSD and 14TB NL-SAS).
Between the storage and ESXi we also use Huawei 25Gb Ethernet switches (two ToR).
We have a Mellanox ConnectX-4 as the smart NIC on ESXi.
We use SR-IOV to expose a VF to the VM, with two NICs configured on the VM side, each on its own VLAN.
The storage is thin, 48TB in size, with a single LUN. The I/O profile is 256k I/O size for writes and variable on reads, with the majority of IOPS between 8k and 16k; writes are fully random, reads are sequential.
We did this test with NVMe to see what improvement we could get, versus the cost of also adding a smart I/O NIC card to the storage array.
yaroslav (staff)
Staff
Posts: 3103
Joined: Mon Nov 18, 2019 11:11 am

Thu Sep 07, 2023 10:20 am

Thanks for your update.
"The storage is a Huawei 5510 configured with a hybrid pool (SSD and 14TB NL-SAS)."
Did you test the underlying storage performance separately?
"The storage is thin"
Based on my previous experience, thin storage eats a lot of performance due to the additional I/O required for provisioning storage.

What I am saying is that the storage settings can be optimized too.
Xavier
Posts: 8
Joined: Thu Aug 31, 2023 4:24 pm

Wed Sep 13, 2023 7:35 am

Hello

We compared the same storage configuration, only changing the protocol: iSCSI vs NVMe over RoCE.

This storage only supports thin devices, and I agree with you that it could affect performance during UNMAP commands, but on our side we have deactivated it so as not to put more pressure on the backend.
yaroslav (staff)
Staff
Posts: 3103
Joined: Mon Nov 18, 2019 11:11 am

Wed Sep 13, 2023 11:38 am

Xavier,

Quite an unfortunate limitation.