Discuss

Posted: Thu Oct 17, 2019 9:47 am
by Kasey00
Hi!

I understand that you guys discontinued the initiator. I suppose in favour of the MS built-in initiator?

Well, I'm a home user and I'm considering iSCSI to centralise storage. However, I have grown accustomed to SSDs and would like SSD-like performance even on remote drives.
I have been testing iSCSI for a while now, and while performance is far from abysmal, it is still orders of magnitude slower than a local SSD.

Question: You already support caching on the server side. Would you be willing to consider continuing initiator development in order to build the cache logic into the client side as well? It is always preferable to have both ends from the same provider, and your VirtualSAN is looking really fine so far.

Thanks.

Re: Discuss

Posted: Sun Oct 20, 2019 9:28 am
by anton (staff)
Yes, Microsoft didn't want to WHQL our monolithic SCSI port iSCSI initiator, and we didn't want to re-write it as a StorPort miniport. We didn't drop the code completely, however: we ported it to user mode, and it now does iSCSI & iSER for StarWind synchronization traffic between the nodes, which is way more efficient and error-free compared to using the Microsoft iSCSI initiator, which is what we did before.

We still have to rely on the Microsoft iSCSI initiator for uplink connectivity, and you're 100% right here, but it has its own issues, so we ended up with two special drivers to work around its problematic behavior. The first is a TCP loopback accelerator (we don't distribute it outside of our own products), which bypasses most of the TCP stack for simplicity and performance when we connect iSCSI in a loopback. The second is an iSCSI accelerator (this one is available for download and is our free product), which distributes every new TCP/iSCSI session onto a new CPU core; the stone-age Microsoft code just picks a random core for that, leaving some cores 100% loaded while others do nothing, which obviously hurts performance.
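If you're curious what that per-core distribution looks like, here is a minimal user-mode sketch in C of the round-robin idea: pin each new session's worker thread to the next CPU core instead of letting the OS pick one. This is an illustration only, not our driver code; the worker body and the fixed session count are made up.

/*
 * Minimal sketch of round-robin session-to-core distribution.
 * Illustration only: the worker and session count are invented.
 */
#include <windows.h>
#include <stdio.h>

#define SESSIONS 8  /* pretend 8 iSCSI sessions arrive */

static DWORD WINAPI session_worker(LPVOID arg)
{
    /* Stand-in for per-session TCP/iSCSI processing. */
    printf("session %u handled on its pinned core\n", (unsigned)(UINT_PTR)arg);
    return 0;
}

int main(void)
{
    SYSTEM_INFO si;
    HANDLE threads[SESSIONS];
    DWORD i;

    GetSystemInfo(&si);

    for (i = 0; i < SESSIONS; i++) {
        /* Start suspended so affinity is set before the thread runs. */
        threads[i] = CreateThread(NULL, 0, session_worker,
                                  (LPVOID)(UINT_PTR)i, CREATE_SUSPENDED, NULL);
        if (threads[i] == NULL)
            return 1;

        /* Round-robin: session N lands on core N mod core-count, so the
         * load spreads evenly instead of piling onto one random core. */
        SetThreadAffinityMask(threads[i],
                              (DWORD_PTR)1 << (i % si.dwNumberOfProcessors));
        ResumeThread(threads[i]);
    }

    WaitForMultipleObjects(SESSIONS, threads, TRUE, INFINITE);
    for (i = 0; i < SESSIONS; i++)
        CloseHandle(threads[i]);
    return 0;
}

The real accelerator does this at the driver level, of course, but the principle is the same: deterministic spreading beats random placement once you have more sessions than cores.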

Bad news: we're a revenue-driven company, and we don't see much profit in the low-margin SOHO space. Good news: for NVMe redirection you can use an NVMe-oF Target (we have one for Windows as part of StarWind VSAN) and an NVMe-oF Initiator (which we also have, as a free product) to cover your needs. For now you'll need RDMA-capable Mellanox NICs, but we're working on NVMe-over-TCP as well.

Hope this helped :)