NVMe-oF with 2 machines without purchasing expensive NICs

Initiator (iSCSI, FCoE, AoE, iSER and NVMe over Fabrics), iSCSI accelerator and RAM disk

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

rweise
Posts: 2
Joined: Thu Aug 27, 2020 7:40 pm

Thu Aug 27, 2020 7:51 pm

Hello,
your website says:
Problem
Common protocols – iSCSI, iSER, NFS, and SMB3 – have proven inefficient for presenting flash over the network. Built around a single short command queue, they were designed to connect HDDs, not flash. Because of this design, these protocols cut off a large share of PCIe SSD performance (up to 50%). Moneywise, such a severe performance loss means that you may need more PCIe SSDs at some point, which is overkill for SMEs and ROBOs!
To date, means for presenting PCIe SSDs to an entire Hyper-V cluster are still under development. NVMe-oF, the protocol tailored to let VMs talk efficiently to PCIe drives, is a perfect alternative to the traditional protocols, but there is no NVMe-oF software initiator for Windows yet, meaning that clients cannot use the hardware effectively. Yes, there are NVMe-oF-ready network interface cards (NICs) that allow Hyper-V VMs to talk to PCIe SSDs. But buying expensive NICs right after purchasing a couple of flash drives is quite a big investment.
Solution
NVMe-oF keeps the latency of NVMe drives connected over the network lower than 10 ms. Applications feel essentially no difference between drives presented over the network and ones connected locally. The single short command queue is replaced with multiple independent queues of up to 64K commands each. This design delivers all the performance a PCIe SSD can provide.
StarWind NVMe-oF Initiator is the only free software solution for Windows that lets you enjoy all the benefits of PCIe SSDs. It is a software module that connects a host (i.e., a client) to a block storage target (i.e., a remote server). The solution lets the client talk to the target over RDMA, meaning that all I/O can bypass the CPU. As a result, there are more spare compute resources for VMs.
Bring the full NVMe-oF support to your environment at no cost!
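The queue claim above can be made concrete with Little's law: sustained IOPS is roughly the number of outstanding commands divided by per-command latency. A minimal sketch, with purely hypothetical queue counts, depths, and latency chosen for illustration:

```python
# Little's law: sustained IOPS ~= outstanding commands / per-command latency.
def max_iops(queue_count: int, queue_depth: int, latency_s: float) -> float:
    """Idealized ceiling on IOPS for a given queueing configuration."""
    return queue_count * queue_depth / latency_s

# Hypothetical 100 microseconds of per-command latency over the fabric.
legacy = max_iops(queue_count=1, queue_depth=32, latency_s=100e-6)    # one shallow queue
nvmeof = max_iops(queue_count=8, queue_depth=1024, latency_s=100e-6)  # many deep queues
print(f"legacy-style: {legacy:,.0f} IOPS, NVMe-oF-style: {nvmeof:,.0f} IOPS")
```

With identical per-command latency, the deeper parallel queues raise the throughput ceiling by orders of magnitude, which is why a single short command queue caps what flash can deliver over the network.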

OK, fine, that is exactly what I am looking for. But when I start the installation, the following message appears:

System Requirements
.....- Mellanox RDMA-capable Network Adapters on the machine with the initiator and on the machine with the NVME-OF Target...



That is exactly NOT what I wanted to do: to buy "expensive NICs right after purchasing a couple of flash drives"! How do those statements fit together? Can you explain, please? I just want to connect two machines directly. Or am I wrong and your solution doesn't fit my idea?

Regards,
Raphael
yaroslav (staff)
Staff
Posts: 2904
Joined: Mon Nov 18, 2019 11:11 am

Mon Aug 31, 2020 10:05 am

Greetings,

Sorry for the delayed response, and welcome to the StarWind forum! True, our implementation of the NVMe-oF Initiator requires RDMA-capable NICs (learn why here: https://www.starwindsoftware.com/resour ... ite-paper/). However, they are needed only to present PCIe drives over NVMe-oF.
You can still present NVMe drives over good old iSCSI to make the servers work together.
Kasey009
Posts: 2
Joined: Wed May 19, 2021 10:22 am

Wed May 19, 2021 10:26 am

buying expensive NICs right after purchasing a couple of flash drives is quite a big investment.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Fri May 21, 2021 12:43 pm

Kasey009 wrote: buying expensive NICs right after purchasing a couple of flash drives is quite a big investment.

The StarWind NVMe-oF initiator now supports NVMe over TCP, so you no longer need any RDMA NICs. The TCP data path won't make your CPUs happy, but you'll get the performance numbers you want right away.
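For readers on the Linux side, a plain-TCP NVMe-oF connection with the standard nvme-cli tooling looks roughly like this. This is generic nvme-cli usage, not the StarWind initiator itself, and the address, port, and NQN below are placeholders:

```shell
# Load the NVMe/TCP transport module (mainline since kernel 5.0).
modprobe nvme-tcp

# Discover subsystems exposed by the target (placeholder address/port).
nvme discover -t tcp -a 192.168.0.10 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder name).
nvme connect -t tcp -a 192.168.0.10 -s 4420 -n nqn.2021-05.example:nvme-target

# The remote namespace now appears as a local /dev/nvmeXnY block device.
nvme list
```

Commands require root and a live NVMe/TCP target; they are shown here only to illustrate that the TCP transport needs no special NIC.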
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
