Using iSCSI NAS devices (QNAP, Synology, etc.)

Software-based VM-centric and flash-friendly VM storage + free version


impaladoug
Posts: 15
Joined: Sun May 24, 2015 3:24 am
Location: Farmington, NM

Sun May 24, 2015 3:57 am

I'm not seeing many references to people using devices like QNAP, Synology, or others as the block devices for StarWind Virtual SAN. Can StarWind Virtual SAN use any iSCSI target as a storage/block device that can be added to its storage pools?

I saw one thread in these forums mentioning that the StarWind iSCSI initiator has been deprecated (pulled from production and no longer supported) and that people should use Microsoft's iSCSI initiator instead when connecting to a target.

So does this mean that using other iSCSI targets as block/storage devices for the StarWind Virtual SAN is supported?

Thanks,
-
Doug Mortensen
Impala Networks, Inc.
-
CCNA, MCSA, Security+, Network+, Linux+, Server+, A+, MCP, VSP
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sun May 24, 2015 12:14 pm

Doug, yes, you can do that, no problem. We don't do iSCSI connectivity (as an initiator) for now, but you can place StarWind containers (IMG or LSFS) on SMB3/NFSv4 shares/mount points, and we'll virtualize, accelerate (local caches, log-structuring, and a shortened I/O path), and make fault-tolerant your typical backup storage. We'll publish a manual and a white paper on this topic soon. Please stay tuned :) Thank you!
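
Purely as an illustration (this is not StarWind tooling), here's a minimal Python sketch of the kind of pre-flight check you might run before placing a container on a NAS share: it verifies that the SMB export is reachable and writable. The share path and file name are made up.

    import os

    # Hypothetical UNC path to an SMB3 share exported by the NAS;
    # substitute your own share and container file name.
    SHARE = r"\\qnap-nas\starwind"
    CONTAINER = os.path.join(SHARE, "vsan-device1.img")

    def share_is_usable(path):
        """Return True if the share exists and files can be created on it."""
        if not os.path.isdir(path):
            return False
        probe = os.path.join(path, ".starwind_probe")
        try:
            with open(probe, "wb") as f:
                f.write(b"ok")
            os.remove(probe)
            return True
        except OSError:
            return False

    if share_is_usable(SHARE):
        print(SHARE, "is reachable and writable; a container like", CONTAINER, "could live here.")
    else:
        print(SHARE, "is not usable; check the NAS export and permissions.")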
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

impaladoug
Posts: 15
Joined: Sun May 24, 2015 3:24 am
Location: Farmington, NM

Sun May 24, 2015 6:17 pm

Anton, thanks for the response. Just to be clear, are you saying that using Microsoft's iSCSI initiator for this purpose is indeed supported? Or are you saying that the only way we can/should use a QNAP, Synology, FreeNAS, etc. is to have StarWind Virtual SAN's Windows OS access it via SMB and/or NFS?

Thanks,
-
Doug
-
CCNA, MCSA, Security+, Network+, Linux+, Server+, A+, MCP, VSP
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sun May 24, 2015 9:09 pm

You use StarWind to "virtualize" your external storage (SAN/NAS/DAS, it does not matter). Then you connect to it with whatever you want: MSFT iSCSI, MSFT SMB3, Oracle NFSv4, etc. Initiators (clients) talk to StarWind, and we hide all the background housekeeping from the user.
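
On the client side, a rough Python sketch of connecting with the Microsoft iSCSI initiator by driving the stock iscsicli utility (assuming a Windows host with the initiator service running; the portal address is hypothetical and error handling is minimal):

    import subprocess

    PORTAL = "192.168.1.50"  # hypothetical address of the StarWind host

    def run(cmd):
        """Run a command, returning stdout and raising if it fails."""
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # Register the portal, discover targets, and log in to the first one.
    run(["iscsicli", "QAddTargetPortal", PORTAL])
    targets = [line.strip()
               for line in run(["iscsicli", "ListTargets"]).splitlines()
               if line.strip().startswith("iqn.")]
    if targets:
        run(["iscsicli", "QLoginTarget", targets[0]])
        print("Logged in to", targets[0])
    else:
        print("No targets discovered on", PORTAL)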
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

impaladoug
Posts: 15
Joined: Sun May 24, 2015 3:24 am
Location: Farmington, NM

Tue May 26, 2015 10:27 pm

Thanks for the clarification on this. I ask because back in February, one of your sales guys told me that we had to use direct-attached storage. He pointed me toward the Dell PE R720/R730. We are a Dell shop and didn't mind the recommendation, but it would be nice to use iSCSI sometimes due to its flexibility and truly hardware-agnostic approach.

Thanks much!
-
CCNA, MCSA, Security+, Network+, Linux+, Server+, A+, MCP, VSP
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Thu May 28, 2015 11:26 am

Most of our production scenarios are with DAS. An iSCSI uplink has much higher latency than SATA/SAS.
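
To put numbers on that gap on your own hardware, a rough Python sketch that times small random reads from a file on local DAS against one on an iSCSI-mounted volume (the paths are hypothetical; use files much larger than RAM, or the page cache will flatter both results):

    import os
    import random
    import time

    def avg_read_latency_ms(path, block=4096, samples=200):
        """Average latency of small random reads from a file, in ms."""
        size = os.path.getsize(path)
        fd = os.open(path, os.O_RDONLY | getattr(os, "O_BINARY", 0))
        try:
            start = time.perf_counter()
            for _ in range(samples):
                os.lseek(fd, random.randrange(max(size - block, 1)), os.SEEK_SET)
                os.read(fd, block)
            return (time.perf_counter() - start) / samples * 1000.0
        finally:
            os.close(fd)

    # One test file on local DAS, one on the iSCSI-mounted volume.
    print("DAS:   %.3f ms" % avg_read_latency_ms(r"C:\test\sample.bin"))
    print("iSCSI: %.3f ms" % avg_read_latency_ms(r"E:\test\sample.bin"))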
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

impaladoug
Posts: 15
Joined: Sun May 24, 2015 3:24 am
Location: Farmington, NM

Thu May 28, 2015 2:38 pm

Anton,

Do you feel that DAS is as flexible and scalable as iSCSI?
-
CCNA, MCSA, Security+, Network+, Linux+, Server+, A+, MCP, VSP
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Thu May 28, 2015 4:31 pm

We're comparing apples to oranges here...
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

impaladoug
Posts: 15
Joined: Sun May 24, 2015 3:24 am
Location: Farmington, NM

Thu May 28, 2015 5:19 pm

I take that as a "no".
-
CCNA, MCSA, Security+, Network+, Linux+, Server+, A+, MCP, VSP
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Jun 02, 2015 10:48 am

OK, let me clarify that :)

1) We prefer to deal with DAS, as it's the most cost-effective way to add capacity to a hypervisor cluster. We take disks, flash, and Ethernet and build a fault-tolerant pool from that.

2) It's absolutely OK to take an existing NAS or SAN instead of hard disks and layer StarWind on top, so every NAS unit effectively becomes a sort of "RAID inside" hard disk. The connection is typically iSCSI, NFS, or SMB over Ethernet, so it is higher latency and less reliable (switched fabric) than low-latency, highly reliable non-switched SAS or SATA. But it's doable and a supported scenario. Fewer people use it compared to 1), but it's OK.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

impaladoug
Posts: 15
Joined: Sun May 24, 2015 3:24 am
Location: Farmington, NM

Tue Jun 02, 2015 3:57 pm

Anton,

Thanks for the clarification.

Now I'm really curious about the bandwidth we'd get out of a Dell PERC H800 connected to external JBOD trays (i.e., at what point would we max out a single card's bandwidth, or a single PCI Express bus?).

Based on a quick check, the H800 uses PCIe 2.0 x8.

PCIe 2.0 (per Wikipedia) is 500 MB/s per lane. So an x8 (8-lane) card sits on a PCIe connection that can provide 4 GB/s (in networking terms, ~32 Gbps).

However, it seems to me that the major bottleneck here is the H800 card itself. It can only support a data transfer rate of 600 MB/s (per http://accessories.us.dell.com/sna/popu ... l=en&s=dfb).

A 600 MB/s storage card (H800) in a 4 GB/s slot is only using 15% of the bus's capacity.

I checked the H810, and its data transfer rate is still listed as 600 MB/s too...
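
Working that arithmetic through (the PCIe per-lane rate and the 15% figure come from the numbers above; reading the 600 MB/s as a per-lane 6 Gb/s SAS rate, with the H800's two external x4 ports giving 8 SAS lanes, is my assumption):

    # PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> 500 MB/s per lane.
    pcie_lane_mb_s = 500
    lanes = 8                       # the H800 is a PCIe 2.0 x8 card
    bus_mb_s = pcie_lane_mb_s * lanes
    print("PCIe 2.0 x8:", bus_mb_s, "MB/s (~%.0f Gbps)" % (bus_mb_s * 8 / 1000))

    # If the card truly topped out at 600 MB/s, bus utilization would be:
    print("Utilization at 600 MB/s: %.0f%%" % (600 / bus_mb_s * 100))

    # Assumption: the spec's 600 MB/s is per SAS lane
    # (6 Gb/s * 8/10 encoding / 8 bits = 600 MB/s), and the H800's two
    # external x4 ports give 8 lanes in total:
    sas_lane_mb_s = 6000 * 8 // 10 // 8     # 600 MB/s per 6 Gb/s lane
    print("Aggregate SAS side:", sas_lane_mb_s * 8, "MB/s")

On that reading, the SAS side (~4800 MB/s) roughly matches the PCIe 2.0 x8 slot, so the card wouldn't be the 15%-of-the-bus bottleneck it first appears to be.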
-
CCNA, MCSA, Security+, Network+, Linux+, Server+, A+, MCP, VSP
Jhon_Smith
Posts: 20
Joined: Thu Jun 04, 2015 9:14 am

Mon Jun 08, 2015 2:20 pm

Careful with the units there: the Dell spec quotes the transfer rate in gigabits. A 6 Gb/s SAS link, converted from gigabits to gigabytes (with encoding overhead), works out to 600 MBps per lane, so that figure describes a single lane rather than the whole card.
Oles (staff)
Staff
Posts: 91
Joined: Fri Mar 20, 2015 10:58 am

Wed Jun 17, 2015 1:25 pm

Hello guys!
Please let me know if you have any questions left. Thank you!