Considering switching from FreeNAS

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

Cabuzzi
Posts: 23
Joined: Fri Feb 12, 2016 3:26 pm

Fri Feb 26, 2016 4:38 pm

Vladislav (Staff) wrote: Hi Cabuzzi!

Ouch! It looks like the guidance I provided was really outdated, and I am really sorry for that :(

We actually have a new one available here: https://www.starwindsoftware.com/techni ... r-Pool.pdf

Please refer to "Applying multipathing" section of this doc and let me know if it helps.

Meanwhile I am configuring a test setup in my lab in order to double check it.

Thank you and sorry for this silly mistake :(
Vlad,

That is much, much better. Thank you for that.

I won't be able to make these changes until later this afternoon. After reviewing the documentation, however, I ended up with one question:

1) Under the "Configure iSCSI Initiator" section, it talks about editing the /etc/iscsi/iscsid.conf file. My only concern is that I still have the FreeNAS storage attached, which is also iSCSI-based (see the output of "multipath -ll" in my post above). Do you feel this change is safe for the machines currently running on that old FreeNAS storage?
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Fri Feb 26, 2016 6:03 pm

Hi, Cabuzzi,

These settings will not disrupt your FreeNAS connectivity, since they are only iSCSI tweaks for better performance. The latest XenServer version already ships with:

Code: Select all

node.startup = automatic
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
You can surely leave these unchanged and proceed with the next steps.
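If you want to double-check what your initiator is actually using (both the defaults in the file and the values negotiated on sessions that are already logged in), open-iscsi can report both. A quick sketch, assuming the stock open-iscsi tools on the XenServer host:

```shell
# Defaults that any NEW session will pick up at login time
grep -E 'node.startup|FirstBurstLength|MaxBurstLength' /etc/iscsi/iscsid.conf

# Parameters actually negotiated on sessions already logged in
# (print level 2 includes the "Negotiated iSCSI params" section)
iscsiadm -m session -P 2 | grep -Ei 'target:|burst'
```

As far as I know, your existing FreeNAS sessions keep whatever they negotiated when they logged in; changed defaults only affect sessions created afterwards.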
Cabuzzi
Posts: 23
Joined: Fri Feb 12, 2016 3:26 pm

Sat Feb 27, 2016 1:50 am

Awesome. Looking forward to testing it over the weekend!


Chris
Cabuzzi
Posts: 23
Joined: Fri Feb 12, 2016 3:26 pm

Sun Feb 28, 2016 7:14 pm

Well, so far... things aren't going well. I completely blew away my configuration on the SW server and followed the guide as closely as I could. As I mentioned in previous posts, I could connect with my last setup just fine (using what I knew of Xen and iSCSI, and applying it to SW), but did not have multipath activity on the four connections.

Following the guide, I can get up to the point where I run the script. The script freezes after step 8 of the "Adding Shared Storage using automated script" section. I enter the four IPs, comma-separated... just as I would in the Xen CLI or via the GUI when adding an SR.

I then tried the manual steps listed under "Adding Shared Storage manually". Discovery works fine, as does connecting to the targets in step 3. Running the multipath check "multipath -ll" (implied in step 5) returns only the previously connected FreeNAS SR. If I continue on to step 6, the xe sr-probe command hangs indefinitely.

I've also tried adding the SR via the GUI. The target LUN does show up correctly, but it takes about 10x longer than it should to appear. When it does, finishing the wizard reports an error of "Invalid argument. Check your settings and try again".

What I've tried:
- Ensuring I can ping all the SW iSCSI interfaces (101.7, 102.7, 103.7, and 104.7) from both Xen hosts
- Telnet from each Xen host to port 3260, on all SW interfaces
- Completely disabled Windows Firewall on the SW server
- Rebooted both Xen hosts in the pool
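The ping/telnet checks above are easy to script so they can be re-run after every change. A minimal sketch using bash's built-in /dev/tcp redirection (no telnet required), with the portal IPs from my setup:

```shell
# Return success if a TCP connection to HOST:PORT can be opened within 2s
check_portal() {
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# The four StarWind iSCSI portal IPs from this setup
for ip in 192.168.101.7 192.168.102.7 192.168.103.7 192.168.104.7; do
    if check_portal "$ip" 3260; then
        echo "$ip:3260 reachable"
    else
        echo "$ip:3260 NOT reachable"
    fi
done
```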

Here is the output of running the commands in the guide:

Code: Select all

[root@xen1 ~]# iscsiadm -m discovery -t st -p 192.168.101.7
iscsiadm -m discovery -t st -p 192.168.102.7
iscsiadm -m discovery -t st -p 192.168.103.7
iscsiadm -m discovery -t st -p 192.168.104.7
iscsiadm -m node -T iqn.2004-07.com.cabuzzi.sw1:iscsi1 -p 192.168.101.7 -l
iscsiadm -m node -T iqn.2004-07.com.cabuzzi.sw1:iscsi1 -p 192.168.102.7 -l
iscsiadm -m node -T iqn.2004-07.com.cabuzzi.sw1:iscsi1 -p 192.168.103.7 -l
iscsiadm -m node -T iqn.2004-07.com.cabuzzi.sw1:iscsi1 -p 192.168.104.7 -l
192.168.101.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.102.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.1.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.103.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.104.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
[root@xen1 ~]# iscsiadm -m discovery -t st -p 192.168.102.7
192.168.102.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.101.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.1.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.103.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.104.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
[root@xen1 ~]# iscsiadm -m discovery -t st -p 192.168.103.7
192.168.103.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.101.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.102.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.1.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.104.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
[root@xen1 ~]# iscsiadm -m discovery -t st -p 192.168.104.7
192.168.104.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.101.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.102.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.1.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
192.168.103.7:3260,1 iqn.2004-07.com.cabuzzi.sw1:iscsi1
[root@xen1 ~]# iscsiadm -m node -T iqn.2004-07.com.cabuzzi.sw1:iscsi1 -p 192.168.101.7 -l
Logging in to [iface: default, target: iqn.2004-07.com.cabuzzi.sw1:iscsi1, portal: 192.168.101.7,3260] (multiple)
Login to [iface: default, target: iqn.2004-07.com.cabuzzi.sw1:iscsi1, portal: 192.168.101.7,3260] successful.
[root@xen1 ~]# iscsiadm -m node -T iqn.2004-07.com.cabuzzi.sw1:iscsi1 -p 192.168.102.7 -l
Logging in to [iface: default, target: iqn.2004-07.com.cabuzzi.sw1:iscsi1, portal: 192.168.102.7,3260] (multiple)
Login to [iface: default, target: iqn.2004-07.com.cabuzzi.sw1:iscsi1, portal: 192.168.102.7,3260] successful.
[root@xen1 ~]# iscsiadm -m node -T iqn.2004-07.com.cabuzzi.sw1:iscsi1 -p 192.168.103.7 -l
Logging in to [iface: default, target: iqn.2004-07.com.cabuzzi.sw1:iscsi1, portal: 192.168.103.7,3260] (multiple)
Login to [iface: default, target: iqn.2004-07.com.cabuzzi.sw1:iscsi1, portal: 192.168.103.7,3260] successful.
[root@xen1 ~]# iscsiadm -m node -T iqn.2004-07.com.cabuzzi.sw1:iscsi1 -p 192.168.104.7 -l
Logging in to [iface: default, target: iqn.2004-07.com.cabuzzi.sw1:iscsi1, portal: 192.168.104.7,3260] (multiple)
Login to [iface: default, target: iqn.2004-07.com.cabuzzi.sw1:iscsi1, portal: 192.168.104.7,3260] successful.
[root@xen1 ~]# iscsiadm -m node -T iqn.2004-07.com.cabuzzi.sw1:iscsi1 -p 192.168.103.7 -l
[root@xen1 ~]# multipath -ll
36589cfc000000b8c7d2eba0fc493b9e8 dm-0 FreeBSD,iSCSI Disk
size=7.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 2:0:0:0 sdb 8:16  active ready running
  |- 3:0:0:0 sdc 8:32  active ready running
  |- 4:0:0:0 sdd 8:48  active ready running
  `- 5:0:0:0 sde 8:64  active ready running
[root@xen1 ~]# xe sr-probe type=lvmoiscsi device-config:target=192.168.101.7,192.168.102.7,192.168.103.7,192.168.104.7 device-config:targetIQN=*
^C
(had to halt the command, as it was not replying)
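For what it's worth, when the probe hangs with all four portals and the wildcard IQN, narrowing it down to one portal with the exact IQN (taken from the discovery output above) can help isolate which leg is stuck. A sketch, not the guide's exact command:

```shell
# Probe a single portal with an explicit IQN instead of targetIQN=*
xe sr-probe type=lvmoiscsi \
   device-config:target=192.168.101.7 \
   device-config:targetIQN=iqn.2004-07.com.cabuzzi.sw1:iscsi1
```

As I understand it, a first probe of a reachable target normally "fails" with an XML blob listing the LUN's SCSIid (which is what sr-create then needs), so an error with XML in it is actually progress; an indefinite hang is the real symptom.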

Here is the output of my /etc/multipath.conf file:

Code: Select all

[root@xen1 ~]# more /etc/multipath.conf
# This configuration file is used to overwrite the built-in configuration of
# multipathd.
# For information on the syntax refer to `man multipath.conf` and the examples
# in `/usr/share/doc/device-mapper-multipath-*/`.
# To check the currently running multipath configuration see the output of
# `multipathd -k"show conf"`.
defaults {
        user_friendly_names     no
        replace_wwid_whitespace yes
        dev_loss_tmo            30
}
devices {
        device {
                vendor                  "DataCore"
                product                 "SAN*"
                path_checker            "tur"
                path_grouping_policy    failover
                failback                30
        }
        device {
                vendor                  "DELL"
                product                 "MD36xx(i|f)"
                features                "2 pg_init_retries 50"
                hardware_handler        "1 rdac"
                path_selector           "round-robin 0"
                path_grouping_policy    group_by_prio
                failback                immediate
                rr_min_io               100
                path_checker            rdac
                prio                    rdac
                no_path_retry           30
        }
        device {
                vendor                  "DGC"
                product                 ".*"
                detect_prio             yes
                retain_attached_hw_handler yes
        }
        device {
                vendor                  "EQLOGIC"
                product                 "100E-00"
                path_grouping_policy    multibus
                path_checker            tur
                failback                immediate
                path_selector           "round-robin 0"
                rr_min_io               3
                rr_weight               priorities
        }
        device {
                vendor                  "IBM"
                product                 "1723*"
                hardware_handler        "1 rdac"
                path_selector           "round-robin 0"
                path_grouping_policy    group_by_prio
                failback                immediate
                path_checker            rdac
                prio                    rdac
        }
        device {
                vendor                  "NETAPP"
                product                 "LUN.*"
                dev_loss_tmo            30
        }
        device {
                vendor                  "FreeBSD"
                product                 "iSCSI Disk"
                path_grouping_policy    multibus
                path_selector           "round-robin 0"
                rr_min_io               1
        }
        device {
                vendor                  "STARWIND"
                product                 "STARWIND*"
                path_grouping_policy    multibus
                path_checker            "tur"
                failback                immediate
                path_selector           "round-robin 0"
                rr_min_io               3
                rr_weight               uniform
                hardware_handler        "1 alua"
        }
}
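To confirm that the STARWIND stanza above is what the running daemon is actually using (and not just what is on disk), the file's own header comment suggests querying multipathd directly; something along these lines:

```shell
# Dump the configuration multipathd is currently running with,
# and show the STARWIND device section if it was merged in
multipathd -k"show config" | grep -A 9 'vendor "STARWIND"'

# Force a verbose re-scan; -v3 prints why each path is or is not grouped
multipath -v3
```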
*** Per darklight's recommendation, I made no changes to the iscsid.conf file.

Here is a screenshot of the error when trying to add the SR via the XenCenter GUI:
[screenshot: XenCenter "Invalid argument" error dialog]

One other thing I noticed... there seems to be no (obvious) way to specifically bind the four IPs I have allocated for iSCSI paths to the target in SW. When doing a netstat -an, it appears 3260 is listening on all IPs. Ideally, wouldn't you want 3261 only bound to the "management" interface of the SW server, and 3260 only bound to the IPs you mean to dedicate to iSCSI traffic?

Interestingly enough, netstat shows 3260 listening on all IPs (even the mgmt connection), as well as a connection to each of the Xen hosts... even though a "multipath -ll" in Xen only shows the FreeNAS connections:

Code: Select all

C:\Users\chris.adm>netstat -an | "findstr" 3260
  TCP    127.0.0.1:3260         0.0.0.0:0              LISTENING
  TCP    192.168.1.7:3260       0.0.0.0:0              LISTENING
  TCP    192.168.1.7:3260       192.168.1.50:56721     ESTABLISHED
  TCP    192.168.101.7:3260     0.0.0.0:0              LISTENING
  TCP    192.168.101.7:3260     192.168.101.50:49165   ESTABLISHED
  TCP    192.168.101.7:3260     192.168.101.51:39739   ESTABLISHED
  TCP    192.168.102.7:3260     0.0.0.0:0              LISTENING
  TCP    192.168.102.7:3260     192.168.102.50:55228   ESTABLISHED
  TCP    192.168.102.7:3260     192.168.102.51:38212   ESTABLISHED
  TCP    192.168.103.7:3260     0.0.0.0:0              LISTENING
  TCP    192.168.103.7:3260     192.168.103.50:38704   ESTABLISHED
  TCP    192.168.103.7:3260     192.168.103.51:52311   ESTABLISHED
  TCP    192.168.104.7:3260     0.0.0.0:0              LISTENING
  TCP    192.168.104.7:3260     192.168.104.50:37860   ESTABLISHED
  TCP    192.168.104.7:3260     192.168.104.51:53057   ESTABLISHED
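As an aside, filtering that netstat output down to just the LISTENING sockets makes the "bound everywhere" point easy to see. A small awk one-liner, run here over an inlined sample of the output above:

```shell
# A few `netstat -an` lines saved from the StarWind box (abbreviated sample)
netstat_sample='  TCP    127.0.0.1:3260         0.0.0.0:0              LISTENING
  TCP    192.168.1.7:3260       0.0.0.0:0              LISTENING
  TCP    192.168.1.7:3260       192.168.1.50:56721     ESTABLISHED
  TCP    192.168.101.7:3260     0.0.0.0:0              LISTENING'

# Print only the local IPs where port 3260 is in LISTENING state
echo "$netstat_sample" | awk '$4 == "LISTENING" && $2 ~ /:3260$/ { sub(/:3260$/, "", $2); print $2 }'
```

On the real output this prints every interface, management included, confirming that the target is not bound to the four dedicated iSCSI IPs only.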
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Mon Feb 29, 2016 3:19 pm

Hi Chris,

It seems to me this thread is full of misunderstandings and misleading information. I will try to work out what you are actually doing and unravel this tangle.

You are using StarWind Virtual SAN Free, right?

In that case, you will not be able to connect StarWind HA devices over iSCSI. As I already wrote in a previous comment, you have to enable the Scale-Out File Server role on your StarWind boxes and present an HA NFS share to your Xen hosts. The guides I linked are somewhat outdated but still give a general idea of how it works.

Unfortunately, multipathing and the other iSCSI bells and whistles are only possible over iSCSI, which is not available in the free version.
Cabuzzi
Posts: 23
Joined: Fri Feb 12, 2016 3:26 pm

Mon Feb 29, 2016 6:21 pm

I must have misinterpreted what is available in the free version. I was under the impression that iSCSI was allowed under the free version, but HA iSCSI was not. By HA iSCSI, I took HA to mean a "cluster" or failover mode between two actual nodes (i.e., two StarWind servers), not HA as in independent iSCSI connections (i.e., multiple NICs).

Since I was able to connect to an iSCSI share last week (before making the specific multipath.conf changes in Xen, recommended by the newer installation guide you posted), does that mean I have a trial license for the paid product? I am 99% sure I chose the Free version when I requested a license...
buzzzo
Posts: 13
Joined: Wed Jan 20, 2016 6:39 pm

Mon Feb 29, 2016 8:12 pm

Cabuzzi, the free version gives you iSCSI only on loopback (e.g., local connections inside the StarWind hosts).
This is useful if you plan to implement an NFS cluster running on top of Microsoft Failover Cluster (just present the StarWind LUN as a shared CSV volume on the Microsoft cluster and run an NFS share on top of it).

As for XenServer: I was able to run even a hyperconverged setup (i.e., two StarWind VMs on two XenServer hosts) and correctly export the StarWind LUN(s) as lvmoiscsi shared storage to XenServer 6.5.

Multipath seems to work on the four iSCSI connections.
Feel free to ask me for some hints.
Cabuzzi
Posts: 23
Joined: Fri Feb 12, 2016 3:26 pm

Mon Feb 29, 2016 9:30 pm

buzzzo wrote: Cabuzzi, the free version gives you iSCSI only on loopback (e.g., local connections inside the StarWind hosts).
This is useful if you plan to implement an NFS cluster running on top of Microsoft Failover Cluster (just present the StarWind LUN as a shared CSV volume on the Microsoft cluster and run an NFS share on top of it).

As for XenServer: I was able to run even a hyperconverged setup (i.e., two StarWind VMs on two XenServer hosts) and correctly export the StarWind LUN(s) as lvmoiscsi shared storage to XenServer 6.5.

Multipath seems to work on the four iSCSI connections.
Feel free to ask me for some hints.
I guess I am confused about the licensing. If I understand correctly, the "free" license supports either hyperconverged or converged (i.e., separate storage/compute). I have two Xen servers and one available storage server. My other storage server is running FreeNAS, where my VMs currently live.

As per my original post, I am looking to move away from FreeNAS (preferably to another free solution) due to poor iSCSI performance. If Starwind VSAN Free cannot present a multipathed iSCSI LUN, then I am officially very frustrated. My time (as well as all of yours) is valuable. I am not really interested in paying for a license in a lab environment, but also considered this as an unofficial evaluation of Starwind, as I consult on a large number of projects per year.
Cabuzzi
Posts: 23
Joined: Fri Feb 12, 2016 3:26 pm

Tue Mar 01, 2016 3:27 am

Well, I'm home... and I just double-checked my license. I most definitely have the VSAN Free license applied, and I have already been able to make multiple iSCSI connections... just without MPIO working very well. Since making the multipath.conf changes in the referenced PDF, I haven't been able to connect at all.

I don't think this is a license issue. I am only running one SW box... no HA.
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Fri Mar 04, 2016 11:00 pm

In the case of running one box, you should be fine with iSCSI and multipathing.
As for the issues you are having, I just tested a similar scenario in my home lab: two VMs with Xen and one VM with StarWind. I used the guide provided by Vladislav (Staff) above and successfully connected my StarWind LUN to both Xen servers using the script.
Obtaining that strange SR number did take some time (a couple of minutes, I believe), but it worked.

Have you tried with only one iSCSI connection/IP? Without multipathing? Did it work?
Cabuzzi
Posts: 23
Joined: Fri Feb 12, 2016 3:26 pm

Sat Mar 05, 2016 7:24 pm

I believe I may be having some odd issues with one or more of my Xen hosts. I am working on that now and hope to reconnect the SW SAN when I'm done (hopefully tonight).

Thanks for testing it. At least I know I did everything correctly. XenServer can be finicky...
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Wed Mar 09, 2016 7:12 pm

@Cabuzzi, can I ask you if you have an update for us please?
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
Cabuzzi
Posts: 23
Joined: Fri Feb 12, 2016 3:26 pm

Thu Mar 10, 2016 3:16 am

Sorry for the lack of updates... I'm actually on work travel right now. Right before I left, I was able to actually connect XenServer to the iSCSI target on the SW server and began to transfer files. Unfortunately, even with the multipath.conf file configured exactly the way your instructions indicated, I still did not see multipath activity. One of the four connections was maxed out at gigabit speeds.

What I don't know is whether this is a XenServer issue. The inability to connect at all was certainly the result of goofed-up XenServers, due to leftover iSCSI configs that were not cleaned up when I removed one of the previously attached FreeNAS instances. This time, I completely rebuilt the XenServers and started from scratch... using only the configuration provided by StarWind.
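For reference, the leftover records I mean live under /etc/iscsi/ on the hosts, and in hindsight they can usually be cleared with open-iscsi itself instead of a full rebuild. A sketch, using a made-up stale FreeNAS IQN:

```shell
# List everything open-iscsi still remembers
iscsiadm -m node

# Log out of the stale target's sessions, then delete its node records
# (iqn.2005-10.org.freenas.ctl:oldtarget is a hypothetical example IQN)
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:oldtarget -u
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:oldtarget -o delete
```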

@darklight
When you set this up in your home lab, were you also attempting to use multipathing? Did you see network traffic over all the interfaces?
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Fri Mar 11, 2016 2:38 pm

Unfortunately I did not test that deeply. I just configured it and made sure it works.
Anyway, multipathing is more of a Xen-side thing, since StarWind listens and accepts connections on all possible interfaces and supports multipathing by default and by design.
Cabuzzi
Posts: 23
Joined: Fri Feb 12, 2016 3:26 pm

Sat Mar 12, 2016 6:32 pm

No problem. Could it also be that the traffic is coming from a FreeNAS machine? Basically, I am using XenCenter to move machine storage from the old storage on the FreeNAS server to the new storage on the SW server. If there is an issue with multipathing on the FreeNAS machine, that would limit how much MPIO could be used, right?

I believe I will stick with SW for now, if for nothing else than to learn new systems. Once running, it is a clean setup and does have the advantages of Windows. While I will miss ZFS and the flexibility of non-RAID storage, I will give SW a chance to change my mind on the non-RAID approach.

I have another Dell R710 configured the same as the one I'm using for SW now. What is the limit of what I can do with the Free license I applied for? Is it worth setting up another SW server? It's hard to tell what type of HA (if any) can be used with the free license. Is HA available only with SMB storage, with iSCSI, or not at all under the Free license?