Considering switching from FreeNAS
Posted: Fri Feb 12, 2016 5:33 pm
Hey everyone. First post here.
I've been a FreeNAS user for many years now, but am at the point where I am ready to try something new. The major catalyst for this has been moving to MPIO iSCSI (quad-port GbE NIC) and not seeing the utilization I expected... even with multiple concurrent operations. No matter what I do, XenServer only seems to want to read/write over one path... even when migrating multiple VMs concurrently. I'm a Citrix employee and have seen XenServer blaze over iSCSI, but not with my implementations (lol). It obviously could be something I'm doing wrong, but I don't think so.
When I try to ask about it on the FreeNAS forums, Cyberjock there (very smart dude) is usually the first to reply with: "You need more RAM!" <paraphrased> or "CoW doesn't play well with iSCSI". Over the years, the amount of RAM I've had available has grown from 4GB (in an HP N40L microserver) all the way up to 64GB (Dell R710). The results have been the same, regardless... despite Cyber's insistence that RAM is the culprit. Maybe for a much larger environment... but we're talking about two Xen hosts and 1-2 FreeNAS boxes. Maybe there is a guide out there for FreeNAS to "tune" ZFS/iSCSI to enable MPIO to work the way it's designed, but I haven't found one. In fact, most FreeNAS forum requests for such assistance end up the same way. The end result of my troubleshooting/research is that:
1) It's possible, but not worth the hassle. Cyber has often indicated that you need to throw more hardware at the problem, as opposed to tweaking to get things working better.
2) MPIO w/iSCSI is inherently limited by FreeNAS and/or ZFS, and there really is nothing you CAN do.
So at this point, I am considering other alternatives. I will say, I LOVE ZFS. In my opinion, it is the best file system I've worked with. ZFS snaps have saved my *** many times, and being able to replicate over SSH has been a suitable alternative to FreeNAS not being capable of HA. Still, I am willing to try something else that may better suit my needs, and will work well with my hardware.
That need is essentially being able to better utilize the quad-port GigE NICs I use in my servers (or the ones built into the R710s... Broadcom, I believe). I have roughly 25-30 machines MAX, spread across the two XenServers, along with two FreeNAS SRs... one per server, but have considered going to two SRs per R710. Since my switching environment supports 802.3ad, I have even tried LACP/NFS, but the XenServer gurus have indicated that MPIO iSCSI would be better, performance-wise.
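For reference, this is roughly how I understood the XenServer side should be laid out for MPIO to round-robin across all four ports: one subnet per path, multipathing enabled on the host before plugging the SR. The subnets and UUIDs below are placeholders, not my actual config, so treat this as a sketch of what I've tried rather than a known-good recipe:

```shell
# One subnet per iSCSI path, so open-iscsi sees four distinct portals
# instead of collapsing everything onto one interface. (Subnets made up.)
# Assign each storage PIF its own address (PIF/host UUIDs are placeholders):
xe pif-reconfigure-ip uuid=<pif1-uuid> mode=static IP=10.10.1.21 netmask=255.255.255.0
xe pif-reconfigure-ip uuid=<pif2-uuid> mode=static IP=10.10.2.21 netmask=255.255.255.0

# Enable multipath handling on the host before attaching the SR:
xe host-param-set uuid=<host-uuid> other-config:multipathing=true \
    other-config:multipathhandle=dmp

# After the SR is plugged, every path should show up here:
multipath -ll
```

This matches what I see in the field when it works; on my boxes `multipath -ll` shows the paths, but traffic still rides one of them.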
My hardware environment is as follows:
- Four Dell R710s:
-- (2) with 64GB RAM and (2) Xeon X5570s (used for the Xen hosts)
-- (2) with 32GB RAM and (1) Xeon E5504 (used for the FreeNAS hosts)
- TP-Link TL-SG2424 layer 3 GigE switch with 802.3ad (LACP) support
- Integrated quad-port NIC in each R710 (also tested with separate quad-port HP NC364T Intel-based adapters... same results)
- I also have a Dell 2950 with 32GB of RAM (and two X5470 Xeons) and an HP NC364T that I was using previously, which behaved roughly the same in FreeNAS... It just used more power, and was MUCH louder.
Focusing on the two R710s I'm using for storage:
- Each has the aforementioned single Xeon 4-core E5504 and 32GB of ECC RAM
- One has a Dell PERC 6/i, with the six drives presented in RAID 0 (which was my only option)
- One has a Dell H200 (per FreeNAS forum recommendations) to directly pass through the drives
- Each has the latest firmware (with the exception of the H200 card)
- One has three 2TB 5400RPM Western Digital Red drives in a RAIDZ1 pool, the other has three 2TB 7200RPM Hitachis
- Each has a 120-240GB SSD for L2ARC cache
- Each has a single file-based (not device) Xen SR presented to the two-host XenServer pool
Other notables:
- The Dell H200 is on firmware version 7. FreeNAS gripes about it not being version 20, and I've also read that others have crossflashed the adapter to an LSI IT-mode firmware, to be able to truly pass the drives through to FreeNAS as JBOD... not RAID 0... and take advantage of SMART
- Both XenServer and FreeNAS are pretty much out-of-the-box deployments, with little customization
- Both XenServer hosts have their iSCSI software initiators set up with all four IPs
- Both XenServer hosts have multipathing enabled
- I know of no specific need (or way) to enable MPIO on FreeNAS. If this is something I'm missing, that might explain everything
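On that last point, the closest thing to an MPIO "switch" I could find on the FreeNAS side is just binding the iSCSI portal to every storage interface, so the target advertises all four paths. If I'm reading it right, the ctld config FreeNAS (9.3+) generates would look something like the sketch below... addresses, target name, and zvol path are all made up here, not my real config:

```
# Hypothetical /etc/ctl.conf portal section with the portal bound to
# all four storage interfaces (placeholder addresses):
portal-group pg1 {
    discovery-auth-group no-authentication
    listen 10.10.1.10:3260
    listen 10.10.2.10:3260
    listen 10.10.3.10:3260
    listen 10.10.4.10:3260
}

target iqn.2016-02.org.example:xen-sr1 {
    portal-group pg1
    lun 0 {
        path /dev/zvol/tank/xen-sr1
    }
}
```

If there's more to it than that on the FreeNAS side, that gap might explain everything.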
So now that I've explained what I've been trying to achieve with FreeNAS, is StarWind capable of this? Obviously, FreeNAS being "free" is a huge motivator for me, as this is just a home domain/lab for deploying Citrix products like XenApp, XenDesktop, NetScaler, etc. I also host my family's domain on a very similar environment, but my storage needs there are much lighter (though I would love MPIO working there as well).