Migration away from ZFS: are non-RAID HBAs supported?

Software-based VM-centric and flash-friendly VM storage + free version


mpaska
Posts: 3
Joined: Fri Dec 26, 2014 2:36 am

Fri Dec 26, 2014 2:43 am

I'm looking to move our backup cluster away from ZFS to a hyper-converged StarWind setup with VMware, so that we can also use it for some development/testing, as our backup performance needs are quite low.

We have 2x SuperMicro servers with non-RAID HBAs in them (LSI 9207-8e), each connected to a storage shelf of 12x 4TB SATA NAS drives. Previously, ZFS handled parity/disk redundancy via RAID-Z groups.

Is it possible and recommended to use StarWind with non-RAID HBA controllers? If so, what's the best way to set this all up? Present each disk to the Windows/StarWind VM and create a software RAID volume?
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Fri Dec 26, 2014 7:53 am

Generally speaking, it's a bad idea to pass disks through to the StarWind VM, let Windows build a software RAID (Storage Spaces) from them, and put StarWind Virtual Volumes on top of that. To make a long story short: too complex to work :)

There's a good discussion on SpiceWorks about why you should not use software RAID at the VM level; see:

http://community.spiceworks.com/topic/2 ... -pros-cons

I'd strongly suggest having a properly configured and managed RAID datastore visible to the VMware kernel as a virtual LUN. Since VMware's software RAID component is missing (for a reason), you'll have to stick with a hardware one.
LSI controllers are inexpensive (used ones on eBay are dirt cheap), well supported by VMware (they're on the HCL), and StarWind loves them in our lab. Ping me if you want to know the models we use to build our "test bed" setups.

Good luck :)
mpaska wrote:I'm looking to move our backup cluster away from ZFS to a hyper-converged StarWind setup with VMware, so that we can also use it for some development/testing, as our backup performance needs are quite low.

We have 2x SuperMicro servers with non-RAID HBAs in them (LSI 9207-8e), each connected to a storage shelf of 12x 4TB SATA NAS drives. Previously, ZFS handled parity/disk redundancy via RAID-Z groups.

Is it possible and recommended to use StarWind with non-RAID HBA controllers? If so, what's the best way to set this all up? Present each disk to the Windows/StarWind VM and create a software RAID volume?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

mpaska
Posts: 3
Joined: Fri Dec 26, 2014 2:36 am

Fri Dec 26, 2014 9:00 am

Interesting. Most software-defined SAN solutions these days (good examples: EMC's ScaleIO, VMware VSAN) are moving away from the hardware RAID requirement; they deal directly with individual disks and handle data protection at a software-defined layer, making them completely hardware independent. It's also becoming rarer to find external HBAs with RAID support, so the industry is shifting.

So you absolutely wouldn't recommend passing disks through via VT-d/VMware Raw Device Mapping (RDM), building a software RAID on top of that via Storage Spaces, and presenting that to StarWind?

Is moving away from requiring RAID altogether, to a setup similar to ScaleIO/VSAN, on the roadmap?
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sun Dec 28, 2014 9:47 pm

It's not a design flaw or a missing feature, as you might think now; StarWind is simply built on a different set of concepts. Let me clarify :)

EMC ScaleIO and VMware Virtual SAN both:

1) Do not use RAM-based caches (well, nearly)

2) Distribute a particular VM's data across multiple nodes in the cluster, so VM content is not guaranteed to be integral (at any given point in time some nodes may hold only PARTIAL VM content)

3) Require at least 10 GbE networking between nodes (Microsoft Storage Spaces Shared Nothing will ask you for 40 GbE, BTW).

4) Don't have a preferred data path and don't care where the actual block is located - on the same node where the VM is running or on another hypervisor node on the network: 10 GbE networking is MUCH faster than spinning disk and MOST of the flash used as cache (EMC and VMware products cannot do all-flash configs effectively so far; there would still be a flash write buffer and flash storage tiers that are physically SEPARATED)

5) A cluster of N nodes can survive the failure of 1 node (2-way replication) or 2 nodes (3-way replication) before going down completely

StarWind, on the other hand:

1) DOES have aggressive RAM-based write-back caches --> PERFORMANCE

2) VM content is always integral --> EASY TO RECOVER

3) HAS a relaxed networking requirement of 1 GbE --> CHEAPER ENTRY INTO THE HYPER-CONVERGED GAME

4) HAS a preferred data path because of 1) and 3): even with 10 GbE connectivity, a RAM-based cache is always faster than a 10 GbE RDMA read/write from another node on the network (with 1 GbE it's always true) --> WE DO LIKE TO HAVE VM DATA AND THE ACTUAL RUNNING VM ON THE SAME HYPERVISOR NODE

5) DOES NOT distribute data between many nodes (replication vs. erasure coding) --> WHILE ERASURE CODING IS MORE EFFICIENT IN MOST CASES, REPLICATION ALLOWS US TO KEEP VM DATA ALWAYS INTEGRAL, AND THE CLUSTER KEEPS GOING WITH ONLY ONE NODE LEFT

That's why we prefer to have some redundancy even at the hypervisor level, as even a single surviving node of the cluster can keep going with at least SOME of the workload (military people LOVE this, VDI admins LOVE this). We're happy even with RAID0 and 3-way replication for the virtual LUNs we create.
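
To put rough numbers on that replication-versus-erasure-coding trade-off, here's a quick back-of-the-envelope sketch (my own illustration with assumed node sizes, not output from StarWind, ScaleIO, or VSAN):

# Usable capacity and rough fault tolerance for N-way replication vs. a
# simple data+parity erasure code. Node sizes here are assumptions.
def replicated_usable(node_tb, copies):
    # every block is stored on `copies` nodes
    return sum(node_tb) / copies

def erasure_usable(node_tb, data, parity):
    # classic (data + parity) stripe layout
    return sum(node_tb) * data / (data + parity)

nodes = [20, 20, 20, 20]                        # four nodes, 20 TB raw each
print(replicated_usable(nodes, 2))              # 40.0 TB usable, survives 1 node loss
print(replicated_usable(nodes, 3))              # ~26.7 TB usable, survives 2 node losses
print(erasure_usable(nodes, data=3, parity=1))  # 60.0 TB usable, survives 1 node loss

Erasure coding wins on usable capacity, but only the replicated layouts keep a complete, immediately usable copy of the VM data on a surviving node.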

P.S. The upcoming version of StarWind will go further with vVols and Virtual Volumes; there we'll be happy with stand-alone disks, STILL with RAID6 configured over a group of them on the local node.
mpaska wrote:Interesting. Most software-defined SAN solutions these days (good examples: EMC's ScaleIO, VMware VSAN) are moving away from the hardware RAID requirement; they deal directly with individual disks and handle data protection at a software-defined layer, making them completely hardware independent. It's also becoming rarer to find external HBAs with RAID support, so the industry is shifting.

So you absolutely wouldn't recommend passing disks through via VT-d/VMware Raw Device Mapping (RDM), building a software RAID on top of that via Storage Spaces, and presenting that to StarWind?

Is moving away from requiring RAID altogether, to a setup similar to ScaleIO/VSAN, on the roadmap?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

mpaska
Posts: 3
Joined: Fri Dec 26, 2014 2:36 am

Tue Dec 30, 2014 12:10 am

Thanks for the reply, Anton. One last question.
anton (staff) wrote:It's not a design flaw or a missing feature, as you might think now; StarWind is simply built on a different set of concepts. Let me clarify :)

5) DOES NOT distribute data between many nodes (replication vs. erasure coding) --> WHILE ERASURE CODING IS MORE EFFICIENT IN MOST CASES, REPLICATION ALLOWS US TO KEEP VM DATA ALWAYS INTEGRAL, AND THE CLUSTER KEEPS GOING WITH ONLY ONE NODE LEFT
This is where I'm still a little confused. Looking at the scale-up-and-out solution at http://www.starwindsoftware.com/scale-out-page, it's not clear how this replication model can scale.

When adding more nodes, you gain more CPU/memory resources; that part makes sense. But by adding more storage, you're just increasing redundancy by replicating your storage to another array. Say, for example, you start with 2x 20TB usable arrays in a 2-node config; you'll have 20TB of available storage. You then "scale up and out" and add a 3rd node with 20TB of usable storage: now you have a 3-way replicated array, but still with only 20TB of available storage. While you've increased CPU/memory in this instance, you've not gained any additional storage space.

If instead your new 3rd node has 30TB of usable storage and you add it to your existing 2x 20TB replicated nodes, what happens then?
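
Just to spell out the arithmetic I'm assuming (every node holding a full copy of the same data), here's a quick sketch:

# Usable capacity if every node keeps a complete replica: the smallest node
# caps what the cluster can store, so extra nodes add redundancy, not space.
def full_replica_usable(node_tb):
    return min(node_tb)

print(full_replica_usable([20, 20]))       # 2 nodes of 20 TB    -> 20 TB usable
print(full_replica_usable([20, 20, 20]))   # add a 3rd 20 TB node -> still 20 TB
print(full_replica_usable([20, 20, 30]))   # 3rd node is 30 TB    -> still 20 TB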
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Dec 30, 2014 12:07 pm

The number of StarWind nodes in a cluster is not tied to the replication factor. The key point is to create multiple LUNs cross-replicated between different nodes (at least one per node plus one "partner"). Drop me an e-mail and I'll send you a Scale-Out draft.
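
To illustrate the idea (this is just my sketch of the layout, not a StarWind configuration): with one LUN owned by each node and replicated to one partner, a 4-node cluster looks roughly like this:

# One LUN per node, synchronously mirrored to the next node in the ring.
# Node and LUN names here are made up purely for illustration.
def ring_layout(nodes):
    return {f"LUN-{owner}": [owner, nodes[(i + 1) % len(nodes)]]
            for i, owner in enumerate(nodes)}

print(ring_layout(["node1", "node2", "node3", "node4"]))
# {'LUN-node1': ['node1', 'node2'], 'LUN-node2': ['node2', 'node3'],
#  'LUN-node3': ['node3', 'node4'], 'LUN-node4': ['node4', 'node1']}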
mpaska wrote:Thanks for the reply, Anton. One last question.
anton (staff) wrote:It's not a design flaw or a missing feature, as you might think now; StarWind is simply built on a different set of concepts. Let me clarify :)

5) DOES NOT distribute data between many nodes (replication vs. erasure coding) --> WHILE ERASURE CODING IS MORE EFFICIENT IN MOST CASES, REPLICATION ALLOWS US TO KEEP VM DATA ALWAYS INTEGRAL, AND THE CLUSTER KEEPS GOING WITH ONLY ONE NODE LEFT
This is where I'm still a little confused. Looking at the scale-up-and-out solution at http://www.starwindsoftware.com/scale-out-page, it's not clear how this replication model can scale.

When adding more nodes, you gain more CPU/memory resources; that part makes sense. But by adding more storage, you're just increasing redundancy by replicating your storage to another array. Say, for example, you start with 2x 20TB usable arrays in a 2-node config; you'll have 20TB of available storage. You then "scale up and out" and add a 3rd node with 20TB of usable storage: now you have a 3-way replicated array, but still with only 20TB of available storage. While you've increased CPU/memory in this instance, you've not gained any additional storage space.

If instead your new 3rd node has 30TB of usable storage and you add it to your existing 2x 20TB replicated nodes, what happens then?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

barrysmoke
Posts: 86
Joined: Tue Oct 15, 2013 5:11 pm

Tue Jan 06, 2015 5:37 pm

Don't confuse 3-way replication with a 3-node limit in a cluster. If you have 10 nodes, iSCSI target A can be replicated across nodes 1-3, then iSCSI target B can be replicated across nodes 2, 3, and 9. There's no limit on which nodes you can replicate to; it's just that each iSCSI target can only be in synchronous replication with 3 nodes. You can then also add an async target as a backup.

VMware scale-out makes more sense: because of the VMFS clustered filesystem, you gain all of the incoming node's storage. With Hyper-V, you split the storage on each node in half, because you want to add the node into the replication chain.
If the previous cluster had AB, BC, CA, the incoming node would make it AB, BC, CD, DA, so you gain half of the incoming node's storage into the cluster... make sense?
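
Here's a small sketch of that capacity math (my own illustration, assuming equal-sized nodes and 2-way mirrored LUNs arranged in a ring):

# Each node splits its raw capacity across the two mirror pairs it belongs to,
# so a ring of 2-way mirrored LUNs yields half of the total raw capacity.
def ring_usable(node_tb):
    return sum(node_tb) / 2

print(ring_usable([20, 20, 20]))      # AB, BC, CA     -> 30 TB usable
print(ring_usable([20, 20, 20, 20]))  # AB, BC, CD, DA -> 40 TB usable
# The incoming 20 TB node added 10 TB of usable space: half of its raw capacity.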
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Jan 06, 2015 5:41 pm

We'll take it one step further and go to VM = vLUN (vVol or Virtual Volume, depending on the hypervisor).
barrysmoke wrote:Don't confuse 3-way replication with a 3-node limit in a cluster. If you have 10 nodes, iSCSI target A can be replicated across nodes 1-3, then iSCSI target B can be replicated across nodes 2, 3, and 9. There's no limit on which nodes you can replicate to; it's just that each iSCSI target can only be in synchronous replication with 3 nodes. You can then also add an async target as a backup.

VMware scale-out makes more sense: because of the VMFS clustered filesystem, you gain all of the incoming node's storage. With Hyper-V, you split the storage on each node in half, because you want to add the node into the replication chain.
If the previous cluster had AB, BC, CA, the incoming node would make it AB, BC, CD, DA, so you gain half of the incoming node's storage into the cluster... make sense?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

ruthwhite
Posts: 1
Joined: Wed Jul 10, 2024 9:40 am

Wed Jul 10, 2024 9:59 am

The migration away from ZFS raises questions about the support for RAIDs.
Can't wait to interact with all of you!
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Thu Sep 12, 2024 6:08 pm

StarWind fully supports layering its data containers on top of virtual LUNs created and managed by a RAID controller.
ruthwhite wrote:
Wed Jul 10, 2024 9:59 am
The migration away from ZFS raises questions about the support for non-RAID HBAs (Host Bus Adapters). It's crucial to consider compatibility and performance implications transitioning storage solutions. Understanding whether non-RAID HBAs are supported is pivotal for maintaining system integrity and optimizing storage configurations. Exploring these aspects ensures a seamless migration process tailored to specific needs and hardware capabilities.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
