Setup Question

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

D9BDCEA2CE
Posts: 25
Joined: Tue Dec 09, 2014 1:51 pm

Thu Jan 22, 2015 5:10 pm

Having read a variety of documents about how to set StarWind up, I have got to the stage where I can install the software on two nodes, create targets, and create devices in those targets. The devices on the two nodes appear to be completely independent of each other, though. I thought the whole point was that I could have a virtual SAN, so that data would be synchronised between the two nodes while appearing to Windows as one shared storage area. I don't seem to be able to get replication to work.

Do I have it right? I take two servers, install Starwind. Then I can create virtual drives via Starwind on each of those servers and have Starwind present them as a shared storage area, with Starwind synchronising the data between them.

So far I haven't been able to find notes which properly explain how to do that.

I'm hoping someone can point me in the direction of an explanation and I've just missed the point somehow.

Thank you.

(EDIT - I answered my own question, still haven't found this adequately explained in the documentation though - anyone?)
Last edited by D9BDCEA2CE on Fri Jan 23, 2015 9:55 am, edited 2 times in total.
D9BDCEA2CE
Posts: 25
Joined: Tue Dec 09, 2014 1:51 pm

Thu Jan 22, 2015 5:23 pm

Isn't it always the way? I think giving up a bit lets you relax and look at it more clearly.

Almost as soon as I'd finished writing that I realised I hadn't tried creating a cluster. None of the notes I'd read had mentioned it.

I created a cluster, added the two servers to it, then created a device in that cluster, which the software promptly synchronised between the two servers for me, exactly as I'd hoped. Now to see if Windows is happy with the storage for Failover Clustering...
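For anyone else who lands here: the behaviour I was after boils down to synchronous mirroring, where a write is only acknowledged after both nodes have it. Here's a toy Python sketch of that idea (purely illustrative, not StarWind's actual protocol; all names and addresses are made up):

```python
# Toy model of two-node synchronous replication: a write is only
# acknowledged once BOTH nodes have stored it, so either node can
# serve the latest data after a failover.
# (Illustrative only -- not StarWind's actual implementation.)

class Node:
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # block address -> data

    def write(self, addr, data):
        self.blocks[addr] = data

class SyncMirror:
    """Presents two independent nodes as one logical device."""
    def __init__(self, node_a, node_b):
        self.nodes = (node_a, node_b)

    def write(self, addr, data):
        # Synchronous: commit on every replica before acknowledging.
        for node in self.nodes:
            node.write(addr, data)
        return "ack"

    def read(self, addr, preferred=0):
        # Every replica holds identical data, so reads can stay local.
        return self.nodes[preferred].blocks[addr]

mirror = SyncMirror(Node("node1"), Node("node2"))
mirror.write(0x10, b"cluster data")
print(mirror.read(0x10, preferred=0) == mirror.read(0x10, preferred=1))  # True
```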
D9BDCEA2CE
Posts: 25
Joined: Tue Dec 09, 2014 1:51 pm

Thu Jan 22, 2015 5:43 pm

And there it is. Once I created a volume in the new space and brought it online on both members of the Failover Cluster, the StarWind drive became available to be added as shared storage in Failover Cluster Manager.

One final step - Validate Configuration.
D9BDCEA2CE
Posts: 25
Joined: Tue Dec 09, 2014 1:51 pm

Thu Jan 22, 2015 5:56 pm

The Validation Wizard is happy. Well, thanks for listening, maybe this will help someone else in future.

I'd still like to see notes on this to make sure I haven't missed something, if you know where they are let me know.
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Mon Jan 26, 2015 2:55 pm

Hi,
Sorry to hear you couldn't find the necessary documentation right away.
Here is the guide for a hyper-converged deployment
http://www.starwindsoftware.com/starwin ... -v-cluster
Here is the guide for a compute and storage separated deployment with storage served as SMB3 share:
http://www.starwindsoftware.com/compute ... th-hyper-v
Please let me know if this helps.
Max Kolomyeytsev
StarWind Software
D9BDCEA2CE
Posts: 25
Joined: Tue Dec 09, 2014 1:51 pm

Fri Feb 06, 2015 2:56 pm

I'm just impatient to get it working, since my boss is impatient to have it working... always the same.

I have since (with Anatoly Vilchinsky's help) discovered http://www.starwindsoftware.com/starwin ... -v-cluster and I'm making good progress. I have been able to create the HA storage I'm after, now I'm playing with the iSCSI bit. It's not exactly user-friendly.

I do have a new question:

On page 5 of the document I've linked to it describes how "both Sync and iSCSI Data use their own vLANs within 2 crossover connections between the servers." It seems to me (unless I'm missing something) that for vLANs there would have to be a switch but the diagram doesn't show one, unless what it means is that the two crossover cables have two IP addresses assigned to the interfaces at each end?
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Wed Feb 11, 2015 12:26 pm

Hi! Thanks for the reply!
> I do have a new question:
>
> On page 5 of the document I've linked to it describes how "both Sync and iSCSI Data use their own vLANs within 2 crossover connections between the servers." It seems to me (unless I'm missing something) that for vLANs there would have to be a switch but the diagram doesn't show one, unless what it means is that the two crossover cables have two IP addresses assigned to the interfaces at each end?
Yep, you are correct. Looks like a typo; we'll edit it soon. Thanks for figuring this out!
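To make the intent concrete, here is a hypothetical addressing plan: each crossover link carries two separate subnets (one for Sync, one for iSCSI data), with one address per node on each. All addresses below are invented for illustration; a quick Python check confirms the plan is self-consistent:

```python
# Hypothetical addressing for the two direct (crossover) links, giving
# Sync and iSCSI traffic their own subnets on each link instead of
# switch-tagged VLANs. All addresses are made up for illustration.
import ipaddress

links = {
    "crossover-1": {
        "sync":  ("172.16.10.1/24", "172.16.10.2/24"),   # node1, node2
        "iscsi": ("172.16.20.1/24", "172.16.20.2/24"),
    },
    "crossover-2": {
        "sync":  ("172.16.11.1/24", "172.16.11.2/24"),
        "iscsi": ("172.16.21.1/24", "172.16.21.2/24"),
    },
}

# Sanity check: both endpoints of each link/role pair share one subnet,
# and no subnet is reused anywhere else in the plan.
subnets = []
for link, roles in links.items():
    for role, (a, b) in roles.items():
        net_a = ipaddress.ip_interface(a).network
        net_b = ipaddress.ip_interface(b).network
        assert net_a == net_b, f"{link}/{role}: endpoints on different subnets"
        subnets.append(net_a)

assert len(set(subnets)) == len(subnets), "subnets overlap"
print("addressing plan is consistent")
```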
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
D9BDCEA2CE
Posts: 25
Joined: Tue Dec 09, 2014 1:51 pm

Thu Feb 12, 2015 2:49 pm

As I progress through setting up the test version (I've only had to start over twice, so I'm not doing too badly) I've come across another question.

Page 13 of the Hyper-Converged 2 Node document says "It is recommended to create at least one CSV volume per cluster node" - so in the two node scenario I need 2 x HAImage and the witness? Why is that? I'd rather have one big space instead of breaking up my storage.
transparent
Posts: 11
Joined: Mon Jun 02, 2014 9:21 pm

Thu Feb 12, 2015 8:15 pm

If you're running Hyper-V on the same node that's running StarWind, you can take advantage of local loopback acceleration to access that local CSV directly, and set your MPIO policy to fail over to the other node. You'd do the same on the partner node; that way each node runs local workloads against its local CSV, bypassing the TCP stack, while keeping the ability to fail over to the other node.

If your StarWind SANs are on separate dedicated machines from your Hyper-V hosts, it doesn't matter as much and you can create one space and use Round Robin as the MPIO policy to balance requests across the cluster.

Hope that makes sense.
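A toy sketch of the two policies in Python, if it helps (illustrative only; the path names are made up and this is not how MPIO is actually implemented):

```python
# Contrast the two MPIO policies discussed above: "failover only"
# (stay on the local loopback path until it dies) versus "round robin"
# (alternate requests across all available paths).
from itertools import cycle

class FailoverOnly:
    def __init__(self, local, remote):
        self.local, self.remote = local, remote
        self.local_up = True

    def pick(self):
        # Prefer the local path; only use the partner when local is down.
        return self.local if self.local_up else self.remote

class RoundRobin:
    def __init__(self, paths):
        self._cycle = cycle(paths)

    def pick(self):
        # Hand each request to the next path in turn.
        return next(self._cycle)

fo = FailoverOnly("local-loopback", "partner-iscsi")
print([fo.pick() for _ in range(3)])   # stays local while the path is up
fo.local_up = False                    # simulate local node failure
print(fo.pick())                       # now fails over to the partner

rr = RoundRobin(["node1-iscsi", "node2-iscsi"])
print([rr.pick() for _ in range(4)])   # alternates across both nodes
```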
D9BDCEA2CE
Posts: 25
Joined: Tue Dec 09, 2014 1:51 pm

Mon Feb 16, 2015 10:50 am

So they're mostly working on local physical disks - it's beginning to make sense, yes. I'm almost feeling brave enough to move out of the test environment... :)
Attachments
Starwind.png (178.94 KiB) - My interpretation so far...
D9BDCEA2CE
Posts: 25
Joined: Tue Dec 09, 2014 1:51 pm

Mon Feb 16, 2015 3:50 pm

I seem to have everything working now apart from one warning in the validation report which seems, from what I'm reading, to be safe to go ahead with. I suspect I'll need one more direct link between the two nodes for Live Migration traffic.

Warning:

Node Server2.domain.com and Node Server1.domain.com are connected by one or more communication paths that use disabled networks. These paths will not be used for cluster communication and will be ignored. This is because interfaces on these networks are connected to an iSCSI target. Consider adding additional networks to the cluster, or change the role of one or more cluster networks after the cluster has been created to ensure redundancy of cluster communication.

The communication path between network interface Server1.domain.com - 110-210 and network interface Server2.domain.com - 110-210 is on a disabled network.
The communication path between network interface Server1.domain.com - 111-211 and network interface Server2.domain.com - 111-211 is on a disabled network.
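In case it's useful context for that warning: Failover Clustering automatically disables cluster communication on networks whose interfaces carry iSCSI traffic, which is exactly what the wizard is reporting. A small Python sketch of that classification (the role numbers follow Failover Clustering's Role property convention, 0 = None, 1 = Cluster only, 3 = Cluster and Client; the network names here are invented):

```python
# Sketch of how networks end up classified for cluster communication.
# Networks serving iSCSI targets get role 0 ("None") and are ignored,
# hence the validation warning about disabled networks.
ROLE_NAMES = {0: "None (disabled)", 1: "Cluster only", 3: "Cluster and Client"}

networks = {
    "Management": 3,   # client access + cluster heartbeat
    "Sync":       1,   # sync traffic, cluster-internal only
    "iSCSI-1":    0,   # iSCSI data paths: excluded from cluster comms
    "iSCSI-2":    0,
}

cluster_paths = [n for n, role in networks.items() if role in (1, 3)]
ignored = [n for n, role in networks.items() if role == 0]
print("used for cluster communication:", cluster_paths)
print("ignored (disabled for cluster):", ignored)
assert len(cluster_paths) >= 2, "add another network for redundancy"
```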
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Fri Feb 20, 2015 12:43 pm

Well, that is just a warning, so the cluster will live with it. Good luck with moving into production! :)
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
D9BDCEA2CE
Posts: 25
Joined: Tue Dec 09, 2014 1:51 pm

Thu Mar 05, 2015 3:52 pm

Failover Tests work exactly as the instructions say they should (https://technet.microsoft.com/en-gb/lib ... 63389.aspx) and everything appears to be happy...

Thanks for your help folks.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Fri Mar 06, 2015 3:51 pm

Thanks for the heads up!
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com