Migrating current to new

bkdilse
Posts: 52
Joined: Mon Sep 10, 2018 6:14 am

Wed Sep 26, 2018 2:09 pm

Hi,

I have been trialling StarWind VSAN Free for nearly a month now, and have learnt a lot through trial and error, articles, and good advice from support.

I now have an NFR license, as this is my dev environment. I know how I want this to work for me, so I just wanted to run this migration plan by you, in case there is anything I might have missed that could cause a massive headache:

Current setup on each of the 2 Nodes:
1 x 6TB (no RAID)
1 x 6TB (no RAID)
Multiple devices setup (1TB - 2TB in size, total 6) with 10% L1 Cache in write-back mode.

New Setup on each of the 2 Nodes:
4 x 6TB (RAID10)
Multiple devices setup (2TB in size, total 4 to start with) with 10% L1 Cache in write-back mode.
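As a quick sanity check of the new layout (figures taken from the post above; RAID10 mirrors drive pairs and stripes across them, so usable capacity is half the raw total — a minimal illustrative sketch, not StarWind tooling):

```python
# Rough capacity check for the new per-node layout (illustrative only).

DRIVE_TB = 6
DRIVES = 4

# RAID10 = striped mirrors: usable capacity is half the raw total.
raw_tb = DRIVE_TB * DRIVES     # 24 TB raw
usable_tb = raw_tb // 2        # 12 TB usable per node

# Planned StarWind devices: 4 x 2 TB to start with.
devices = 4
device_tb = 2
allocated_tb = devices * device_tb

print(f"Usable: {usable_tb} TB, allocated: {allocated_tb} TB, "
      f"headroom: {usable_tb - allocated_tb} TB")
```

So the 4 x 2 TB devices leave 4 TB of headroom per node for adding devices later.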

Migration Plan:
1. Confirm all devices are in sync.
2. Stop the StarWind service on Node 1.
3. Shut down Node 1, install the additional drives, and set them up as RAID10.
4. Power up Node 1 and create new devices, targets, etc. /// or reverse the replication (see Q3 below)
5. Attach the Hyper-V/ESXi hosts to the new targets, and perform VM Storage Migrations. /// may not be required (see Q3 below)
6. Shut down Node 2, install the additional drives, and set them up as RAID10.
7. Power up Node 2, set up the replica from Node 1, and allow replication to complete.
8. Attach the Hyper-V/ESXi hosts to the new targets. /// may not be required (see Q3 below)
9. Confirm all works OK.

Questions:
1. Anything I've missed here?
2. I have 32 GB RAM in each node; would it be a good idea to go for 20% L1 cache?
3. Should I reverse the replication once Node 1 is ready with the new RAID10, then do the same again when I've configured RAID10 on Node 2? That would save me migrating VMs.
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Fri Sep 28, 2018 4:55 pm

Hi bkdilse,
Current setup on each of the 2 Nodes:
1 x 6TB (no RAID)
1 x 6TB (no RAID)
Multiple devices setup (1TB - 2TB in size, total 6) with 10% L1 Cache in write-back mode.

New Setup on each of the 2 Nodes:
4 x 6TB (RAID10)
Multiple devices setup (2TB in size, total 4 to start with) with 10% L1 Cache in write-back mode.
I guess you made a mistake in the size of the L1 (RAM) cache. The requirement for L1 is 1 GB of RAM cache per 1 TB of storage.
In general, the steps are correct.
Since you are planning to create the RAID array on additional drives, you can add the drives to the nodes, create the RAID arrays, create small StarWind devices, extend them to the size needed, attach the Hyper-V/ESXi hosts to the new targets, and perform VM Storage Migrations.
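The gap between the two sizing approaches is worth spelling out. A quick arithmetic sketch (device counts and sizes from the thread; the 1 GB per 1 TB rule is the one quoted above — this is illustration, not StarWind tooling):

```python
# Compare "10% of device size as L1 cache" against the recommended
# sizing of 1 GB of RAM cache per 1 TB of storage served.

NODE_RAM_GB = 32   # RAM per node, from the thread
devices = 4        # planned StarWind devices per node
device_tb = 2      # size of each device in TB

# 10% of total device capacity, taken as RAM cache, is far too large:
ten_percent_gb = devices * device_tb * 1024 * 0.10   # ~819 GB

# Recommended rule: 1 GB of L1 cache per 1 TB of storage.
recommended_gb = devices * device_tb * 1             # 8 GB

print(f"10% rule: {ten_percent_gb:.0f} GB (exceeds {NODE_RAM_GB} GB RAM)")
print(f"1 GB/TB rule: {recommended_gb} GB (fits comfortably)")
```

In other words, 10% of 8 TB would demand roughly 819 GB of RAM, while the 1 GB/TB rule needs only 8 GB of the 32 GB available.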
bkdilse
Posts: 52
Joined: Mon Sep 10, 2018 6:14 am

Sat Sep 29, 2018 8:23 am

Yes, the % is wrong. I'm going with the recommended 1 GB per 1 TB.
Last edited by bkdilse on Sat Sep 29, 2018 2:57 pm, edited 1 time in total.
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Sat Sep 29, 2018 2:52 pm

Thank you for the clarification.
bkdilse
Posts: 52
Joined: Mon Sep 10, 2018 6:14 am

Thu Oct 04, 2018 8:54 am

Just an update, this is going well.

Rebuilt 1st Node on RAID10, and migrated VMs to new devices (as I wanted to move from 3 devices to 2).
Rebuilt 2nd Node on RAID10; currently replicating.
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Thu Oct 04, 2018 9:09 am

Thank you for keeping the community updated.