Building W2008R2SP1 Hyper-V HA cluster (1.)

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

petr
Posts: 13
Joined: Sun Feb 27, 2011 2:55 pm

Sun Apr 24, 2011 9:09 am

Hi,

As I wrote, we are planning an HA cluster running on Hyper-V. Currently we have (or are buying):

(per storage node)
+ 16x 1 TB WD RE
+ Adaptec 6805
+ 32 GB ECC DDR3 RAM
+ Intel Xeon X3440 (quad-core, 8 threads with HT)

We currently have six 32 GB RAM Hyper-V nodes with no HA. I want to move them to HA, and here are my questions.

1.) Every non-HA server currently has a RAID 10 array (4x 1 TB HDD). On the new storage server I would like a 16-drive RAID 10, which will be faster. But what are the recommendations for creating vdisks? Is it better to have one large (and faster) 16-drive RAID 10 array, or several smaller physical RAID 10 arrays with vdisks on them? I think a 16-drive RAID 10 array divided into small vdisks, one per server, is better. Or, for Live Migration to work, do I need to create one big 4 TB vdisk on the 16-drive RAID 10 array?

2.) How fast should the sync channel be? Is a 1 Gbps link fast enough?

3.) More questions will be posted very soon O:-)
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Wed Apr 27, 2011 12:53 pm

1) You can use one large array to store the *.img and other StarWind-related files, but we recommend creating a few smaller targets for client connections instead of one large target.
2) Actually, you can calculate it yourself: with 4 TB of storage and a 1 Gbps link, a full synchronization under ideal conditions will take 8 hours 53 minutes 20 seconds (simply 4 TB / 1 Gbps). I think 10 Gbps NICs will be more appropriate for you ;)
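The estimate above can be reproduced with a small sketch. It assumes the decimal units used in the post (1 TB = 10^12 bytes, 1 Gbps = 10^9 bits/s) and an ideal link running at full line rate with no protocol overhead:

```python
def full_sync_time(capacity_tb: float, link_gbps: float) -> float:
    """Ideal full-sync duration in seconds: bits to copy / link speed."""
    bits_to_copy = capacity_tb * 1e12 * 8     # storage size in bits
    return bits_to_copy / (link_gbps * 1e9)   # seconds at line rate

def fmt(seconds: float) -> str:
    """Format seconds as 'Hh Mm Ss'."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h}h {m}m {s}s"

print(fmt(full_sync_time(4, 1)))    # 4 TB over 1 Gbps  -> 8h 53m 20s
print(fmt(full_sync_time(4, 10)))   # 4 TB over 10 Gbps -> 0h 53m 20s
```

Real-world sync will be somewhat slower than this ideal figure because of TCP/iSCSI overhead and disk throughput limits.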
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
hixont
Posts: 25
Joined: Fri Jun 25, 2010 9:12 pm

Wed Apr 27, 2011 5:36 pm

The configuration I use in my data center is two ProLiant servers with 144 GB of RAM and three RAID volumes (mandated by my RAID controller setup). My largest partition is RAID 50 (25 drives and three spares), which gives me 9 TB of space. The servers connect to the production network using 1 Gb connections (the fastest my switches will support), but my synchronization channel is a direct-connected pair of 10 Gb ports configured as a NIC team.

The Hyper-V cluster consists of four 64 GB RAM Hyper-V hosts that use three 2 TB drive images in a CSV configuration. The three 2 TB HA iSCSI targets are all on the 9 TB RAID 50 partition.

It's not the most efficient configuration in terms of speed, but performance is not the primary consideration in my environment. The smaller iSCSI target images were a deliberate attempt to minimize the time required to complete a Full Sync.
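The trade-off described above can be sketched numerically: when a target drops out of sync, only that target's image must be recopied, not the whole volume. A rough comparison (assuming decimal units, an ideal link, and 20 Gbps as a stand-in for the teamed pair of 10 Gb ports):

```python
def sync_seconds(size_tb: float, link_gbps: float) -> float:
    """Ideal time to recopy an image of the given size over the sync link."""
    return size_tb * 1e12 * 8 / (link_gbps * 1e9)

# One 2 TB target vs. the entire 9 TB RAID 50 volume
for size in (2, 9):
    s = sync_seconds(size, 20)
    print(f"{size} TB over 20 Gbps: {s / 60:.1f} minutes")
# 2 TB resyncs in ~13 minutes; a single 9 TB target would take ~60 minutes.
```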
Last edited by hixont on Thu Apr 28, 2011 5:01 pm, edited 1 time in total.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Apr 27, 2011 8:41 pm

Such a configuration should just scream...
hixont wrote:The configuration I use in my data center is two ProLiant servers with 144 GB of RAM and three RAID volumes (mandated by my RAID controller setup). My largest partition is RAID 50 (25 drives and three spares), which gives me 9 TB of space. The servers connect to the production network using 1 Gb connections (the fastest my switches will support), but my synchronization channel is a direct-connected pair of 10 Gb ports configured as a NIC team.

The Hyper-V cluster consists of four 64 GB RAM Hyper-V hosts that use three 2 TB drive images in a CSV configuration. The three 2 TB HA iSCSI targets are all on the 9 TB RAID 50 partition.

It's not the most efficient configuration in terms of speed, but performance is not the primary consideration in my environment. The smaller iSCSI target images were a deliberate attempt to minimize the time required to complete a Full Sync.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

hixont
Posts: 25
Joined: Fri Jun 25, 2010 9:12 pm

Thu Apr 28, 2011 5:00 pm

Speedwise I, and more importantly my user community, have no complaints. More important to me was the SAN staying online and in sync during the production switch stack reconfig that didn't go as smoothly as expected this morning. The Hyper-V cluster hosts noticed the outage and brought VMs down automatically, but recovered once the stack came back online. The StarWind SAN servers didn't even blink.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Thu May 05, 2011 8:56 pm

Honestly speaking, SAN software should be reliable **and** fast at the same time, as investing big $$$ in doubled/tripled hardware without doubling/tripling performance does not sound like a good investment.
hixont wrote:Speedwise I, and more importantly my user community, have no complaints. More important to me was the SAN staying online and in sync during the production switch stack reconfig that didn't go as smoothly as expected this morning. The Hyper-V cluster hosts noticed the outage and brought VMs down automatically, but recovered once the stack came back online. The StarWind SAN servers didn't even blink.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
