New Guy Here

Software-based VM-centric and flash-friendly VM storage + free version

tuscani
Posts: 2
Joined: Thu Nov 29, 2012 7:28 pm

Thu Nov 29, 2012 7:42 pm

Hey all, what a great community! I am new to the site and am looking to evaluate the product. A little about us.. we are a hosting provider that is looking to switch from VMware to Hyper-V 3.0 running on "Native SAN for Hyper-V".

Below is the hardware we have in-place and to work with.. I could buy additional drives if needed. But if I have to buy more servers to run StarWind software the cost savings starts to dwindle. You will note the local storage solution is enticing since we have thousands of GBs of wasted space currently on our M3s.. each has a 2TB RAID5 just for an ESXi install.

PROD:

Qty 5 - IBM 3650 M3 System X (Dual x5650 hexacore procs, 48gb RAM, 2TB Local RAID5)
Qty 75 VMs on ESXi 4.1 (moving to Server 2012 Hyper-V 3.0)
9TB RAID5 IBM 2GB FC SAN (we are only using 6TB)

DEV:

Qty 4 - IBM 3650 System X (Dual E5420 quadcore procs, 24gb RAM, 600GB Local RAID5)
Qty 12 VMs on ESXi 4.1 (moving to Server 2012 Hyper-V 3.0)
2TB FreeNas running on a 5th 3650

Each server has dual port embedded and a single quad port card. None of which is 10GbE.

The network itself is fairly simple.. we have two primary Cisco 2960-S switches in our data center.. my concern with iSCSI however is that these are technically access layer switches and not distribution layer so I wouldn't want to over saturate with iSCSI traffic.

My questions are..

1. Do you see StarWind as a good solution? Why or why not?
2. Is 10GbE recommended and/or required? If so, where exactly? Meaning on my HA cluster that will host the VMs, or just the storage servers? Both?
3. Again, I am a little worried about the switches; however, I would likely put the SAN in a four- or six-port EtherChannel on its own VLAN. Or are dedicated switches recommended?

Thanks much!
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Mon Dec 03, 2012 9:25 am

Welcome aboard, Sir! 8)
1. Do you see StarWind as a good solution? Why or why not?
StarWind runs on your existing hardware, so you do not need to buy additional boxes. The only thing to check is that your hardware meets the system requirements: the machines that will run both Hyper-V and StarWind need enough resources for the two together. The idea is simple here - you cut down the quantity of boxes while increasing quality, which results in better flexibility and ROI.
2. Is 10GbE recommended and/or required? If so, where exactly? Meaning on my HA cluster that will host the VMs, or just the storage servers? Both?
It is recommended; the minimum requirement is 1 Gbps links. Basically, you want 10 Gbps on all datalinks on those servers - the synchronization channel, iSCSI traffic, and Hyper-V traffic. (Please note that we recommend not dedicating a NIC to the heartbeat, but instead putting it on all available/connected datalinks.) The reason is pretty simple and obvious: the more bandwidth you have, the more performance you will get.
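To see why the sync channel in particular benefits from 10GbE, a quick back-of-envelope calculation helps: in a synchronously mirrored HA setup, every write must also cross the sync link before it is acknowledged, so mirrored-write throughput is roughly capped by the slower of the local array and that link. The sketch below uses illustrative numbers (the 400 MB/s array speed and 95% link efficiency are assumptions, not StarWind figures):

```python
# Back-of-envelope: mirrored (HA) writes are capped by the slower of the
# local array and the synchronization link, since every write must cross
# the sync channel before it is acknowledged.

def link_throughput_mb_s(gbps, efficiency=0.95):
    """Approximate usable payload of an Ethernet link in MB/s.
    `efficiency` is an illustrative allowance for protocol overhead."""
    return gbps * 1000 / 8 * efficiency

def ha_write_cap_mb_s(array_mb_s, sync_link_gbps):
    """Effective mirrored-write ceiling: min(local array, sync link)."""
    return min(array_mb_s, link_throughput_mb_s(sync_link_gbps))

array = 400.0  # hypothetical local RAID5 sequential write speed, MB/s

for gbps in (1, 2, 10):
    cap = ha_write_cap_mb_s(array, gbps)
    print(f"{gbps:>2} Gbps sync link -> ~{cap:.0f} MB/s mirrored writes")
```

With a single 1 Gbps sync link the array sits mostly idle during mirrored writes (~119 MB/s ceiling), while a 10 Gbps link lets the array itself become the limit - which is the point of the 10GbE recommendation above.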
3. Again, I am a little worried about the switches; however, I would likely put the SAN in a four- or six-port EtherChannel on its own VLAN. Or are dedicated switches recommended?
In the Native SAN scenario you can avoid having switches at all. I also recently participated in a great discussion about running iSCSI traffic on VLANs, which I hope you will find helpful:
http://goo.gl/4R2BP
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com