Next generation Lab Cluster - Design Considerations

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

qwertz
Posts: 36
Joined: Wed Dec 12, 2012 3:47 pm

Wed Mar 27, 2019 10:59 am

Hi there!
It's been a while. :oops: I've been running a hyperconverged cluster based on 2012 R2 and StarWind for the last 5 years.
(And it's still up!)
First of all: Thanks a lot for the great software!

Now I am building a new lab and I am looking for some design tips :)
My old lab lacks the RAM and IOPS for the VM workload I'm putting onto it.
The idea is to test S2D based on Server 2019 as well as Starwind and compare the performance / efficiency.

The hardware I've put together for the new lab looks like the following:
- 1x ES-16-XG, 10G Switch (only one, yes)
- 1x 24-Port D-Link Gigabit Switch, used as Access-Switch for my clients
- 2x Dell R720, each with the following hardware:
--- 2x Intel Xeon E5-2650, each with 8 physical cores (16 logical)
--- 256 GB of RAM
--- 1x Intel X710 dual-port 10GbE NIC
--- LSI SAS HBA 9207-8i (I also have the original RAID cards from those machines in the drawer; I think they are H710 Mini)
--- 2x mSATA SSD in a RAID 1 for the OS
--- 2x Samsung SSD 983 DCT 960GB NVMe, sitting on U.2-to-PCIe adapter cards
--- 4x DC S4600 240GB SATA SSDs
--- 8x Seagate Constellation.2 1TB SAS 6Gb/s disks

In my current cluster I am using tiered Storage Spaces for my iSCSI target volumes, because I like the idea of having one big pool with all the available capacity from the SSDs and HDDs and letting it manage placement automatically (based on the heat map: frequently used data on SSD, less frequently used data on HDD).
From what I have seen, this is possible with S2D on Server 2019 too; in addition, there is a caching feature that would let me use the NVMe drives as a cache for both the SSDs and the HDDs.
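For reference, the tiered Space under my current iSCSI target volumes is created roughly like this; the pool/tier names, sizes, and drive letter below are only placeholders, not exactly my layout:

Code:
    # Pool all eligible SSDs and HDDs into one pool (classic Storage Spaces, not S2D)
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "LabPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

    # One tier per media type
    $ssdTier = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "SSDTier" -MediaType SSD
    $hddTier = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "HDDTier" -MediaType HDD

    # Tiered NTFS volume for the iSCSI target image files
    # (resiliency/column settings omitted here for brevity)
    New-Volume -StoragePoolFriendlyName "LabPool" -FriendlyName "iSCSI-Vol01" -DriveLetter D `
        -FileSystem NTFS -StorageTiers $ssdTier, $hddTier -StorageTierSizes 200GB, 2TB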

- Is there something similar for StarWind in the meantime?
- What setup would you recommend for a new StarWind cluster, taking into account the kind of storage/hardware I have? (E.g., build a RAID to store the iSCSI target volumes, or create another tiered Storage Space.)
- Would you recommend investing in some RDMA-capable NICs, like the ConnectX-3, with the hardware I have? (Is the possible performance gain worth the money? If it is only about bandwidth, I have a bunch of old Mellanox IB cards with 20 Gbps bandwidth.)

I am looking forward to hearing from you; let me know what you think :)

Have a nice day
PoSaP
Posts: 49
Joined: Mon Feb 29, 2016 10:42 am

Fri Mar 29, 2019 10:37 am

Hi!
I found an interesting review here:
[ ... ]
I guess you need to read this :)
Maybe I am mistaken, but as far as I know, Mellanox does not support iSER on Windows Server 2019 yet.

Edit: anton (staff) removed dead link.
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Fri Mar 29, 2019 12:06 pm

Hi qwertz,
Yes, you can use RDMA-capable cards starting from the Mellanox ConnectX-3 Pro; they can give some boost. But yes, there can be issues on Windows Server 2019.
You can use something like what is explained in this guide.
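Before benchmarking, it is also worth verifying that RDMA is actually enabled and negotiated on the adapters; the adapter name below is only an example:

Code:
    # List adapters and their RDMA state
    Get-NetAdapterRdma

    # Enable RDMA on a specific port if it is disabled (name is an example)
    Enable-NetAdapterRdma -Name "SLOT 4 Port 1"

    # For the S2D comparison: check that SMB Direct sees the interfaces as RDMA-capable
    Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable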
qwertz
Posts: 36
Joined: Wed Dec 12, 2012 3:47 pm

Fri Mar 29, 2019 1:03 pm

Hey PoSaP!
Thanks for your reply. I actually know that article already.
I did a lot of research on that topic, and I still think StarWind is a "competitor" to S2D that has to be taken very seriously.
If you read more posts in this blog you will discover that StarWind is able to achieve at least the same number of IOPS, or more, while using fewer VMs. Also, I think the reads and writes on the NVMe drives will be much lower than with S2D, resulting in a longer lifespan for those drives.
The only thing I couldn't find so far is a good way to implement that "hybrid" architecture Spaces offers.
I know I could use the NVMe storage as a cache for all the devices and put the iSCSI volumes on either SSD or HDD... But that's kind of static, and I would need to decide which VM resides where, while I believe the S2D approach is more dynamic: blocks that are used frequently get moved to the fast tier and cold blocks to the slow tier.
I am really thinking about giving StarWind a test where I store the iSCSI target volume files on a tiered Storage Space (not using S2D!).
I have been doing this for 5 years in my current lab environment. Let's see how far I get with testing this weekend.
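If the heat map does not catch up for a particular target volume file, I could still pin it to a tier by hand; the file path and tier name below are just examples of what I have in mind:

Code:
    # Pin a StarWind image file to the SSD tier of the tiered Space
    Set-FileStorageTier -FilePath "D:\StarWind\CSV1.img" -DesiredStorageTierFriendlyName "SSDTier"

    # Show pinned files on the volume and whether they have been fully moved yet
    Get-FileStorageTier -VolumeDriveLetter D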
BTW: I found two very cheap used Oracle-branded ConnectX-3 cards; I think I'll get them later just to give it a try.
Kind regards
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Mon Apr 01, 2019 12:46 pm

Hi!
Yes, you can use StarWind on tiered storage:
https://www.starwindsoftware.com/resour ... erver-2016
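Just keep in mind that with classic tiered Storage Spaces the data is moved between the tiers by the scheduled optimization job, not in real time like the S2D cache. After the StarWind image files are in place, you can also trigger it manually; the drive letter below is only an example:

Code:
    # Run the storage tier optimization for the volume holding the StarWind image files
    Optimize-Volume -DriveLetter D -TierOptimize -Verbose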
anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Fri Jun 12, 2020 1:24 pm

No, they don't. But they do support NVMe-oF with the newer versions of their ConnectX-5 NICs. NVMe-oF is the future and iSER is history.
PoSaP wrote:Hi!
I found an interesting review here:
[ ... ]
I guess you need to read this :)
Maybe I am mistaken, but as far as I know, Mellanox does not support iSER on Windows Server 2019 yet.

Edit: anton (staff) removed dead link.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Foxbat_25
Posts: 3
Joined: Tue Jun 30, 2020 2:40 pm

Tue Jun 30, 2020 7:17 pm

But to answer the hardware question: your configuration is solid; it will be perfectly able to handle what you demand of it!
anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Jul 01, 2020 10:34 am

Looks good. We'd love to see some final numbers.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
