Designing a new system. Need some input :)

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

oxyi
Posts: 67
Joined: Tue Dec 14, 2010 8:30 pm

Tue Nov 17, 2015 10:43 pm

Hi there,

I have two old starwind enclosures that I am going to upgrade.

I am thinking of putting one Intel X710 (4 x 10 GbE ports) into each server: two ports for direct-connect sync, and two ports for iSCSI MPIO, or one for iSCSI and one for WAN.

Basically, I have two Areca 1680xl RAID cards that were powering 48 x 2 TB SATA disks before.

Now I want to free up 8 drive bays and swap in SSDs; I am thinking about the Samsung 850 Pro 1 TB.

I am also thinking about installing the new Samsung 950 Pro 512 GB M.2 as L2 cache for the SATA drives.

What do you guys think, any suggestions? Is the Samsung 850 Pro good enough? Should I do RAID 10 on them, or JBOD? If JBOD, I might go up to 2 TB each, but drop down to 4 drives.

What do you think is the best configuration for me? Thanks.
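To put rough numbers on the two options, here's a quick back-of-the-envelope calculation (drive counts and sizes are the ones from my post above; this is just arithmetic, not a recommendation):

```python
# Rough usable-capacity comparison of the two SSD layouts being considered.
# RAID 10 mirrors pairs, so usable space is half the raw total; JBOD exposes
# everything but has no redundancy. Figures are illustrative only.

def usable_tb(drives, size_tb, layout):
    raw = drives * size_tb
    if layout == "raid10":
        return raw / 2          # half the raw space lost to mirroring
    return raw                  # jbod: all space usable, no redundancy

raid10 = usable_tb(8, 1, "raid10")   # 8 x 1 TB 850 Pro in RAID 10
jbod   = usable_tb(4, 2, "jbod")     # 4 x 2 TB in JBOD

print(f"RAID 10: {raid10:.0f} TB usable, survives a drive failure")
print(f"JBOD:    {jbod:.0f} TB usable, no redundancy")
```

So the two layouts land on the same raw spend but half the protected capacity, which is the trade-off I'm weighing.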
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Wed Nov 18, 2015 10:59 am

RAID10 if possible to sleep well at night :)

I have 4 x Samsung 850 EVO in RAID10. Using them as an HA device for performance-critical VMs and for pagefiles. Works like a charm.
Tarass (Staff)
Staff
Posts: 113
Joined: Mon Oct 06, 2014 10:40 am

Mon Nov 23, 2015 11:01 am

More information about planning your system according to our best practices can be found here:
https://www.starwindsoftware.com/starwi ... ces-manual
Senior Technical Support Engineer
StarWind Software Inc.
oxyi
Posts: 67
Joined: Tue Dec 14, 2010 8:30 pm

Sun Dec 20, 2015 5:12 am

RAID 10 eats up so much space... but I suppose I can see..

Tarass, thanks for the link, but it's so old, 2013 :(
Tarass (Staff)
Staff
Posts: 113
Joined: Mon Oct 06, 2014 10:40 am

Mon Dec 21, 2015 4:08 pm

Pretty old, but still valid ;-)

We will update it soon.
impaladoug
Posts: 15
Joined: Sun May 24, 2015 3:24 am
Location: Farmington, NM

Sat Jan 16, 2016 7:01 am

If you are only using the SSD as a Tier 2 cache, is there actually a benefit to having RAID? I realize that RAID will help reliability and RAID10 is generally regarded to help performance. But in the event that you don't have the SSDs in RAID and are using them as a Tier 2 cache, what is the worst-case scenario if 1 SSD fails? Will you actually lose data? I guess that depends on whether you are using them as write-back cache (read & write cache) or write-through (read-only cache), correct? If the former, then I presume you would potentially lose data (i.e. encounter data corruption) if the SSD fails. But if the latter, you wouldn't actually lose anything, correct?
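To make my question concrete, here's a toy sketch of how I *picture* the two modes behaving on an SSD failure. This is purely my own mental model as a newbie, not StarWind's actual implementation:

```python
# Toy model of write-through vs. write-back L2 caching, only to illustrate
# the failure question above. Not StarWind's real implementation.

class Cache:
    def __init__(self, mode):
        self.mode = mode            # "write-through" or "write-back"
        self.cache = {}             # contents of the SSD cache
        self.backing = {}           # the spinning-disk array

    def write(self, key, value):
        self.cache[key] = value
        if self.mode == "write-through":
            self.backing[key] = value   # persisted to disk immediately
        # write-back: backing store is only updated later, on flush

    def ssd_fails(self):
        self.cache = {}             # everything on the SSD is gone

# Write-through: losing the SSD loses nothing, the disks are current.
wt = Cache("write-through")
wt.write("block0", "data")
wt.ssd_fails()
assert wt.backing.get("block0") == "data"       # still safe

# Write-back: a dirty block that was never flushed dies with the SSD.
wb = Cache("write-back")
wb.write("block0", "data")
wb.ssd_fails()
assert wb.backing.get("block0") is None         # data lost
```

If that model is roughly right, then an unmirrored SSD is only dangerous in write-back mode. Please correct me if I've got it backwards.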

I'm a total newbie to StarWind and am making some guesses and assumptions here. I would really appreciate it if someone with more experience would answer and explain whether my comments are correct or incorrect. This will certainly help me gain a greater understanding of StarWind.

Thanks!


...by the way. 24 x 2TB is a LOT of storage. How do you have that laid out (in terms of RAID)? Also, what kind of performance are you getting? I'm very interested in this. Thanks!
-
CCNA, MCSA, Security+, Network+, Linux+, Server+, A+, MCP, VSP
lxlive
Posts: 4
Joined: Thu Dec 17, 2015 9:54 am

Mon Jan 18, 2016 9:24 pm

Hello impaladoug,

If you are using L2 cache in write-back mode, you might lose the data stored in the cache, so I don't recommend using L2 cache in write-back mode. Write-through mode will work very well for you: it will boost your reads, and in case of a failure you won't lose data. As for RAID, I think I would use it, for data redundancy.
Tarass (Staff)
Staff
Posts: 113
Joined: Mon Oct 06, 2014 10:40 am

Wed Jan 27, 2016 2:39 pm

Valid points, lxlive, thank you for the contribution!
impaladoug
Posts: 15
Joined: Sun May 24, 2015 3:24 am
Location: Farmington, NM

Wed Jan 27, 2016 3:16 pm

So it is less risky to use L2 cache in write-back mode if your SSDs are mirrored in RAID 1. Correct?
epalombizio
Posts: 67
Joined: Wed Oct 27, 2010 3:40 pm

Wed Jan 27, 2016 10:16 pm

Samsung 840s here.. No problems whatsoever.

I use LSI's Cachecade, so I have no input regarding Starwind's L2 cache.

Best of luck!
impaladoug
Posts: 15
Joined: Sun May 24, 2015 3:24 am
Location: Farmington, NM

Wed Jan 27, 2016 10:27 pm

epalombizio wrote: Samsung 840s here.. No problems whatsoever. [...]
My experience with LSI's Cachecade is that it is strictly write-through (read-only cache, as opposed to read/write). I appreciate your response, but I am really seeking this info regarding the L2 (SSD) cache provided by StarWind VirtualSAN.

However, I am very curious as to why you chose to let the LSI controller do the L2 cache, rather than StarWind VirtualSAN. Any benefits you see to one over the other?

Thanks!
-
Doug
epalombizio
Posts: 67
Joined: Wed Oct 27, 2010 3:40 pm

Wed Jan 27, 2016 10:35 pm

L2 cache in Starwind is defined on a per-LUN basis. It's my understanding that they're researching the ability to have an L2 cache that all devices can share. Not a deal breaker for most people, but I don't have a lot of space on my SSDs to carve out L2 cache for individual devices. I'm waiting until I have the ability to share the cache so I can evaluate it against Cachecade.
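To show why the carving bothers me, here's the arithmetic on a small SSD (device names and the 512 GB size are made-up examples, not my actual layout):

```python
# Why per-LUN L2 cache is awkward on a small SSD: the space must be split
# up front across devices, so a busy LUN can't borrow idle cache from a
# quiet one. All numbers below are hypothetical.

ssd_gb = 512
luns = ["sql-data", "sql-logs", "vdi", "fileshare"]   # hypothetical devices

per_lun = ssd_gb // len(luns)
print(f"Per-LUN carving: each device gets a fixed {per_lun} GB slice")

# With a shared cache, the one hot LUN could use most of the 512 GB on
# demand, which is what would make it comparable to controller-level
# caching like Cachecade.
```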

Cachecade can be used in write-back mode as well as write-through. I haven't performed extensive testing to ensure that write-back mode is working as I would expect.

I'm building out a new converged cluster in a week or so, so I'll have the equipment to test with before moving it into Production.

Best of luck!
impaladoug
Posts: 15
Joined: Sun May 24, 2015 3:24 am
Location: Farmington, NM

Wed Jan 27, 2016 10:48 pm

epalombizio wrote: L2 cache in Starwind is defined on a per-LUN basis. [...]
Makes sense. Thanks for sharing. FYI, my only experience with Cachecade has been on Dell PowerEdge servers (the PERC is a rebranded LSI), and on those Dell doesn't allow write-back. ;-)

Understanding the per-LUN cache configuration, your decision to run it on the controller makes perfect sense to me.

Thanks for sharing!
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Wed Feb 03, 2016 4:22 pm

I have Cachecade :-) It works. In write-back.

These servers are not highly loaded, though, just a small branch office with 10-15 virtual machines. Therefore, I didn't do any tests on them.
dwright1542
Posts: 19
Joined: Thu May 07, 2015 1:58 am

Fri Feb 05, 2016 9:50 pm

I still don't understand how the L2 cache in WB mode is any more susceptible to data corruption than the WB RAM cache on a regular store. If they are both mirrored on both nodes, where's the issue? Maybe one of the SW guys can explain that, since I'm still hoping I can use these FusionIO cards to help with write issues on RAID6 at some point.
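Here's the way I picture it, as a toy sketch. To be clear, this is just my assumption about how a mirrored write-back cache *would* behave, not how StarWind actually implements it:

```python
# Toy two-node model: if write-back cache contents are mirrored on both
# nodes before the write is acknowledged, losing one node's cache doesn't
# lose data, because the partner can still flush its copy. Illustrative only.

class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}     # unflushed write-back data
        self.disk = {}      # persistent store

def mirrored_write(nodes, key, value):
    for n in nodes:         # acknowledged only after every node caches it
        n.cache[key] = value

def flush(node):
    node.disk.update(node.cache)
    node.cache.clear()

a, b = Node("node-a"), Node("node-b")
mirrored_write([a, b], "vm1-block7", "payload")

a.cache.clear()             # node A dies before flushing
flush(b)                    # the partner still holds the data
assert b.disk["vm1-block7"] == "payload"   # nothing lost
```

If that's accurate, then the extra risk of WB L2 cache versus WB RAM cache would only show up when both nodes fail at once. I'd love a SW staff member to confirm or correct this.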

-D