Designing a new system. Need some input :)

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sat Feb 06, 2016 4:36 pm

There should be a sort of log for flat storage (image files; LSFS is already immune to this issue) to prevent us from not knowing which node has the most recent data. That's referred to as the "amnesia issue". See:

https://docs.oracle.com/cd/E19575-01/82 ... index.html
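The idea of using a persisted log to avoid amnesia can be sketched roughly like this: each node durably records a monotonically increasing sequence number with every acknowledged write, so after a split the node with the highest persisted number is provably the one with the most recent data. The names below are purely illustrative, not StarWind's actual on-disk format.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.last_lsn = 0  # last log sequence number durably persisted

    def persist_write(self, lsn):
        # A write is only acknowledged after its LSN is logged durably.
        self.last_lsn = max(self.last_lsn, lsn)


def most_recent(nodes):
    # Without a persisted LSN, neither node could prove it is current
    # after a partition heals -- that is the "amnesia issue".
    return max(nodes, key=lambda n: n.last_lsn)


a, b = Node("node-a"), Node("node-b")
for lsn in range(1, 6):          # both nodes see writes 1..5
    a.persist_write(lsn)
    b.persist_write(lsn)
a.persist_write(6)               # node-b missed the last write

print(most_recent([a, b]).name)  # -> node-a
```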

You can absolutely use flash cards as L2 cache.

To avoid the read-modify-write issue you have to use LSFS on top of parity and double-parity RAIDs. Caches have only a very small amount of transactional capacity to absorb writes. P.S. The log we'll be adding to flat storage soon should help as well (a sort of ZIL equivalent from ZFS). Again, LSFS is fully log-structured, so it is 100% immune to this issue.
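The read-modify-write penalty mentioned above can be sketched by counting I/Os: updating part of a stripe in place forces the array to read the old data and old parity before writing new data and new parity, while a log-structured layout batches writes into whole stripes whose parity is computed from data already in memory. The functions below are a made-up illustration, not a model of any specific array.

```python
def partial_stripe_update_ios(updated_chunks, parity_disks=1):
    # In-place update: read old data chunks and old parity,
    # then write new data chunks and new parity.
    reads = updated_chunks + parity_disks
    writes = updated_chunks + parity_disks
    return reads, writes


def full_stripe_write_ios(data_disks, parity_disks=1):
    # Log-structured append: a full stripe is written at once, so
    # parity is computed from data in hand -- no reads are needed.
    return 0, data_disks + parity_disks


# Updating one chunk in place on RAID-6 (two parity disks):
print(partial_stripe_update_ios(1, parity_disks=2))  # -> (3, 3)
# Appending a full 4+2 stripe instead:
print(full_stripe_write_ios(4, parity_disks=2))      # -> (0, 6)
```

The point of the comparison: the in-place path pays extra reads on every small write, while the append path pays none, which is why a log (or a fully log-structured layout like LSFS) helps on parity RAID.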
dwright1542 wrote:I still don't understand how the L2 cache in WB is any more susceptible to data corruption than the WB ram cache on a regular store. If they are both mirrored on both nodes, where's the issue? Maybe one of the SW guys can explain that, since I'm still hoping I can use these FusionIO cards to help with write issues on RAID6 at some point.

-D
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

dwright1542
Posts: 19
Joined: Thu May 07, 2015 1:58 am

Wed Feb 10, 2016 2:38 am

anton (staff) wrote:There should be a sort of log for flat storage (image files; LSFS is already immune to this issue) to prevent us from not knowing which node has the most recent data. That's referred to as the "amnesia issue". See:

https://docs.oracle.com/cd/E19575-01/82 ... index.html

You can absolutely use flash cards as L2 cache.

To avoid the read-modify-write issue you have to use LSFS on top of parity and double-parity RAIDs. Caches have only a very small amount of transactional capacity to absorb writes. P.S. The log we'll be adding to flat storage soon should help as well (a sort of ZIL equivalent from ZFS). Again, LSFS is fully log-structured, so it is 100% immune to this issue.
dwright1542 wrote:I still don't understand how the L2 cache in WB is any more susceptible to data corruption than the WB ram cache on a regular store. If they are both mirrored on both nodes, where's the issue? Maybe one of the SW guys can explain that, since I'm still hoping I can use these FusionIO cards to help with write issues on RAID6 at some point.

-D
I thought there was no L2 write cache on SSDs?
I'm still trying to understand the difference between writing to SSDs as L2 cache vs. the RAM cache for drives, or just using the SSDs as a drive themselves. I get the split-brain and amnesia issues (a quorum server would fix that), but isn't that the same with any 2-node setup?
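On the quorum point: the reason a third vote (a witness or quorum server) fixes split-brain in a 2-node setup is that after a partition, only the side still holding a majority of votes may keep serving writes. A minimal sketch of that rule, purely illustrative:

```python
def may_serve_writes(votes_held, total_votes=3):
    # A partition may continue serving writes only if it still holds
    # a strict majority of the cluster's votes.
    return votes_held > total_votes // 2


# Two nodes plus a witness. After a network split:
print(may_serve_writes(2))  # surviving node + witness: majority -> True
print(may_serve_writes(1))  # isolated node: minority -> False
```

With only two votes and no witness, a 1/1 split leaves neither side with a majority, which is exactly why two-node clusters need the extra vote.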
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Wed Feb 10, 2016 11:44 pm

It's no big deal to use SSDs or NVMe cards for L2 caching. Furthermore, write-back mode for the L2 cache was available a couple of StarWind builds ago. In the current build only a write-through (read-optimized) L2 cache is available, but the StarWind guys said somewhere on the forum that this feature is being revamped now and will be available again very soon.
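For anyone unclear on the distinction being discussed: a write-through L2 cache only accelerates reads, because every write still hits backing storage before it is acknowledged, while a write-back cache acknowledges once data lands on flash and destages to backing storage later. A simplified sketch (not StarWind's implementation):

```python
class L2Cache:
    def __init__(self, write_back):
        self.write_back = write_back
        self.cache, self.backing = {}, {}
        self.dirty = set()

    def write(self, key, value):
        self.cache[key] = value
        if self.write_back:
            self.dirty.add(key)        # ack now, flush to backing later
        else:
            self.backing[key] = value  # write-through: backing updated first

    def destage(self):
        # Flush dirty blocks from flash to backing storage.
        for key in self.dirty:
            self.backing[key] = self.cache[key]
        self.dirty.clear()


wt, wb = L2Cache(write_back=False), L2Cache(write_back=True)
wt.write("blk0", b"data")
wb.write("blk0", b"data")
print("blk0" in wt.backing, "blk0" in wb.backing)  # -> True False
wb.destage()
print("blk0" in wb.backing)                        # -> True
```

The gap between "acknowledged" and "destaged" is also why write-back caches need mirroring or a log: until destage completes, the flash copy is the only durable one.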
Tarass (Staff)
Staff
Posts: 113
Joined: Mon Oct 06, 2014 10:40 am

Fri Feb 12, 2016 3:11 pm

Exactly, darklight.

WB on L2 is coming soon. We are all waiting for it :) Stay tuned!
Senior Technical Support Engineer
StarWind Software Inc.