
To cache or not ...

Posted: Mon Mar 03, 2014 6:56 pm
by AKnowles
Here are some interesting statistics using an SSD (disk bridge) ...

Columns: IOPS | Read IOPS | Write IOPS | MBPS | Read MBPS | Write MBPS

Samsung 840 Pro Disk Bridge - RR - Starwind RC1

Max_IOPS
Sum 31452.36 31399.27 53.09 41.88 15.33 26.55

Max_Throughput
Sum 888.76 835.67 53.09 444.39 417.84 26.55

Max_Write_IOPS
Sum 49668.82 0.00 49668.82 50.77 0.00 50.77

Max_Write_Throughput
Sum 916.44 0.00 916.44 458.23 0.00 458.23


Samsung 840 Pro Disk Bridge - RR 4096 Cache- Starwind RC1

Max_IOPS
Sum 23920.89 23867.80 53.09 38.21 11.66 26.55

Max_Throughput
Sum 881.58 828.49 53.09 440.79 414.24 26.55

Max_Write_IOPS
Sum 59136.26 0.00 59136.26 55.41 0.00 55.41

Max_Write_Throughput
Sum 668.36 0.00 668.36 334.18 0.00 334.18

It seems like a memory cache decreases random reads but does help random writes. Sequential transfers seem to be relatively unaffected.

Re: To cache or not ...

Posted: Mon Mar 03, 2014 7:05 pm
by AKnowles
These numbers point out an interesting fact (well, interesting to me anyway): even a single NIC has enough bandwidth to pretty much max out the random-I/O potential of a 6 Gb/s SATA SSD.

Samsung 840 Pro Disk Bridge - Fixed - Starwind RC1

Max_IOPS
Sum 32233.55 32233.55 0.00 15.74 15.74 0.00

Max_Throughput
Sum 216.79 216.79 0.00 108.38 108.38 0.00

Max_Write_IOPS
Sum 47016.15 0.00 47016.15 22.95 0.00 22.95

Max_Write_Throughput
Sum 216.79 216.79 0.00 108.38 108.38 0.00

Re: To cache or not ...

Posted: Mon Mar 03, 2014 9:20 pm
by anton (staff)
The test you used does not map to a "cache hit" situation, so it's pretty much useless: the data is NEVER in the cache, so the cache just adds processing overhead, and that's why random reads are slower. At the same time, the cache DOES buffer writes and also predicts incremental access, which is why writes and sequential reads are faster. You need to test the cache with a real-world scenario, not with a synthetic test where the address range is different all the time.
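A minimal sketch of the effect described above (illustrative only, not StarWind's implementation): under a uniformly random synthetic workload, an LRU cache that is far smaller than the device almost never hits, so every read pays the cache lookup overhead on top of the disk access. All sizes and names here are hypothetical.

```python
import random
from collections import OrderedDict

def hit_ratio(cache_pages, device_pages, n_reads, seed=0):
    """Simulate an LRU page cache and return the fraction of reads it serves."""
    rng = random.Random(seed)
    cache = OrderedDict()          # page -> None, ordered by recency (LRU)
    hits = 0
    for _ in range(n_reads):
        page = rng.randrange(device_pages)   # synthetic test: new random address every time
        if page in cache:
            hits += 1
            cache.move_to_end(page)          # mark as most recently used
        else:
            cache[page] = None
            if len(cache) > cache_pages:
                cache.popitem(last=False)    # evict least-recently-used page
    return hits / n_reads

# A 4096-page cache in front of a 1,000,000-page device: the hit ratio stays
# near cache_size/device_size, i.e. well under 1%, so the cache is pure overhead.
print(hit_ratio(4096, 1_000_000, 50_000))
```

Once the working set fits in the cache (or the test reuses addresses), the same simulation shows the hit ratio climbing toward 100%, which is the "real world scenario" being asked for.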

Re: To cache or not ...

Posted: Tue Mar 04, 2014 4:57 pm
by AKnowles
I do agree that a real-world test would be better. I could run the test repeatedly, or try a different type of test where the data is known to be in the cache, but I was curious how much impact a RAM cache would have versus no cache with an SSD. No RAM cache (well, none of mine anyway) will ever be large enough to encompass the entire SSD, so there will always be cache misses. This is just an extreme example of the cost of a cache miss.

One thing that would help a lot, particularly in the real world, is statistics for the cache(s). Without statistics it will always be difficult to determine how well the cache is working, and almost impossible to size it properly for maximum performance.
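The kind of statistics being asked for can be sketched as a simple hit/miss counter wrapped around cache lookups; the hit ratio it reports is what you would use to size the cache. The class and names below are hypothetical, not part of StarWind.

```python
class CacheStats:
    """Tallies cache lookup outcomes and derives a hit ratio."""

    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit: bool):
        """Record the outcome of one cache lookup."""
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    @property
    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

stats = CacheStats()
for hit in [True, False, True, True]:   # e.g. outcomes of 4 cache lookups
    stats.record(hit)
print(f"hit ratio: {stats.hit_ratio:.0%}")   # hit ratio: 75%
```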

Re: To cache or not ...

Posted: Tue Mar 04, 2014 5:17 pm
by anton (staff)
We're working on a set of Intel Iometer scripts you could run to get fixed "cache hit" ratios (10%, 25%, 50%, 75%, etc.) so real performance can be tested.

The impact of a 100% cache-miss workload is 5-10% compared to a non-cached config.
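One way a fixed "cache hit" ratio can be synthesized (a sketch of the idea behind such scripts, not the actual Iometer scripts): direct the target fraction of reads at a small hot region that fits in the cache, and the rest at the whole device. Every name and number here is illustrative.

```python
import random

def addresses(target_hit_ratio, hot_pages, device_pages, n, seed=0):
    """Yield n page addresses where roughly target_hit_ratio of them
    fall inside a small hot region [0, hot_pages) that a cache can hold."""
    rng = random.Random(seed)
    for _ in range(n):
        if rng.random() < target_hit_ratio:
            # hot region: stays resident in cache -> counts as a hit
            yield rng.randrange(hot_pages)
        else:
            # cold region: spread over the rest of the device -> a miss
            yield hot_pages + rng.randrange(device_pages - hot_pages)

# Roughly 25% of the generated addresses land in the hot region:
hot = sum(a < 1024 for a in addresses(0.25, 1024, 1_000_000, 10_000))
print(hot / 10_000)
```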

A RAM cache does another job besides speeding things up: it absorbs writes and prolongs flash (esp. MLC) life.

We're working on the stats. Thank you for pointing this out!
AKnowles wrote: I do agree that a real-world test would be better. I could run the test repeatedly, or try a different type of test where the data is known to be in the cache, but I was curious how much impact a RAM cache would have versus no cache with an SSD. No RAM cache (well, none of mine anyway) will ever be large enough to encompass the entire SSD, so there will always be cache misses. This is just an extreme example of the cost of a cache miss.

One thing that would help a lot, particularly in the real world, is statistics for the cache(s). Without statistics it will always be difficult to determine how well the cache is working, and almost impossible to size it properly for maximum performance.

Re: To cache or not ...

Posted: Mon Jul 13, 2015 11:32 pm
by Branin
anton (staff) wrote: We're working on a set of Intel Iometer scripts you could run to get fixed "cache hit" ratios (10%, 25%, 50%, 75%, etc.) so real performance can be tested.

The impact of a 100% cache-miss workload is 5-10% compared to a non-cached config.

A RAM cache does another job besides speeding things up: it absorbs writes and prolongs flash (esp. MLC) life.

We're working on the stats. Thank you for pointing this out!
AKnowles wrote: I do agree that a real-world test would be better. I could run the test repeatedly, or try a different type of test where the data is known to be in the cache, but I was curious how much impact a RAM cache would have versus no cache with an SSD. No RAM cache (well, none of mine anyway) will ever be large enough to encompass the entire SSD, so there will always be cache misses. This is just an extreme example of the cost of a cache miss.

One thing that would help a lot, particularly in the real world, is statistics for the cache(s). Without statistics it will always be difficult to determine how well the cache is working, and almost impossible to size it properly for maximum performance.

Just curious if there are statistics available anywhere in the product (or PerfMon) for things such as cache hit ratios, etc., or if that is still on the roadmap.

Thanks.

Branin

Re: To cache or not ...

Posted: Fri Jul 31, 2015 10:45 am
by Alex (staff)
Branin wrote: Just curious if there are statistics available anywhere in the product (or PerfMon) for things such as cache hit ratios, etc., or if that is still on the roadmap.
We are working on this functionality right now; it is planned for release in September.
The counters are:
- Cache usage percent (used cache pages to total cache size).
- Cache hit percent for read operations.
- Ratio of overall device I/O operations to underlying disk operations.
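The three counters above can be sketched from raw tallies like so (the formulas follow the list; the function and field names are made up for illustration, not the product's API):

```python
def cache_counters(used_pages, total_pages, read_hits, reads, device_ios, disk_ios):
    """Derive the three planned cache counters from raw tallies."""
    return {
        # used cache pages relative to total cache size
        "cache_usage_pct": 100.0 * used_pages / total_pages,
        # fraction of read operations served from cache
        "read_hit_pct": 100.0 * read_hits / reads if reads else 0.0,
        # device I/O vs. I/O that actually reached the underlying disk
        "io_to_disk_ratio": device_ios / disk_ios if disk_ios else float("inf"),
    }

print(cache_counters(used_pages=3000, total_pages=4096,
                     read_hits=750, reads=1000,
                     device_ios=1000, disk_ios=400))
```

An `io_to_disk_ratio` above 1.0 means the cache is coalescing or serving I/O before it hits the disk, which is exactly what you would watch when sizing the cache.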