As Anton said, it is a sandwich of the L1 cache, the L2 cache, and the storage device.
Write-back mode:
On a write operation, data is stored in the cache storage (RAM for L1 or SSD for L2), and the client gets an immediate "OK" answer. If the cache is full, the coldest blocks are flushed to disk to free cache space.
On a read operation, the cache is checked for the data. If the data is not present in the cache, the service reads it from disk, sends it to the client, and updates the cache with this data.
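A minimal Python sketch of the write-back behavior described above (class and variable names are mine for illustration, not from the product; LRU order stands in for "coldness"):

```python
from collections import OrderedDict

class WriteBackCache:
    """Write-back sketch: writes land in the cache and are acknowledged
    immediately; dirty blocks reach the backing store only on eviction."""

    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing          # dict standing in for the disk
        self.cache = OrderedDict()      # addr -> (data, dirty), LRU order

    def write(self, addr, data):
        self.cache[addr] = (data, True)     # mark dirty, ack immediately
        self.cache.move_to_end(addr)        # most recently used = hottest
        self._evict_if_full()

    def read(self, addr):
        if addr in self.cache:              # cache hit
            self.cache.move_to_end(addr)
            return self.cache[addr][0]
        data = self.backing[addr]           # miss: go to disk
        self.cache[addr] = (data, False)    # populate cache, block is clean
        self._evict_if_full()
        return data

    def _evict_if_full(self):
        while len(self.cache) > self.capacity:
            addr, (data, dirty) = self.cache.popitem(last=False)  # coldest
            if dirty:
                self.backing[addr] = data   # flush dirty block to disk
```

Note that the disk only sees the write when the cold block is evicted, which is exactly why write-back is fast but needs care around power loss.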
Write-through mode:
A write operation goes to disk. If the affected addresses are already in the cache, the cached data is updated too.
A read operation is processed as in the first case: if the data resides in the cache, the service returns it to the client. If it is not present in the cache, the service reads it from disk, sends it to the client, and updates the cache with the data.
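The write-through path can be sketched the same way (again an illustrative sketch, not the real implementation; note that a write miss does not populate the cache, matching the No Write Allocate policy mentioned later):

```python
class WriteThroughCache:
    """Write-through sketch: every write goes straight to the backing
    store; the cached copy is refreshed only if that address is already
    cached (i.e. No Write Allocate on write misses)."""

    def __init__(self, backing):
        self.backing = backing      # dict standing in for the disk
        self.cache = {}

    def write(self, addr, data):
        self.backing[addr] = data   # the write always reaches the disk
        if addr in self.cache:      # only refresh an already-cached block
            self.cache[addr] = data

    def read(self, addr):
        if addr in self.cache:      # cache hit
            return self.cache[addr]
        data = self.backing[addr]   # miss: read from disk...
        self.cache[addr] = data     # ...and populate the cache
        return data
```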
When both L1 and L2 caches are configured for the device, the L1 cache works at the top of the sandwich and interacts with the L2 cache as if it were the disk. The L2 cache gets read/write requests from the L1 cache and sends requests to the underlying disk storage.
So in the case of an L2 SSD cache in write-through mode and an L1 RAM-based cache in write-back mode, client requests are processed as follows:
Writes:
Data goes to the RAM of the L1 cache. If the cache is full, the coldest data blocks are moved to the L2 cache, overwriting the coldest blocks of the L2 cache on the SSD.
Reads:
The L1 cache is checked for the data first. On a cache miss in RAM, the L2 cache is checked. On a miss in the SSD cache, the data is read from the underlying disk and returned to the client. Both caches are updated with the data from disk.
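The whole sandwich can be sketched by stacking the two layers, with L1 treating L2 as its "disk" (illustrative only; dirty-block tracking in L1 is omitted for brevity, so every eviction is written down to L2):

```python
from collections import OrderedDict

class Disk:
    """Backing store standing in for the hard disk."""
    def __init__(self):
        self.blocks = {}
    def write(self, addr, data):
        self.blocks[addr] = data
    def read(self, addr):
        return self.blocks[addr]

class L2WriteThrough:
    """SSD layer in write-through, No-Write-Allocate mode."""
    def __init__(self, capacity, disk):
        self.capacity, self.disk = capacity, disk
        self.cache = OrderedDict()
    def write(self, addr, data):
        self.disk.write(addr, data)          # write-through to disk
        if addr in self.cache:               # no-write-allocate on a miss
            self.cache[addr] = data
            self.cache.move_to_end(addr)
    def read(self, addr):
        if addr in self.cache:               # SSD hit
            self.cache.move_to_end(addr)
            return self.cache[addr]
        data = self.disk.read(addr)          # SSD miss: read from disk
        self.cache[addr] = data              # populate the SSD cache
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # blocks are clean: just drop
        return data

class L1WriteBack:
    """RAM layer in write-back mode; talks to L2 as if it were the disk."""
    def __init__(self, capacity, lower):
        self.capacity, self.lower = capacity, lower
        self.cache = OrderedDict()
    def write(self, addr, data):
        self.cache[addr] = data              # ack immediately from RAM
        self.cache.move_to_end(addr)
        self._evict_if_full()
    def read(self, addr):
        if addr in self.cache:               # L1 hit
            self.cache.move_to_end(addr)
            return self.cache[addr]
        data = self.lower.read(addr)         # miss: ask L2 (may hit disk)
        self.cache[addr] = data
        self._evict_if_full()
        return data
    def _evict_if_full(self):
        while len(self.cache) > self.capacity:
            addr, data = self.cache.popitem(last=False)
            self.lower.write(addr, data)     # demote cold block to L2
```

With this wiring, a client write is acknowledged from RAM, and it reaches the SSD/disk only when L1 evicts it, which is the behavior the post describes.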
When you define write-through/write-back on the L2 cache, what are you affecting?
I suspect it's not the L2 cache -> hard disk image path, as that would rather defeat the purpose of using an SSD L2 cache if a write to the L2 cache also had to write through to the hard disk.
This setting affects how the L2 layer interacts with the disk.
Write-through mode doesn't slow down disk writes, as we use a No Write Allocate policy for processing write misses: a write that misses the SSD cache goes straight to disk without populating the cache.
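The difference between the two write-miss policies can be shown in a couple of lines (hypothetical helper names; `cache` and `disk` are plain dicts here). With No Write Allocate, a write miss costs only the disk write; with Write Allocate it also costs an SSD cache install (and possibly an eviction):

```python
def write_no_allocate(cache, disk, addr, data):
    """No Write Allocate: a write miss bypasses the cache entirely."""
    disk[addr] = data
    if addr in cache:        # only refresh a block that is already cached
        cache[addr] = data

def write_allocate(cache, disk, addr, data):
    """Write Allocate: every write also installs the block in the cache."""
    disk[addr] = data
    cache[addr] = data       # extra cache write even on a miss
```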