* Total cache size is the minimum of:
- 1/1k of system RAM.
- 1/64k of backing device.
- 32 * 2 * CPUs.
The last term ensures that we have at least 2*CPUs lbd objects,
which are required for the top-level compress operations.
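The sizing rule above can be sketched as follows. All names are hypothetical, not the actual dm-compress code; the RAM and device terms are treated as caps, and the CPU term as a floor, following the "ensures at least 2*CPUs lbd objects" wording.

```c
#include <stddef.h>

/*
 * Illustrative sketch of the cache sizing rule; names are hypothetical.
 * by_ram and by_dev cap the cache; the CPU term is a floor so that at
 * least 2*CPUs lbd objects are always available.
 */
static size_t cache_size(size_t ram_bytes, size_t dev_bytes, unsigned ncpus)
{
	size_t by_ram = ram_bytes / 1024;	/* 1/1k of system RAM */
	size_t by_dev = dev_bytes / 65536;	/* 1/64k of backing device */
	size_t floor  = 32 * 2 * (size_t)ncpus;	/* >= 2*CPUs lbd objects */
	size_t size   = by_ram < by_dev ? by_ram : by_dev;

	return size < floor ? floor : size;
}
```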
* Implement LRU for all caches.
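The LRU idea can be sketched with an intrusive doubly-linked list in the style of the kernel's list_head; all names here are illustrative. Entries move to the head on access, so the tail is always the eviction victim.

```c
#include <stddef.h>

/* Minimal intrusive LRU list sketch; names are hypothetical. */
struct lru_node {
	struct lru_node *prev, *next;
};

static void lru_init(struct lru_node *head)
{
	head->prev = head->next = head;
}

static void lru_del(struct lru_node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

static void lru_push_front(struct lru_node *head, struct lru_node *n)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

/* On cache hit: move the entry to the most-recently-used position. */
static void lru_touch(struct lru_node *head, struct lru_node *n)
{
	lru_del(n);
	lru_push_front(head, n);
}

/* On allocation pressure: take the least-recently-used entry. */
static struct lru_node *lru_evict(struct lru_node *head)
{
	struct lru_node *victim = head->prev;

	if (victim == head)
		return NULL;	/* list empty */
	lru_del(victim);
	return victim;
}
```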
* Track pblk full and last allocated block for better performance.
The cache_pages parameter could still use better parsing and finer
granularity, but this is workable for now.
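The pblk tracking idea can be sketched as a bitmap allocator, with hypothetical names: remembering the last allocated block lets a scan resume there instead of at zero, and a "full" flag lets a full pool be rejected without scanning at all.

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch of pblk full/last-allocated tracking; names are hypothetical. */
struct pblk_pool {
	bool   *used;		/* one flag per physical block */
	size_t  nr_blocks;
	size_t  last_alloc;	/* scan hint: most recently allocated block */
	bool    full;		/* short-circuit when no block is free */
};

static long pblk_alloc(struct pblk_pool *p)
{
	size_t i, idx;

	if (p->full)
		return -1;

	/* Start just after the last allocation and wrap around once. */
	for (i = 0; i < p->nr_blocks; i++) {
		idx = (p->last_alloc + 1 + i) % p->nr_blocks;
		if (!p->used[idx]) {
			p->used[idx] = true;
			p->last_alloc = idx;
			return (long)idx;
		}
	}
	p->full = true;
	return -1;
}

static void pblk_free(struct pblk_pool *p, size_t idx)
{
	p->used[idx] = false;
	p->full = false;
}
```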
* New args to cbd format.
* HAVE_* -> COMPRESS_HAVE_*.
* Verify header params algorithm and compression.
* Move percpu alloc to lbdcache.
* Alloc percpu data in lbdcache_ctr, free in lbdcache_dtr.
This seems to work except for an I/O ordering race. Reads are
synchronous and writes are asynchronous, so this sequence fails:
- Object x flushes
- Object x is reused as y
- Another object is taken as x
- New object x reads
-> Stale data is read
Two potential solutions from here:
1. Implement async reads.
2. Hold ref to object until write completes.
(1) is more complicated, but more correct: with (2), writes may sit in
buffers for quite some time (typically 5 seconds), during which the
dm-compress object cannot be released.
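Option (2) is the simpler one to illustrate. A userspace sketch, with hypothetical names: the writer takes a reference before submitting the async write and the completion callback drops it, so the object cannot be reused as another block while a write is in flight, closing the stale-read window described above.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of option (2): hold a ref until the write completes. */
struct lbd {
	atomic_int refcnt;	/* outstanding users, including in-flight writes */
};

/* Take a reference before submitting an async write. */
static void lbd_get(struct lbd *lbd)
{
	atomic_fetch_add(&lbd->refcnt, 1);
}

/* Drop the reference from the write-completion path.
 * Returns true when the last reference was released. */
static bool lbd_put(struct lbd *lbd)
{
	return atomic_fetch_sub(&lbd->refcnt, 1) == 1;
}

/* Reuse is only legal once no I/O still holds a reference. */
static bool lbd_can_reuse(struct lbd *lbd)
{
	return atomic_load(&lbd->refcnt) == 0;
}
```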