Commit Graph

33 Commits

Author SHA1 Message Date
Tom Marshall 8b08b0c2c1 Make detect zeros a runtime flag 2019-11-17 10:09:58 -08:00
Tom Marshall 9f9722bb26 Validate lbat element length on read 2019-11-16 20:29:06 -08:00
Tom Marshall b169a0fbcc Add zone lazy init and do more cleanup
* Add lazy init logic to cbd format.  Override with --full-init.
* Make cbd check lazy init aware.
* Fix up a bunch of size calculation errors.
* Add pblk validity check in lbd_flush, lbd_read.
2019-11-16 18:39:03 +01:00
Tom Marshall 8b9b922344 Handle various open failures better 2019-11-14 19:59:40 +01:00
Tom Marshall c6fe92715c Implement mount options cache_pages and sync
Also fixup kernel arg parsing.
2019-11-14 19:12:41 +01:00
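The cache_pages and sync options would typically be consumed in the target's constructor; a minimal sketch of how such option strings might be parsed, assuming hypothetical option spellings and a hypothetical cbd_opts struct (not taken from the module's actual code):

```c
#include <linux/kernel.h>
#include <linux/string.h>

/* Hypothetical per-device options; names are illustrative only. */
struct cbd_opts {
        unsigned int cache_pages;
        bool sync;
};

static int cbd_parse_opt(struct cbd_opts *opts, const char *arg)
{
        if (!strncmp(arg, "cache_pages=", 12))
                return kstrtouint(arg + 12, 10, &opts->cache_pages);
        if (!strcmp(arg, "sync")) {
                opts->sync = true;
                return 0;
        }
        return -EINVAL;
}
```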
Tom Marshall be3d28a255 Handle jiffies wrap 2019-11-14 18:37:28 +01:00
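Jiffies-wrap safety is normally handled with the time_after()/time_before() helpers from <linux/jiffies.h>; a minimal sketch with a hypothetical flush deadline (the parameter name and 5-second interval are illustrative, not from this repository):

```c
#include <linux/jiffies.h>

/* Hypothetical deadline check for the lbd flush timer. */
static bool lbd_flush_due(unsigned long last_flush_jiffies)
{
        /* time_after() compares via signed subtraction, so it keeps
         * working when the jiffies counter wraps around. */
        return time_after(jiffies, last_flush_jiffies + 5 * HZ);
}
```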
Tom Marshall 6e92d5071a Fix variable pblk len, cleanup, rearrange
This is tested with the performance profile and works.

Lots of cleanup and rearranging also.
2019-11-13 15:36:33 -08:00
Tom Marshall 8229ef90bd Make pblk len variable, add format profiles, and cleanup 2019-11-12 23:49:36 +01:00
Tom Marshall e70d283921 Add stats
In /sys/fs/compress/device-name:
	lblk_size,
	pblk_used/total,
	lblk_used/total,
	pbat_r/w,
	lbatpblk_r/w,
	lbd_r/w
2019-11-12 14:45:42 +01:00
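Counters under /sys/fs/compress/device-name are presumably plain sysfs attributes; a minimal sketch of one read-only attribute, with a hypothetical stats struct (the module's real layout is not shown in this log):

```c
#include <linux/kobject.h>
#include <linux/sysfs.h>

/* Hypothetical stats block; the real module keeps more counters. */
static struct {
        unsigned long pblk_used;
        unsigned long pblk_total;
} cbd_stats;

static ssize_t pblk_used_show(struct kobject *kobj,
                              struct kobj_attribute *attr, char *buf)
{
        return scnprintf(buf, PAGE_SIZE, "%lu/%lu\n",
                         cbd_stats.pblk_used, cbd_stats.pblk_total);
}

static struct kobj_attribute pblk_used_attr = __ATTR_RO(pblk_used);
```

Registering the attribute under a kobject created against fs_kobj (e.g. via kobject_create_and_add()) would give it a path under /sys/fs/.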
Tom Marshall ceb0eb3230 Make cbd check actually do a check and do some cleanup
* cbd check now works (ish).
* Added full-check arg to cbd check; link with liblz4, libz.
* Added libcbd/util.c with goodies like verbose, ask_user_bool.
* Rework kernel-side params and stats to separate them from user space.
* Lock kernel stats when updating.
* Add lblk_used to stats.

... and probably some other forgotten things.
2019-11-11 20:48:46 +01:00
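The log does not show the ask_user_bool helper itself; a plausible userspace sketch of such an fsck-style prompt (the signature and default-answer behavior are assumptions):

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical prompt helper for cbd check: asks a yes/no question
 * and returns the default answer on EOF or empty input. */
bool ask_user_bool(const char *question, bool def)
{
        char buf[16];

        printf("%s [%s] ", question, def ? "Y/n" : "y/N");
        fflush(stdout);
        if (!fgets(buf, sizeof(buf), stdin) || buf[0] == '\n')
                return def;
        return buf[0] == 'y' || buf[0] == 'Y';
}
```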
Tom Marshall d43a37531f Set error flag on lbat alloc failure 2019-11-09 20:34:29 +01:00
Tom Marshall 66eeec39f5 Add error flag to cbd_params, set it on write error 2019-11-09 17:02:53 +01:00
Tom Marshall 97eb421b58 Improve lbdcache locking
* Separate cache and flush locks.
  Note lock order: flush -> cache

* Never flush lbd with either cache or flush lock held.

* Rename locks in other caches for consistency.
2019-11-08 15:28:05 -08:00
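A sketch of the lock discipline described above: flush_lock is the outer lock, cache_lock the inner one, and an lbd is only flushed after both are dropped. The struct layout and function signatures are stand-ins, not the module's real code.

```c
#include <linux/mutex.h>

struct lbd;                        /* opaque here; defined by the module */
void lbd_flush(struct lbd *lbd);   /* signature assumed for this sketch */

/* Hypothetical cache illustrating the rules above. */
struct lbdcache {
        struct mutex flush_lock;   /* outer lock */
        struct mutex cache_lock;   /* inner lock */
};

static void lbdcache_evict(struct lbdcache *lc, struct lbd *victim)
{
        mutex_lock(&lc->flush_lock);
        mutex_lock(&lc->cache_lock);
        /* ... unhash the victim from the cache ... */
        mutex_unlock(&lc->cache_lock);
        mutex_unlock(&lc->flush_lock);

        lbd_flush(victim);         /* never called with either lock held */
}
```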
Tom Marshall bd51b7cb89 Flush lbd on a timer in lbdcache instead of the main module 2019-11-08 12:24:21 -08:00
Tom Marshall a2e4f303fd Implement better caching
* Total cache size is the minimum of:
  - 1/1k of system RAM.
  - 1/64k of backing device.
  - 32 * 2 * CPUs.
  The latter ensures that we have at least 2*CPUs lbd objects,
  which is required for the top-level compress operations.

* Implement LRU for all caches.

* Track pblk full and last allocated block for better performance.
Still could use better parsing for cache_pages parameter and more
granularity, but this is workable for now.
2019-11-06 13:23:28 -08:00
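Read literally, the sizing rule above could be computed roughly as follows; the unit interpretation (pages) and the helpers used are assumptions for illustration, not the module's actual code:

```c
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/cpumask.h>
#include <linux/types.h>

/* Hypothetical sizing helper following the commit message: cache size
 * (in pages) is the smallest of 1/1k of RAM, 1/64k of the backing
 * device, and 32 * 2 * CPUs. */
static unsigned long cbd_default_cache_pages(u64 backing_dev_bytes)
{
        unsigned long ram  = totalram_pages() / 1024;   /* recent kernels */
        unsigned long dev  = (unsigned long)(backing_dev_bytes >> PAGE_SHIFT)
                             / (64 * 1024);
        unsigned long cpus = 32 * 2 * num_online_cpus();

        return min3(ram, dev, cpus);
}
```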
Tom Marshall 3e81efb9f6 Defer releasing lbd until all I/O ops are done
This greatly increases performance.
2019-11-05 13:56:47 -08:00
Tom Marshall d7fb50911b Start implementing stats
Just pblks allocated for now.

Also update device layout info in README.
2019-11-04 12:25:54 -08:00
Tom Marshall 94551dffdd Simplify object error logic
A write error always sets the error flag on all written pages, and we always
write the first page, so checking only the first page is sufficient.
2019-11-04 11:36:20 -08:00
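A tiny sketch of that invariant (the helper name and the page-vector parameter are hypothetical):

```c
#include <linux/page-flags.h>

/* A write error marks every page it touched, and page 0 is always
 * written, so inspecting page 0 is enough. */
static bool lbd_write_failed(struct page **pages)
{
        return PageError(pages[0]);
}
```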
Tom Marshall cbf8777042 Reset objects outside cache lock
Also update TODO
2019-11-04 11:36:20 -08:00
Tom Marshall 141888fa98 Make pbat len variable
Also a bit of cleanup in lbd.c
2019-11-04 11:18:28 -08:00
Tom Marshall b8395c8a83 Alloc compress state using vmalloc 2019-11-03 09:00:28 -08:00
Tom Marshall 92fc5158e2 Remove percpu comment 2019-11-03 08:31:13 -08:00
Tom Marshall 309bb2d5ef Tidy up lblk percpu stuff 2019-11-03 08:28:13 -08:00
Tom Marshall 5be74b3346 Make pblk_read/write take block_device, not cbd_params 2019-11-02 08:11:12 -07:00
Tom Marshall 11cc8a229e Use page vector and vmap for lbd 2019-11-02 07:49:09 -07:00
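vmap() from <linux/vmalloc.h> maps a vector of separately allocated pages into one contiguous kernel virtual range, which is presumably what the lbd buffer uses here; a minimal sketch with hypothetical field names (the pages array is assumed to be allocated by the caller):

```c
#include <linux/mm.h>
#include <linux/vmalloc.h>

struct lbd_buf {
        struct page **pages;      /* array of nr_pages entries */
        unsigned int nr_pages;
        void *vaddr;              /* contiguous mapping of the pages */
};

static int lbd_buf_map(struct lbd_buf *buf)
{
        unsigned int i;

        for (i = 0; i < buf->nr_pages; i++) {
                buf->pages[i] = alloc_page(GFP_KERNEL);
                if (!buf->pages[i])
                        goto err;
        }
        buf->vaddr = vmap(buf->pages, buf->nr_pages, VM_MAP, PAGE_KERNEL);
        if (!buf->vaddr)
                goto err;
        return 0;
err:
        while (i--)
                __free_page(buf->pages[i]);
        return -ENOMEM;
}
```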
Tom Marshall ee7eacd4a6 First really working version 2019-11-01 14:41:11 -07:00
Tom Marshall db3d323d27 Finally fixed the pbat writeback issue 2019-10-31 14:43:22 -07:00
Tom Marshall 164a09b9aa Add zlib support
* New args to cbd format.
* HAVE_* -> COMPRESS_HAVE_*.
* Verify header params algorithm and compression.
* Move percpu alloc to lbdcache.
* Alloc percpu data in lbdcache_ctr, free in lbdcache_dtr.
2019-10-31 11:09:03 -07:00
Tom Marshall eeafc209a5 Use per-cpu lz4 state 2019-10-30 12:34:21 -07:00
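Combining the commits above (vmalloc'ed compression state, per-cpu data allocated in lbdcache_ctr and freed in lbdcache_dtr), the per-cpu LZ4 workspace might be set up roughly like this; the variable names and error-unwind convention are assumptions:

```c
#include <linux/lz4.h>
#include <linux/percpu.h>
#include <linux/vmalloc.h>

/* Hypothetical per-cpu LZ4 state: one vmalloc'ed workspace per
 * possible CPU, allocated at cache construction time. */
static void * __percpu *lz4_state;

static int lbdcache_alloc_lz4_state(void)
{
        int cpu;

        lz4_state = alloc_percpu(void *);
        if (!lz4_state)
                return -ENOMEM;
        for_each_possible_cpu(cpu) {
                void *ws = vmalloc(LZ4_MEM_COMPRESS);

                if (!ws)
                        return -ENOMEM; /* caller unwinds via the dtr path */
                *per_cpu_ptr(lz4_state, cpu) = ws;
        }
        return 0;
}
```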
Tom Marshall 0d3d79de10 Require that compression saves at least one pblk 2019-10-30 10:03:50 -07:00
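The acceptance test implied by that rule might look like this; the function and parameter names are hypothetical:

```c
#include <linux/kernel.h>

/* Keep the compressed output only if it needs at least one fewer
 * physical block than storing the logical block uncompressed. */
static bool compression_worthwhile(size_t compressed_len,
                                   size_t lblk_len, size_t pblk_len)
{
        size_t raw_pblks = DIV_ROUND_UP(lblk_len, pblk_len);
        size_t c_pblks   = DIV_ROUND_UP(compressed_len, pblk_len);

        return c_pblks + 1 <= raw_pblks;
}
```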
Tom Marshall f8361d1e2e First fully working version 2019-10-25 10:03:00 -07:00
Tom Marshall 446a4811f6 More improvements, but still failing to clone linux 2019-10-24 13:02:03 -07:00
Tom Marshall 495d191d16 checkpoint: Mostly working
This seems to work except for I/O timing.  Reads are sync and
writes are async, so this sequence fails:
  - Object x flushes
  - Object x is reused as y
  - Another object is taken as x
  - New object x reads
  -> Stale data is read

Two potential solutions from here:

1. Implement async reads.

2. Hold ref to object until write completes.

(1) is complicated, but more correct: with (2), writes may stay in buffers
for quite some time (typically 5 seconds), during which the dm-compress
object cannot be released.
2019-10-21 19:39:27 -07:00
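For illustration, option (2) above (hold a reference to the object until the write completes) could look roughly like this; the lbd layout and helpers are stand-ins, not the code that was eventually written:

```c
#include <linux/bio.h>
#include <linux/refcount.h>

struct lbd {
        refcount_t ref;
        /* ... pages, pblk mapping, etc. ... */
};

static void lbd_put(struct lbd *lbd)
{
        if (refcount_dec_and_test(&lbd->ref)) {
                /* ... return the object to the cache free list ... */
        }
}

static void lbd_write_endio(struct bio *bio)
{
        struct lbd *lbd = bio->bi_private;

        bio_put(bio);
        lbd_put(lbd);           /* ref taken before submit_bio() */
}

/* Take a reference before submitting writeback, drop it only from the
 * completion handler, so the object cannot be reused for a different
 * logical block while the write is still in flight. */
static void lbd_submit_write(struct lbd *lbd, struct bio *bio)
{
        refcount_inc(&lbd->ref);
        bio->bi_private = lbd;
        bio->bi_end_io = lbd_write_endio;
        submit_bio(bio);
}
```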