Compaction needs a CompactionFilter, which may call DB::Get for metadata (as in pika/todis/kvrocks). In distributed compaction the compact_worker has no DB object, so it cannot support such compactions.
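For context, a minimal sketch of the kind of CompactionFilter that causes the problem, roughly in the style of pika/todis/kvrocks; `MetaKeyOf()` and `IsExpiredByMeta()` are hypothetical placeholders, not real project code. The point is that `Filter()` needs a live DB handle, which a remote compact_worker does not have:

```cpp
#include <string>
#include <rocksdb/db.h>
#include <rocksdb/compaction_filter.h>

// Sketch: a CompactionFilter that consults DB::Get for metadata while
// deciding whether to drop a key.
class MetaAwareFilter : public rocksdb::CompactionFilter {
 public:
  explicit MetaAwareFilter(rocksdb::DB* db) : db_(db) {}

  const char* Name() const override { return "MetaAwareFilter"; }

  bool Filter(int /*level*/, const rocksdb::Slice& key,
              const rocksdb::Slice& /*existing_value*/,
              std::string* /*new_value*/,
              bool* /*value_changed*/) const override {
    std::string meta;
    // Needs a live DB handle -- exactly what a remote compact_worker lacks.
    rocksdb::Status s = db_->Get(rocksdb::ReadOptions(), MetaKeyOf(key), &meta);
    if (!s.ok()) return false;      // keep the key if metadata is unavailable
    return IsExpiredByMeta(meta);   // drop the key if its metadata says so
  }

 private:
  static std::string MetaKeyOf(const rocksdb::Slice& key) {
    // Hypothetical: derive the metadata key from the data key.
    return "meta:" + key.ToString();
  }
  static bool IsExpiredByMeta(const std::string& meta) {
    // Hypothetical: first byte encodes a deleted/expired flag.
    return !meta.empty() && meta[0] == 1;
  }
  rocksdb::DB* db_;  // not available in a distributed compact_worker
};
```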
ToplingZipTable Builder uses two-pass scanning: it saves the decompressed kv data into tmp files, and the second pass reads the data back from those tmp files. So we can run the first pass on the DB side (local compaction) and run the second pass in the compaction worker to compress the data -- compressing consumes 80+% of the CPU time for ToplingZipTable.
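A minimal sketch of the proposed split under stated assumptions: the record layout, function names, and `CompressBlock()` are hypothetical placeholders, not the actual ToplingZipTable builder code. The first pass spools raw kv records to a tmp file on the DB side; the second pass replays that file on the compaction worker, where the CPU-heavy compression runs:

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <utility>
#include <vector>

// Pass 1 (DB side, local compaction): scan the input and spool raw,
// length-prefixed kv records to a tmp file; no compression happens here.
void FirstPassSpool(
    const std::vector<std::pair<std::string, std::string>>& kvs,
    const std::string& tmp_path) {
  std::ofstream out(tmp_path, std::ios::binary);
  for (const auto& kv : kvs) {
    uint32_t klen = kv.first.size(), vlen = kv.second.size();
    out.write(reinterpret_cast<const char*>(&klen), sizeof(klen));
    out.write(reinterpret_cast<const char*>(&vlen), sizeof(vlen));
    out.write(kv.first.data(), klen);
    out.write(kv.second.data(), vlen);
  }
}

// Placeholder for the CPU-heavy compression that takes 80+% of the build
// time in the real builder; identity keeps the sketch simple.
std::string CompressBlock(const std::string& raw) { return raw; }

// Pass 2 (compaction worker): replay the tmp file and compress the data.
std::string SecondPassCompress(const std::string& tmp_path) {
  std::ifstream in(tmp_path, std::ios::binary);
  std::string raw, k, v;
  uint32_t klen = 0, vlen = 0;
  while (in.read(reinterpret_cast<char*>(&klen), sizeof(klen)) &&
         in.read(reinterpret_cast<char*>(&vlen), sizeof(vlen))) {
    k.resize(klen);
    v.resize(vlen);
    in.read(&k[0], klen);
    in.read(&v[0], vlen);
    raw.append(k).append(v);  // in reality this feeds the table encoder
  }
  return CompressBlock(raw);
}
```

Only the tmp file has to travel to the worker, so the DB-side first pass (which may still need the CompactionFilter and DB::Get) stays local, while the expensive second pass is offloaded.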
rockeet changed the title from "ToplingZipTable Builder: support remote compression" to "ToplingZipTable Builder: support distributed compression" on May 24, 2023.
rockeet changed the title from "ToplingZipTable Builder: support distributed compression" to "ToplingZipTable Builder: support distributed compressing" on Nov 7, 2023.