Restrict buffered value size for blob zstd dictionary compression #197
Comments
/cc @yiwu-arbug
Yes, we saw a similar issue when enabling dictionary compression on vanilla RocksDB. We ended up limiting the sample size to 8MB per SST. cc @hunterlxt
We can use `zstd_max_train_bytes` to limit the max buffered size. @Connor1996
Note that this `zstd_max_train_bytes` belongs to Titan's config; RocksDB has a separate option with the same name.
When blob zstd dictionary compression is enabled, all values are buffered and replayed after the compression dictionary is finalized. With multiple concurrent flushes and compactions, the memory footprint can be considerable, given that the blob file size is 256MB by default.
On an instance with little RAM, this can easily cause an OOM. It would be better to add a config to cap the total buffered size across concurrent jobs.