feat(storage): Support for repairing the size of a split sst based on table stats #18053
Conversation
self.table_ids.insert(table_id);
self.finalize_last_table_stats();
self.last_table_id = Some(table_id);
`self.last_table_id` is used in L199-L232. Prior to this PR, we updated `self.last_table_id` before L199. Will this be a problem? We need to carefully check whether all the variables we update here are unused in L199-L232 as well.
I confirmed that in L199-L232 `last_table_id` is only used in the log, with no logic involved; I added `table_id` to the log. The change is just to make sure that the `table_id` toggle happens after `build_block`.
There is a `return` in L215. That means the table id will not be changed before returning. Will this be an issue?
    new_sst_size,
);

// FIXME(li0k): We would change table_ids inside the `split_sst` function
+1
Refactored it, PTAL.
@@ -605,16 +605,6 @@ impl HummockManager {
    drop(compaction_guard);
    self.report_compact_tasks(canceled_tasks).await?;

    // Don't trigger compactions if we enable deterministic compaction
After 2ccdb93, `SstableInfo`'s `table_ids` have already been modified in `split_sst`, so there is no need to trigger SpaceReclaim.
What if all the table ids of an SST are split out? I think in our current implementation, we will end up with two SSTs: one with empty table ids and the other with table ids identical to those before the split.
SstableInfo with empty table_ids will be cleaned up immediately in the apply_version_delta phase.
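For illustration, the cleanup described above can be sketched as a simple filter. This is a hypothetical, simplified stand-in — `SstableInfo` here is a toy struct, not the real RisingWave type, and the actual cleanup happens inside version-delta application:

```rust
// Hypothetical simplified stand-in for SstableInfo; the real type in the
// RisingWave codebase has many more fields.
#[derive(Debug)]
struct SstableInfo {
    sst_id: u64,
    table_ids: Vec<u32>,
}

// Drop SSTs that no longer cover any table, mirroring the idea that an
// SstableInfo with empty `table_ids` is cleaned up when the version delta
// is applied.
fn retain_non_empty(ssts: Vec<SstableInfo>) -> Vec<SstableInfo> {
    ssts.into_iter()
        .filter(|sst| !sst.table_ids.is_empty())
        .collect()
}

fn main() {
    let ssts = vec![
        SstableInfo { sst_id: 1, table_ids: vec![] },
        SstableInfo { sst_id: 2, table_ids: vec![10, 11] },
    ];
    let kept = retain_non_empty(ssts);
    assert_eq!(kept.len(), 1);
    assert_eq!(kept[0].sst_id, 2);
    println!("kept {} sst(s)", kept.len());
}
```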
{
    // fill total_compressed_size
    let mut last_total_compressed_size = 0;
    let mut last_table_id = None;
    for block_meta in &meta.block_metas {
        let block_table_id = block_meta.table_id();
        if last_table_id.is_none() || last_table_id.unwrap() != block_table_id.table_id() {
            if last_table_id.is_some() {
                self.table_stats
                    .get_mut(&last_table_id.unwrap())
                    .unwrap()
                    .total_compressed_size = last_total_compressed_size;
            }

            last_table_id = Some(block_table_id.table_id());
            last_total_compressed_size = 0;
        }
        last_total_compressed_size += block_meta.len as u64;
    }

    if last_total_compressed_size != 0 {
        self.table_stats
            .get_mut(&last_table_id.unwrap())
            .unwrap()
            .total_compressed_size = last_total_compressed_size;
    }
}
nit: there are so many `if`s in this simple logic; how about making it more concise:
if !meta.block_metas.is_empty() {
// fill total_compressed_size
let mut last_table_id = meta.block_metas[0].table_id();
let mut last_table_stats = self.table_stats.get_mut(&last_table_id).unwrap();
for block_meta in &meta.block_metas {
let block_table_id = block_meta.table_id();
if last_table_id != block_table_id {
last_table_id = block_table_id;
last_table_stats = self.table_stats.get_mut(&last_table_id).unwrap();
}
last_table_stats.total_compressed_size += block_meta.len as u64;
}
}
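The pattern in the suggestion above — caching the stats entry for the last-seen table id because blocks of the same table are laid out contiguously — can be sketched as a standalone, runnable version. `BlockMeta` and the stats map here are hypothetical simplifications of the PR's types:

```rust
use std::collections::HashMap;

// Hypothetical simplified block metadata; the real `BlockMeta` has more
// fields and exposes `table_id()` as a method.
struct BlockMeta {
    table_id: u32,
    len: usize,
}

// Sum compressed block sizes per table. Because blocks of the same table
// are contiguous, we only touch the map when the table id changes.
fn fill_total_compressed_size(block_metas: &[BlockMeta]) -> HashMap<u32, u64> {
    let mut stats: HashMap<u32, u64> = HashMap::new();
    if let Some(first) = block_metas.first() {
        let mut last_table_id = first.table_id;
        let mut acc = 0u64;
        for block_meta in block_metas {
            if block_meta.table_id != last_table_id {
                *stats.entry(last_table_id).or_default() += acc;
                last_table_id = block_meta.table_id;
                acc = 0;
            }
            acc += block_meta.len as u64;
        }
        // Flush the trailing run.
        *stats.entry(last_table_id).or_default() += acc;
    }
    stats
}

fn main() {
    let blocks = vec![
        BlockMeta { table_id: 1, len: 10 },
        BlockMeta { table_id: 1, len: 5 },
        BlockMeta { table_id: 2, len: 7 },
    ];
    let stats = fill_total_compressed_size(&blocks);
    assert_eq!(stats[&1], 15);
    assert_eq!(stats[&2], 7);
}
```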
Thanks, done
I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.
What's changed and what's your intention?
Hummock corrects the SSTs committed at `commit_epoch` based on the current compaction group mapping, and places the newly generated SSTs into the correct compaction groups via `split sst`. A wrong committed SST size may cause compaction jitter in some compaction groups, so this PR re-estimates the size of the newly generated SSTs based on `table_stats`.

Notes: after this PR, the group-by logic in the shared buffer compact can be removed, and the CN only needs to submit one grouped SST to the meta at each checkpoint.
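As a rough illustration of the size re-estimation idea (this is a sketch under assumed names, not the PR's actual implementation): the size of a split-off SST can be estimated as the sum of the per-table compressed sizes of the tables that move into it. `estimate_split_sst_size` and the stats map are hypothetical:

```rust
use std::collections::HashMap;

// Estimate the size of a split-off SST from per-table compressed sizes,
// standing in for the table stats mentioned in the PR description.
// Tables with no recorded stats contribute nothing to the estimate.
fn estimate_split_sst_size(
    split_table_ids: &[u32],
    table_compressed_size: &HashMap<u32, u64>,
) -> u64 {
    split_table_ids
        .iter()
        .map(|id| table_compressed_size.get(id).copied().unwrap_or(0))
        .sum()
}

fn main() {
    let mut stats = HashMap::new();
    stats.insert(1u32, 100u64);
    stats.insert(2, 50);
    // Tables 2 and 3 move to the new compaction group; table 3 has no stats.
    assert_eq!(estimate_split_sst_size(&[2, 3], &stats), 50);
    println!("estimate ok");
}
```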
Checklist

- `./risedev check` (or alias, `./risedev c`)
- Documentation
- Release note
If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.