feat(compaction): support merge compaction group #18188
Conversation
@wenym1
Most comments are minor. Thanks for the efforts!
src/meta/src/hummock/manager/compaction/compaction_group_schedule.rs (two outdated review threads, resolved)
```rust
new_version_delta.pre_apply();

// purge right_group_id
compaction_groups_txn.purge(HashSet::from_iter(get_compaction_group_ids(
```
Can we directly call `compaction_groups_txn.remove(...)`?
Good catch. We've already triggered GroupDestroy in the merge delta process, so we don't need to purge again here.
Can you explain more on this? I think here we are manipulating the CompactionGroup metadata in CompactionGroupManager, not in HummockVersion. Why is this related to the GroupDestroy delta?
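To make the distinction the thread is debating concrete, here is a minimal sketch of the semantic difference between the two transaction operations. `GroupsTxn` and its methods are simplified, hypothetical stand-ins, not the real CompactionGroupManager API: `remove` drops specific ids, while `purge` keeps only a set of valid ids and drops everything else.

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical simplified transaction over compaction-group metadata.
struct GroupsTxn {
    groups: HashMap<u64, &'static str>,
}

impl GroupsTxn {
    // Drop exactly the given group id.
    fn remove(&mut self, group_id: u64) {
        self.groups.remove(&group_id);
    }

    // Keep only groups whose ids are in the valid set; drop the rest.
    fn purge(&mut self, valid_group_ids: &HashSet<u64>) {
        self.groups.retain(|id, _| valid_group_ids.contains(id));
    }
}

fn main() {
    let mut txn = GroupsTxn {
        groups: HashMap::from([(2, "left"), (3, "right"), (4, "other")]),
    };
    // `remove` targets one known id (e.g. the merged-away right group).
    txn.remove(3);
    assert!(!txn.groups.contains_key(&3));
    // `purge` instead drops everything outside the valid set.
    txn.purge(&HashSet::from([2]));
    assert_eq!(txn.groups.len(), 1);
    assert!(txn.groups.contains_key(&2));
}
```

If the merge path already knows the single right_group_id being destroyed, a targeted `remove` is the narrower operation; `purge` is the heavier retain-only-valid sweep.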
src/storage/hummock_sdk/src/compaction_group/hummock_version_ext.rs (two outdated review threads, resolved)
LGTM. Given that the sub_level_id assignment is changed to not rely on epoch in this PR, I suggest we run another round of testing before merging.
LGTM
Triggering multiple splits and merges in a backfill test.
Rest LGTM
```rust
    .get_sst_ids_by_group_id(right_group_id),
)
{
    if !sst_id_set.insert(sst_id) {
```
In which case will we have duplicate SST ids? Is it impossible and we just have the check here as a safeguard?
Before #18431, commit_epoch could generate duplicated SST ids. This check is kept just for compatibility.
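The safeguard under discussion relies on `HashSet::insert` returning `false` when the value is already present. A minimal sketch of that pattern (the function name and id lists are illustrative, not from the PR):

```rust
use std::collections::HashSet;

// Collect SST ids from both groups being merged and report any id that
// appears more than once, using the boolean returned by `insert`.
fn find_duplicate_sst_ids(left: &[u64], right: &[u64]) -> Vec<u64> {
    let mut seen: HashSet<u64> = HashSet::new();
    let mut duplicates = Vec::new();
    for &sst_id in left.iter().chain(right.iter()) {
        // `insert` returns false if the set already contained the value.
        if !seen.insert(sst_id) {
            duplicates.push(sst_id);
        }
    }
    duplicates
}

fn main() {
    // Distinct ids across both groups: nothing flagged.
    assert!(find_duplicate_sst_ids(&[1, 2, 3], &[4, 5]).is_empty());
    // Id 3 appears in both groups, so it is reported.
    assert_eq!(find_duplicate_sst_ids(&[1, 2, 3], &[3, 4]), vec![3]);
}
```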
```rust
}

// TODO(li0k): remove this check (since the current split_sst does not change key_range, this check cannot be removed, otherwise concat will fail.)
// check branched sst on non-overlapping levels
```
nit: is this comment stale?
Indeed, we can always keep this check.
```rust
for level_idx in 0..max_level {
    let left_level = left_levels.levels.get(level_idx).unwrap();
    let right_level = right_levels.levels.get(level_idx).unwrap();
```
It is confusing that `levels.get(level_idx)` is getting level `level_idx + 1`. At first glance, I thought we were checking level 0 as well here. How about using `get_level` here?
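The off-by-one the reviewer points out comes from L0 being stored separately, so index 0 of the vector holds L1. A hypothetical simplified sketch of the suggested `get_level`-style accessor (the structs here are illustrative stand-ins, not the real hummock_sdk types):

```rust
// Simplified stand-ins for the SDK's level types.
struct Level {
    level_idx: u32,
}

struct Levels {
    // levels[0] holds L1, levels[1] holds L2, ... (L0 lives elsewhere).
    levels: Vec<Level>,
}

impl Levels {
    // Take the 1-based level number, making the offset explicit at the
    // call site instead of hiding it behind a raw vector index.
    fn get_level(&self, level_number: usize) -> &Level {
        assert!(level_number >= 1, "L0 is stored separately");
        &self.levels[level_number - 1]
    }
}

fn main() {
    let levels = Levels {
        levels: (1u32..=3).map(|i| Level { level_idx: i }).collect(),
    };
    // `levels.levels.get(0)` would silently mean L1; `get_level(1)` says so.
    assert_eq!(levels.get_level(1).level_idx, 1);
    assert_eq!(levels.get_level(3).level_idx, 3);
}
```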
I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.
What's changed and what's your intention?
Part of #17898. This PR introduces the ability to merge existing compaction groups by adding a new GroupMerge delta. Other changes introduced in this PR:
- Renamed the GroupSplit timer event to GroupSchedule
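Conceptually, a GroupMerge delta folds the right group's SSTs into the left group level by level, then drops the right group. The sketch below is a heavily simplified illustration under that assumption; real Hummock deltas carry far more state (key ranges, sub-levels, member tables), and `apply_group_merge` is not the actual API:

```rust
use std::collections::BTreeMap;

type GroupId = u64;
type SstId = u64;

// Hypothetical model: each group is just a list of levels, each level a
// list of SST ids. Merging appends the right group's SSTs per level into
// the left group, then removes the right group entirely.
fn apply_group_merge(
    groups: &mut BTreeMap<GroupId, Vec<Vec<SstId>>>,
    left: GroupId,
    right: GroupId,
) {
    if let Some(right_levels) = groups.remove(&right) {
        let left_levels = groups.entry(left).or_default();
        if left_levels.len() < right_levels.len() {
            left_levels.resize(right_levels.len(), Vec::new());
        }
        for (idx, mut ssts) in right_levels.into_iter().enumerate() {
            left_levels[idx].append(&mut ssts);
        }
    }
}

fn main() {
    let mut groups = BTreeMap::from([
        (2, vec![vec![10, 11], vec![12]]),
        (3, vec![vec![20], vec![21, 22]]),
    ]);
    apply_group_merge(&mut groups, 2, 3);
    // The right group is gone and its SSTs now live in the left group.
    assert!(!groups.contains_key(&3));
    assert_eq!(groups[&2], vec![vec![10, 11, 20], vec![12, 21, 22]]);
}
```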
Checklist
- ./risedev check (or alias, ./risedev c)

Documentation
Release note
If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.