What type of enhancement is this?

Performance

What does the enhancement do?

Now we cache pages for the whole row group if there is a cache miss.

greptimedb/src/mito2/src/sst/parquet/row_group.rs
Lines 241 to 250 in 102e43a

// We collect all pages and put them into the cache.
let pages = page_reader.collect::<Result<Vec<_>>>()?;
let page_value = Arc::new(PageValue::new(pages));
let page_key = PageKey {
    region_id: self.region_id,
    file_id: self.file_id,
    row_group_idx: self.row_group_idx,
    column_idx: i,
};
cache.put_pages(page_key, page_value.clone());

This might impact performance if we choose a larger row group size or if the row group contains multiple pages. We might instead cache pages under an individual key for each column in a row group.
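Purely as a sketch of that direction (the PageCache, PageKey, and PageValue definitions below are simplified stand-ins for illustration, not mito2's actual types), a cache miss could decode and insert only the pages of the column being read, rather than filling the cache for every column of the row group at once:

use std::collections::HashMap;
use std::sync::Arc;

// Hypothetical, simplified stand-ins for the cache types; the real PageKey
// and PageValue in mito2 differ in detail.
#[derive(Clone, PartialEq, Eq, Hash)]
struct PageKey {
    region_id: u64,
    file_id: u64,
    row_group_idx: usize,
    column_idx: usize,
}

struct PageValue {
    // Decoded pages of one column chunk; raw bytes stand in for parquet pages.
    pages: Vec<Vec<u8>>,
}

struct PageCache {
    inner: HashMap<PageKey, Arc<PageValue>>,
}

impl PageCache {
    // On a miss, decode and insert only the requested column's pages instead
    // of populating entries for every column of the row group.
    fn get_or_insert_column(
        &mut self,
        key: PageKey,
        decode_column: impl FnOnce() -> Vec<Vec<u8>>,
    ) -> Arc<PageValue> {
        self.inner
            .entry(key)
            .or_insert_with(|| Arc::new(PageValue { pages: decode_column() }))
            .clone()
    }
}

With a per-column entry like this, reading a narrow projection of a wide table would only pay the decoding cost for the projected columns.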
Implementation challenges
The reader has to load the compressed pages into memory before it can decompress them into the pages we cache.
greptimedb/src/mito2/src/sst/parquet/row_group.rs
Lines 271 to 282 in 102e43a
We might need to maintain two kinds of keys in the page cache.
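The issue does not spell out which two, but one way to picture it, as a purely illustrative sketch with assumed variant names, is one key for the still-compressed column chunk and one key for a single decompressed page:

// Illustrative only: the variant and field names below are assumptions,
// not mito2's actual PageKey definition.
#[derive(Clone, PartialEq, Eq, Hash)]
enum PageKey {
    // Addresses the raw, still-compressed column chunk of one row group;
    // its cached value would hold the undecoded bytes.
    Compressed {
        region_id: u64,
        file_id: u64,
        row_group_idx: usize,
        column_idx: usize,
    },
    // Addresses a single decompressed page within that column chunk.
    Uncompressed {
        region_id: u64,
        file_id: u64,
        row_group_idx: usize,
        column_idx: usize,
        page_idx: usize,
    },
}

Roughly, hitting the compressed entry would avoid re-reading the column chunk, while per-page entries would avoid repeatedly decompressing pages that are read again.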