disk_backend: add new layered disk #404
Conversation
Add a new disk type `LayeredDisk` that is composed of multiple layers, each of which can contribute an arbitrary subset of the sectors of the composed disk. This is useful for modeling differencing VHDs, the RAM diff disk, and the upcoming sqlite-backed disk. It can be used to provide disk snapshots as well as read and write caches. In this PR, only the RAM disk and full disk layers are defined.
buffers: &'a RequestBuffers<'_>,
sector: u64,
bitmap: SectorMarker<'a>,
) -> Pin<Box<dyn 'a + Future<Output = Result<(), DiskError>> + Send>> {
shouldn't these be using StackFuture?
Ehh. We need to reevaluate StackFuture anyway, since a lot has changed since we started using it in OpenHCL.
bits_set += range.set_count();

// TODO: populate read cache(s). Note that we need to detect
I have no qualms with this TODO, since I don't actually need read cache functionality for the test infra stuff I want to hack on. Migrating certain test infra over to blob_disk would be great, but it continues to be non-critical.
///
/// FUTURE: allocate shared memory here so that the disk can be migrated between
/// processes.
#[derive(MeshPayload)]
pub struct RamDiskHandle {
We could consider keeping this around as a special case.
I'm ok with a bit of aliasing in cases of common disk configurations that just so happen to use LayeredDisk under the hood. I.e., I would probably have SqlDisk also have its own handle.
I dunno. That seems like a lot of redundancy if you're going to do it for SqlDisk, too.
In that case, I'd avoid special-casing anything.
lower: disk_open(inner, true)?,
Resource::new(disk_backend_resources::LayeredDiskHandle {
    layers: vec![
        RamDiskLayerHandle { len: None }.into_resource().into(),
To my eyes, I would've expected to see some kind of indication that the RamDiskLayer was implementing diff-disk semantics here (e.g., some sort of bool to configure the semantics). Instead, it kinda just looks like you have a regular RAM disk on top of an underlying disk (i.e., a useless configuration).
A RAM disk layer starts out with all sectors "not present", not zero. When a read falls off the end of a layered disk (because the sector was not present in any of the layers), we zero out the sector.
I see. I just need to get used to the different semantics that Layers have, vs. typical Disks
fn optimal_unmap_sectors(&self) -> u32 {
    1
}

fn attach(&mut self, lower_sector_count: Option<u64>) {
nit: this is a lifecycle method, so I'd probably go with on_attach instead
use disk_backend::layered::LayeredDisk;
use disk_backend::layered::SectorMarker;
use disk_backend::layered::UnmapBehavior;
use disk_backend::layered::WriteNoOverwrite;
use disk_backend::zerodisk::InvalidGeometry;
This might be a good PR to fix this weird zerodisk dependency in ramdisk. Not major, but a weird dep to see.
I removed zerodisk since we don't use it for anything anymore.
I mean, it serves as a simple "null disk" example implementation, but sure, we can just add it back in the future if we ever need it...
A few lingering comments about organization aside, this LGTM. Looking forward to rebasing #380 on top of this!