Add disposition to blueprint disks #7169

Merged: 5 commits into main, Nov 26, 2024

Conversation

andrewjstone (Contributor)

This is the second PR related to fixing #7098.

BlueprintPhysicalDisksConfig is no longer an alias of OmicronPhysicalDisksConfig; instead it wraps a new type, BlueprintPhysicalDiskConfig, which itself wraps an OmicronPhysicalDiskConfig together with a BlueprintPhysicalDiskDisposition. This brings blueprint physical disks in line with blueprint datasets and zones: expungement becomes a first-class notion in the blueprint rather than being implied by a disk's absence from it.

This change only adds the new types and makes them work with the existing code. The underlying logic around expungement and decommissioning has not changed; that will come in a follow-up PR.

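For orientation, here is a rough sketch of the new shape described above. It is illustrative only, not the PR's exact code: the disposition variants are assumptions based on the zone and dataset analogues, and the imports assume omicron_common's layout.

use omicron_common::api::external::Generation;
use omicron_common::disk::OmicronPhysicalDiskConfig;

// Assumed variants, mirroring the zone/dataset dispositions.
pub enum BlueprintPhysicalDiskDisposition {
    InService,
    Expunged,
}

// Previously: pub type BlueprintPhysicalDisksConfig = OmicronPhysicalDisksConfig;
// Now each disk carries an explicit disposition alongside its config.
pub struct BlueprintPhysicalDiskConfig {
    pub disposition: BlueprintPhysicalDiskDisposition,
    pub config: OmicronPhysicalDiskConfig,
}

pub struct BlueprintPhysicalDisksConfig {
    pub generation: Generation,
    pub disks: Vec<BlueprintPhysicalDiskConfig>,
}
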
@@ -0,0 +1,7 @@
ALTER TABLE omicron.public.bp_omicron_physical_disk
ADD COLUMN IF NOT EXISTS disposition omicron.public.bp_physical_disk_disposition
Contributor:

I think the schema migration test failure is legit: IIRC this puts the column at the end of the table; you'll have to do the same in dbinit.sql to get it passing. (Unless there's a way to put a column in the middle? I don't think there is but would love to be wrong.)

Contributor Author:

Yep. Fixed in the latest push.

})
.collect();
- sled_info.request.disks = OmicronPhysicalDisksConfig {
+ sled_info.request.disks = BlueprintPhysicalDisksConfig {
      generation: Generation::new(),
Contributor:

I realize this didn't change in this PR, but in the context of #7162 (comment), should this be Generation::new().next()?

Contributor Author:

good catch!
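
For reference, a minimal stub showing the distinction at issue, assuming a Generation type that behaves like omicron's (new() starts at 1, next() increments); the real type lives in omicron_common, not here:

// Stub Generation type for illustration only.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Generation(u64);

impl Generation {
    fn new() -> Self {
        Generation(1)
    }
    fn next(&self) -> Self {
        Generation(self.0 + 1)
    }
}

fn main() {
    assert_eq!(Generation::new(), Generation(1));
    // The suggested change: start one past the initial generation.
    assert_eq!(Generation::new().next(), Generation(2));
}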

#[derive(Clone, Debug, Deserialize, Serialize, JsonSchema, PartialEq, Eq)]
pub struct BlueprintPhysicalDiskConfig {
pub disposition: BlueprintPhysicalDiskDisposition,
pub config: OmicronPhysicalDiskConfig,
Contributor:

With apologies for a potentially-really-annoying suggestion: since we have to impl From<BlueprintPhysicalDiskConfig> for OmicronPhysicalDiskConfig anyway, should we inline all the fields of OmicronPhysicalDiskConfig here instead of embedding the whole struct as-is? IIRC we did that for zones, and it would avoid a lot of the config.config.fieldname stuttering that's now required in a bunch of places.
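
A sketch of the suggested flattening, under the assumption that OmicronPhysicalDiskConfig's fields are identity, id, and pool_id (illustrative; not the code from the follow-up commit):

use omicron_common::disk::{DiskIdentity, OmicronPhysicalDiskConfig};
use omicron_uuid_kinds::{PhysicalDiskUuid, ZpoolUuid};

// Inline the inner config's fields so call sites read disk.pool_id
// rather than disk.config.pool_id.
pub struct BlueprintPhysicalDiskConfig {
    pub disposition: BlueprintPhysicalDiskDisposition,
    pub identity: DiskIdentity,
    pub id: PhysicalDiskUuid,
    pub pool_id: ZpoolUuid,
}

// Recover the sled-agent-facing config from the blueprint form.
impl From<BlueprintPhysicalDiskConfig> for OmicronPhysicalDiskConfig {
    fn from(value: BlueprintPhysicalDiskConfig) -> Self {
        OmicronPhysicalDiskConfig {
            identity: value.identity,
            id: value.id,
            pool_id: value.pool_id,
        }
    }
}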

Contributor Author:

Yes, this is annoying both in the code as it exists and thinking about changing it lol. I usually like nesting structures, but this did become a bit ridiculous at the call sites. I'll change it as you suggest.

Contributor Author:

Done in 3f52153

@@ -18,6 +18,68 @@
}
},
"definitions": {
"BlueprintPhysicalDiskConfig": {
Contributor:

Do we need to bump to RSS service plan v6? This change isn't backwards compatible 😬

Contributor Author:

Damnit, I did consider this, but not closely enough. I was thinking this was one of the DeployStepVersions and that changing the last one wouldn't matter. Practically speaking, I'm not sure this does matter, since the plan only gets reloaded on RSS failing, which only happens on new runs of RSS. But for good measure, I should probably make a v6.

Contributor Author:

I'm actually not sure it makes sense to add another version. The plan is only read during RSS time and we never clean up the chain of reads. This does mean that we'll have 2 formats of v5 in the wild, but the old ones are plans that never get loaded again.

Contributor Author:

@jgallagher and I discussed this, and then we discussed in update huddle. It turns out that the plans aren't really useful at all given that we can't actually restart RSS without a clean-slate. Therefore the current plan (no pun intended) is to get rid of them altogether. I'm going to open an issue about this shortly and then open a PR resolving it.

For now though, I'm not going to go back and add a v6.

@andrewjstone enabled auto-merge (squash) November 26, 2024 22:50
@andrewjstone merged commit fd50d18 into main Nov 26, 2024
17 checks passed
@andrewjstone deleted the disposition-in-blueprint-disks branch November 26, 2024 22:53