IP pools data model and API rework #4261
schema/crdb/7.0.0/up5.sql

```sql
SELECT id, 'silo', silo_id, is_default
FROM ip_pool
WHERE silo_id IS NOT NULL
  AND time_deleted IS NULL;
```
@smklein `integration_tests::schema::versions_have_idempotent_up` is mad at me for up4 and up5, because the `is_default` and `silo_id` columns get deleted from `ip_pool` in up7, so these inserts fail when run a second time. I can't figure out how to make this insert conditional on those columns existing. Moving up6 and up7 into their own separate 8.0.0 migration makes the test pass. Is that good, though? It almost feels like too easy a workaround.
Tested the updated association post body with the TS generator, and it is working exactly as desired:

```ts
/**
 * Parameters for associating an IP pool with a resource (fleet, silo)
 */
export type IpPoolAssociationCreate =
  | { isDefault: boolean; resourceType: 'silo'; silo: NameOrId }
  | { isDefault: boolean; resourceType: 'fleet' }
```
schema/crdb/21.0.0/up5.sql

```sql
WHERE ipr.resource_type = 'silo' -- technically unnecessary because there is only silo
  AND ipr.is_default = true
  AND ip.is_default = true -- both being default is the conflict being resolved
  AND ip.silo_id IS NOT NULL
```
Should this be `AND ip.silo_id IS NULL`, because that's what indicates a fleet IP pool?
Nope, we're looking at silo pools in `ip_pool`, lining them up against entries in `ip_pool_resource` that are type=silo but came from a fleet pool in up4, preferring the former as the default. I admit this is pure evil!
As raised in Matrix, this backports a fix from #4261 where a new floating IP can be allocated using the name or ID of an IP pool which is bound to another silo.

Co-authored-by: David Crespo <[email protected]>
```rust
.map(|ip_pool| {
    let authz_pool =
        authz::IpPool::new(authz::FLEET, ip_pool.id(), lookup_type);
    (authz_pool, ip_pool)
```
This is interesting; I haven't noticed us doing something like this explicitly in the data store before. I'm not sure why the authz/resource tuple was necessary.
Ahh, I found it below. This is a non-standard lookup, so the generated id/name lookups that hang off of nexus wouldn't be available; hence its existence.
Right. It's an edge case that doesn't really fit the lookup-by-name, ID, or composite-ID model. Arguably it's a composite ID, which is why I'm using it as a sort of catchall for this.
```rust
let (authz_pool1_for_silo, ip_pool) = datastore
    .ip_pools_fetch_default(&opctx)
    .await
    .expect("Failed to get silo's default IP pool");
```
I see: because this doesn't have a normal lookup implemented, this fetch implementation acts as one. Seems fine.
```rust
path = "/v1/system/ip-pools/{pool}/silos/{silo}",
tags = ["system/networking"],
}]
async fn ip_pool_silo_update(
```
A minor nit here, but the expressed intention of this endpoint and its operation name feel disjointed. The operation name implies that you may be able to update other properties of the pool, whereas you're only granted the option to update its default state. It isn't a huge deal, but I think we should eventually either expand this to a full update or change the op name to reflect that it only affects the default.
Yeah, it could be `ip_pool_silo_set_default`. There are no other properties of the link that can be updated. Another option would be `ip_pool_silo_link_update`, to make clearer that it's about updating the link and not the pool or the silo.
Overall I'm feeling pretty good about the state of this PR. There are undoubtedly areas I've missed in review, but it feels holistically better than where we were before it.

Given we have time before the next release, I feel it would be helpful to land this soon and iterate on it as we feel it out through usage.
9a3ffb8 to ea4cbfa
Bugfix for #4261. Using `distinct` to eliminate dupes from the left outer join works, but it's better to just use the well-known name of the service IP pool to exclude it from whatever operations it needs to be excluded from. Potential followup related to #4762: if the ID were well-known too, we could do the `is_internal` check without having to hit the DB.
Closes #2148
Closes #4002
Closes #4003
Closes #4006
Background
#3985 (and followups #3998 and #4007) made it possible to associate an IP pool with a silo so that instances created in that silo would get their ephemeral IPs from said pool by default (i.e., without the user having to say anything other than "I want an ephemeral IP"). An IP pool associated with a silo was not accessible for ephemeral IP allocation from other silos — if a disallowed pool was specified by name at instance create time, the request would 404.
However! That was the quick version, and the data model left much to be desired. The relation was modeled by adding a nullable `silo_id` column and a sort-of-not-really-nullable `is_default` column directly on the IP pool table, which has the following limitations (and there are probably more):

- `default` was treated as a property of the pool itself, rather than a property of the association with another resource, which is quite strange. Even if you could associate the pool with multiple silos, you could not have it be the default for one and not for the other.
- Associating pools with projects would have required yet another nullable `project_id` column (which I did in [nexus] Silo IP pools schema change #3981 before removing it in [nexus] Silo IP pools #3985).
- More broadly (and vaguely), the idea of an IP pool "knowing" about silos or projects doesn't really make sense. Entities aren't really supposed to know about each other unless they have a parent-child relationship.
Changes in this PR
No such thing as fleet-scoped pool, only silo
Thanks to @zephraph for encouraging me to make this change. It is dramatically easier to explain "link silo to IP pool" than it is to explain "link resource (fleet or silo) to IP pool". The way to recreate the behavior of a single default pool for the fleet is simply to associate a pool with all silos. Data migrations ensure that existing fleet-scoped pools will be associated with all silos. There can only be one default pool for a silo, so in the rare case where pool A is the fleet default and pool B is the default on silo S, we associate both A and B with S, but only B is made the silo default.
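The migration rule described above can be sketched roughly like this. This is an illustrative Python model with made-up names, not the actual CockroachDB migration: silo pools keep their own link, fleet pools are linked to every silo, and a fleet default only becomes a silo default where the silo has none of its own.

```python
def migrate_links(pools, silos):
    """Sketch of the fleet-to-silo link migration.

    pools: list of dicts with 'id', 'silo_id' (None = fleet-scoped),
    and 'is_default'. Returns (pool_id, silo_id, is_default) links.
    """
    links = []
    for silo in silos:
        # A silo-scoped default, if present, takes precedence.
        has_silo_default = any(
            p["silo_id"] == silo and p["is_default"] for p in pools
        )
        for p in pools:
            if p["silo_id"] == silo:
                # Silo pools keep their link and default flag as-is.
                links.append((p["id"], silo, p["is_default"]))
            elif p["silo_id"] is None:
                # Fleet pools are linked to every silo; a fleet default
                # becomes the silo default only if the silo has none.
                links.append(
                    (p["id"], silo, p["is_default"] and not has_silo_default)
                )
    return links
```

For example, with fleet default A and silo-S default B, both get linked to S but only B stays default there, while A remains the default for silos with no default of their own.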
API
These endpoints are added. They're pretty self-explanatory.
The `silo_id` and `is_default` fields are removed from the `IpPool` response, as they are now a property of the `IpPoolLink`, not the pool itself.

I also fixed the silo-scoped IP pools list (`/v1/ip-pools`) and fetch (`/v1/ip-pools/{pool}`) endpoints, which a) did not actually filter for the current silo, allowing any user to fetch any pool, and b) took a spurious `project` query param that didn't do anything.

DB
The association between IP pools and fleet or silo (or eventually projects, but not here) is now modeled through a polymorphic join table called `ip_pool_resource`:

(example rows omitted; the columns are the pool ID, `resource_type`, `resource_id`, and `is_default`)

Now, instead of setting the association with a silo or fleet at IP pool create or update time, there are separate endpoints for adding and removing an association. A pool can be associated with any number of resources, but a unique index ensures that a given resource can only have one default pool.
Default IP pool logic
If an instance ephemeral IP or a floating IP is created with a pool specified, we simply use that pool if it exists and is linked to the user's silo.
If an instance ephemeral IP or a floating IP is created without a pool specified, we look for a default pool for the current silo. If there is a pool linked with the current silo with `is_default = true`, use that. Otherwise, there is no default pool for the given scope and IP allocation will fail, which means the instance create or floating IP create request will fail.

The difference introduced in this PR is that we do not fall back to a fleet default if there is no silo default, because we have removed the concept of a fleet-scoped pool.
Tests and test helpers
This is the source of a lot of noise in this PR. Because there can no longer be a fleet default pool, we can no longer rely on one for tests. The test setup was really confusing: we assumed a default IP pool existed, but we still had to populate it (add a range) if we wanted to do anything with it. Now we don't assume it exists; we create it, add a range, and associate it with a silo all in one helper.
What do customers have to do when they upgrade?
They should not have to do anything at upgrade time.
If they were relying on a single fleet default pool to automatically be used by new silos, when they create silos in the future they will have to manually associate each new silo with the desired pool. We are working on ways to make that easier or more automatic, but that's not in this change. It is less urgent because silo creation is an infrequent operation.
If they are not using the previous fleet default IP pool (named `default`) and do not want it to exist, they can simply delete any IP ranges it contains, unlink it from all silos, and delete it. If they are not using it, there should not be any IPs allocated from it (which means they can delete it).