diff --git a/README.adoc b/README.adoc index c2473a7a07a..80753e030f3 100644 --- a/README.adoc +++ b/README.adoc @@ -208,12 +208,14 @@ We also use these OpenAPI documents as the source for the clients we generate using https://github.com/oxidecomputer/progenitor[Progenitor]. Clients are automatically updated when the corresponding OpenAPI document is modified. -There are currently two kinds of services based on how their corresponding documents are generated: *managed* and *unmanaged*. Eventually, all services within Omicron will transition to being managed. +OpenAPI documents are tracked by the `cargo xtask openapi` command. -* A *managed* service is tracked by the `cargo xtask openapi` command, using Dropshot's relatively new API trait functionality. -* An *unmanaged* service is defined the traditional way, by gluing together a set of implementation functions, and is tracked by an independent test. +* To regenerate all OpenAPI documents, run `cargo xtask openapi generate`. +* To check whether all OpenAPI documents are up-to-date, run `cargo xtask + openapi check`. -To check whether your document is managed, run `cargo xtask openapi list`: it will list out all managed OpenAPI documents. If your document is not on the list, it is unmanaged. +For more information, see the documentation in +link:./dev-tools/openapi-manager[`dev-tools/openapi-manager`]. Note that Omicron contains a nominally circular dependency: @@ -225,33 +227,6 @@ Note that Omicron contains a nominally circular dependency: We effectively "break" this circular dependency by virtue of the OpenAPI documents being checked in. -==== Updating or Creating New Managed Services - -See the documentation in link:./dev-tools/openapi-manager[`dev-tools/openapi-manager`] for more information. - -==== Updating Unmanaged Services - -In general, changes to unmanaged service APs **require the following set of build steps**: - -. Make changes to the service API. -.
Update the OpenAPI document by running the relevant test with overwrite set: - `EXPECTORATE=overwrite cargo nextest run -p -- test_nexus_openapi_internal` - (changing the package name and test name as necessary). It's important to do - this _before_ the next step. -. This will cause the generated client to be updated which may break the build - for dependent consumers. -. Modify any dependent services to fix calls to the generated client. - -Note that if you make changes to both Nexus and Sled Agent simultaneously, you -may end up in a spot where neither can build and therefore neither OpenAPI -document can be generated. In this case, revert or comment out changes in one -so that the OpenAPI document can be generated. - -This is a particular problem if you find yourself resolving merge conflicts in the generated files. You have basically two options for this: - -* Resolve the merge conflicts by hand. This is usually not too bad in practice. -* Take the upstream copy of the file, back out your client side changes (`git stash` and its `-p` option can be helpful for this), follow the steps above to regenerate the file using the automated test, and finally re-apply your changes to the client side. This is essentially getting yourself back to step 1 above and then following the procedure above. - === Resolving merge conflicts in Cargo.lock When pulling in new changes from upstream "main", you may find conflicts in Cargo.lock. The easiest way to deal with these is usually to take the upstream changes as-is, then trigger any Cargo operation that updates the lockfile. `cargo metadata` is a quick one. 
Here's an example: diff --git a/common/src/api/internal/nexus.rs b/common/src/api/internal/nexus.rs index 47f0d7c59d2..cec44d3f438 100644 --- a/common/src/api/internal/nexus.rs +++ b/common/src/api/internal/nexus.rs @@ -286,31 +286,15 @@ pub struct UpdateArtifactId { // Adding a new KnownArtifactKind // =============================== // -// Adding a new update artifact kind is a tricky process. To do so: +// To add a new kind of update artifact: // // 1. Add it here. +// 2. Regenerate OpenAPI documents with `cargo xtask openapi generate` -- this +// should work without any compile errors. +// 3. Run `cargo check --all-targets` to resolve compile errors. // -// 2. Add the new kind to /clients/src/lib.rs. -// The mapping from `UpdateArtifactKind::*` to `types::UpdateArtifactKind::*` -// must be left as a `todo!()` for now; `types::UpdateArtifactKind` will not -// be updated with the new variant until step 5 below. -// -// 4. Add the new kind and the mapping to its `update_artifact_kind` to -// /nexus/db-model/src/update_artifact.rs -// -// 5. Regenerate the OpenAPI specs for nexus and sled-agent: -// -// ``` -// EXPECTORATE=overwrite cargo nextest run -p omicron-nexus -p omicron-sled-agent openapi -// ``` -// -// 6. Return to /{nexus-client,sled-agent-client}/lib.rs from step 2 -// and replace the `todo!()`s with the new `types::UpdateArtifactKind::*` -// variant. -// -// See https://github.com/oxidecomputer/omicron/pull/2300 as an example. -// -// NOTE: KnownArtifactKind has to be in snake_case due to openapi-lint requirements. +// NOTE: KnownArtifactKind has to be in snake_case due to openapi-lint +// requirements. /// Kinds of update artifacts, as used by Nexus to determine what updates are available and by /// sled-agent to determine how to apply an update when asked. 
diff --git a/dev-tools/openapi-manager/README.adoc b/dev-tools/openapi-manager/README.adoc index 1aadaa2c0cf..e6b28b44f64 100644 --- a/dev-tools/openapi-manager/README.adoc +++ b/dev-tools/openapi-manager/README.adoc @@ -4,19 +4,15 @@ This tool manages the OpenAPI documents (JSON files) checked into Omicron's `ope NOTE: For more information about API traits, see https://rfd.shared.oxide.computer/rfd/0479[RFD 479]. -Currently, a subset of OpenAPI documents is managed by this tool. Eventually, all of the OpenAPI documents in Omicron will be managed by this tool; work to make that happen is ongoing. - -To check whether your document is managed, run `cargo xtask openapi list`: it will list out all managed OpenAPI documents. If your document is not on the list, it is unmanaged. - == Basic usage The OpenAPI manager is meant to be invoked via `cargo xtask openapi`. Currently, three commands are provided: -* `cargo xtask openapi list`: List information about currently-managed documents. -* `cargo xtask openapi check`: Check that all of the managed documents are up-to-date. +* `cargo xtask openapi list`: List information about OpenAPI documents. +* `cargo xtask openapi check`: Check that all of the documents are up-to-date. * `cargo xtask openapi generate`: Update and generate OpenAPI documents. -There is also a test which makes sure that all managed documents are up-to-date, and tells you to run `cargo xtask openapi generate` if they aren't. +There is also a test which makes sure that all documents are up-to-date, and tells you to run `cargo xtask openapi generate` if they aren't. === API crates [[api_crates]] @@ -49,40 +45,13 @@ In the implementation crate: . Add a dependency on the API crate. . Following the example in https://rfd.shared.oxide.computer/rfd/0479#guide_api_implementation[RFD 479's _API implementation_], provide an implementation of the trait. -Once the API crate is defined, perform the steps in <> below. 
- -=== Converting existing documents - -Existing, unmanaged documents are generated via *function-based servers*: a set of functions that some code combines into a Dropshot `ApiDescription`. (There is also likely an expectorate test which ensures that the document is up-to-date.) - -The first step is to convert the function-based server into an API trait. To do so, create an API crate (see <> above). - -. Add the API crate to the workspace's `Cargo.toml`: `members` and `default-members`, and a reference in `[workspace.dependencies]`. -. Follow the instructions in https://rfd.shared.oxide.computer/rfd/0479#guide_converting_functions_to_traits[RFD 479's _Converting functions to API traits_] for the API crate. - -In the implementation crate: - -. Continue following the instructions in https://rfd.shared.oxide.computer/rfd/0479#guide_converting_functions_to_traits[RFD 479's _Converting functions to API traits_] for where the endpoint functions are currently defined. -. Find the test which currently manages the document (try searching the repo for `openapi_lint::validate`). If it performs any checks on the document beyond `openapi_lint::validate` or `openapi_lint::validate_external`, see <>. - -Next, perform the steps in <> below. - -Finally, remove: - -. The test which used to manage the document. The OpenAPI manager includes a test that will automatically run in CI. -. The binary subcommand (typically called `openapi`) that generated the OpenAPI document. The test was the only practical use of this subcommand. - -=== Adding the API crate to the manager [[add_to_manager]] - Once the API crate is defined, inform the OpenAPI manager of its existence. Within this directory: . In `Cargo.toml`, add a dependency on the API crate. . In `src/spec.rs`, add the crate to the `all_apis` function. (Please keep the list sorted by filename.) -To ensure everything works well, run `cargo xtask openapi generate`. 
- -* Your OpenAPI document should be generated on disk and listed in the output. -* If you're converting an existing API, the only changes should be the ones you might have introduced as part of the refactor. If there are significant changes, something's gone wrong--maybe you missed an endpoint? +To ensure everything works well, run `cargo xtask openapi generate`. Your +OpenAPI document should be generated on disk and listed in the output. ==== Performing extra validation [[extra_validation]] @@ -90,10 +59,19 @@ By default, the OpenAPI manager does basic validation on the generated document. It's best to put extra validation next to the trait, within the API crate. -. In the API crate, add dependencies on `anyhow` and `openapiv3`. -. Define a function with signature `fn extra_validation(openapi: &openapiv3::OpenAPI) -> anyhow::Result<()>` which performs the extra validation steps. +. In the API crate, add dependencies on `openapiv3` and `openapi-manager-types`. +. Define a function with signature `fn validate_api(spec: &openapiv3::OpenAPI, mut cx: openapi_manager_types::ValidationContext<'_>)` which performs the extra validation steps. . In `all_apis`, set the `extra_validation` field to this function. +Currently, the validator can do two things: + +. Via the `ValidationContext::report_error` function, report validation errors. +. Via the `ValidationContext::record_file_contents` function, assert the contents of other generated files. + +(This can be made richer as needed.) + +For an example, see `validate_api` in the `nexus-external-api` crate. + == Design notes The OpenAPI manager uses the new support for Dropshot API traits described in https://rfd.shared.oxide.computer/rfd/0479[RFD 479].
diff --git a/docs/adding-an-endpoint.adoc b/docs/adding-an-endpoint.adoc index cebaae4c525..d9e5c559b40 100644 --- a/docs/adding-an-endpoint.adoc +++ b/docs/adding-an-endpoint.adoc @@ -12,17 +12,21 @@ NOTE: This guide is not intended to be exhaustive, or even particularly detailed. For that, refer to the documentation which exists in the codebase -- this document should act as a jumping-off point. -=== **HTTP** -* Add endpoints for either the internal or external API -** xref:../nexus/src/external_api/http_entrypoints.rs[The External API] is customer-facing, and provides interfaces for both developers and operators -** xref:../nexus/src/internal_api/http_entrypoints.rs[The Internal API] is internal, and provides interfaces for services on the Oxide rack (such as the Sled Agent) to call -** Register endpoints in the `register_endpoints` method (https://github.com/oxidecomputer/omicron/blob/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/nexus/src/external_api/http_entrypoints.rs#L84[Example]) +== **HTTP** + +* Add endpoint _definitions_ for either the internal or external API +** xref:../nexus/external-api/src/lib.rs[The External API] is customer-facing, and provides interfaces for both developers and operators +** xref:../nexus/internal-api/src/lib.rs[The Internal API] is internal, and provides interfaces for services on the Oxide rack (such as the Sled Agent) to call +* Add the corresponding _implementations_ to the respective `http_entrypoints.rs` files: +** xref:../nexus/src/external_api/http_entrypoints.rs[The External API's `http_entrypoints.rs`] +** xref:../nexus/src/internal_api/http_entrypoints.rs[The Internal API's `http_entrypoints.rs`] ** These endpoints typically call into the *Application* layer, and do not access the database directly * Inputs and Outputs ** Input parameters are defined in https://github.com/oxidecomputer/omicron/blob/main/nexus/types/src/external_api/params.rs[params.rs] 
(https://github.com/oxidecomputer/omicron/blob/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/nexus/types/src/external_api/params.rs#L587-L601[Example]) ** Output views are defined in https://github.com/oxidecomputer/omicron/blob/main/nexus/types/src/external_api/views.rs[views.rs] (https://github.com/oxidecomputer/omicron/blob/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/nexus/types/src/external_api/views.rs#L270-L274[Example]) -=== **Lookup & Authorization** +== **Lookup & Authorization** + * Declare a new resource-to-be-looked-up via `lookup_resource!` in xref:../nexus/src/db/lookup.rs[lookup.rs] (https://github.com/oxidecomputer/omicron/blob/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/nexus/src/db/lookup.rs#L557-L564[Example]) ** This defines a new struct named after your resource, with some https://github.com/oxidecomputer/omicron/blob/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/nexus/db-macros/src/lookup.rs#L521-L628[auto-generated methods], including `lookup_for` (look up the authz object), `fetch_for` (look up and return the object), and more * Add helper functions to `LookupPath` to make it possible to fetch the resource by either UUID or name (https://github.com/oxidecomputer/omicron/blob/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/nexus/src/db/lookup.rs#L225-L237[Example]) @@ -32,12 +36,14 @@ this document should act as a jumping-off point. 
** If you define `polar_snippet = Custom`, you should edit the omicron.polar file to describe the authorization policy for your object (https://github.com/oxidecomputer/omicron/blob/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/nexus/src/authz/omicron.polar#L376-L393[Example]) * Either way, you should add a reference to the new resource when https://github.com/oxidecomputer/omicron/blob/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/nexus/src/authz/oso_generic.rs#L119-L148[constructing the Oso structure] -=== **Application** +== **Application** + * Add any "business logic" for the resource to xref:../nexus/src/app[the app directory] * This layer bridges the gap between the database and external services. * If your application logic involves any multi-step operations which would be interrupted by Nexus stopping mid-execution (due to reboot, crash, failure, etc), it is recommended to use a https://github.com/oxidecomputer/omicron/tree/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/nexus/src/app/sagas[saga] to define the operations durably. -=== **Database** +== **Database** + * `CREATE TABLE` for the resource in xref:../schema/crdb/dbinit.sql[dbinit.sql] (https://github.com/oxidecomputer/omicron/blob/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/common/src/sql/dbinit.sql#L1103-L1129[Example]) * Add an equivalent schema for the resource in xref:../nexus/db-model/src/schema.rs[schema.rs], which allows https://docs.diesel.rs/master/diesel/index.html[Diesel] to translate raw SQL to rust queries (https://github.com/oxidecomputer/omicron/blob/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/nexus/db-model/src/schema.rs#L144-L155[Example]) * Add a Rust representation of the database object to xref:../nexus/db-model/src[the DB model] (https://github.com/oxidecomputer/omicron/blob/1dfe47c1b3122bc4f32a9c517cb31b1600581ea2/nexus/db-model/src/ip_pool.rs#L24-L40[Example]) @@ -48,22 +54,10 @@ this document should act as a jumping-off point.
* Authorization ** There exists a https://github.com/oxidecomputer/omicron/blob/main/nexus/src/authz/policy_test[policy test] which compares all Oso objects against an expected policy. New resources are usually added to https://github.com/oxidecomputer/omicron/blob/main/nexus/src/authz/policy_test/resources.rs[resources.rs] to get coverage. -* openapi -** Nexus generates a new openapi spec from the dropshot endpoints. If you modify endpoints, you'll need to update openapi JSON files. -*** The following commands may be used to update APIs: -+ -[source, rust] ----- -$ cargo run -p omicron-nexus --bin nexus -- -I nexus/examples/config.toml > openapi/nexus-internal.json -$ cargo run -p omicron-nexus --bin nexus -- -O nexus/examples/config.toml > openapi/nexus.json -$ cargo run -p omicron-sled-agent --bin sled-agent -- openapi > openapi/sled-agent.json ----- -*** Alternative, you can run: -+ -[source, rust] ----- -$ EXPECTORATE=overwrite cargo test_nexus_openapi test_nexus_openapi_internal test_sled_agent_openapi_sled ----- +* OpenAPI +** Once you've added or changed endpoint definitions in `nexus-external-api` or `nexus-internal-api`, you'll need to update the corresponding OpenAPI documents (the JSON files in `openapi/`). +** To update all OpenAPI documents, run `cargo xtask openapi generate`. +** This does not require you to provide an implementation, or to get either omicron-nexus or omicron-sled-agent to compile: just the definition in the API crate is sufficient. * Integration Tests ** Nexus' https://github.com/oxidecomputer/omicron/tree/main/nexus/tests/integration_tests[integration tests] are used to cross the HTTP interface for testing. Typically, one file is used "per-resource". *** These tests use a simulated Sled Agent, and keep the "Nexus" object in-process, so it can still be accessed and modified for invasive testing. 
diff --git a/docs/crdb-upgrades.adoc b/docs/crdb-upgrades.adoc index 52231ee1995..66133454653 100644 --- a/docs/crdb-upgrades.adoc +++ b/docs/crdb-upgrades.adoc @@ -64,7 +64,7 @@ a tick, but they must occur in that order.) of CockroachDB versions: + .... -EXPECTORATE=overwrite cargo nextest run -p omicron-nexus -- integration_tests::commands::test_nexus_openapi_internal +cargo xtask openapi generate .... . Run the full test suite, which should catch any unexpected SQL compatibility issues between releases and help validate that your