diff --git a/docs/deploy-guide/index.md b/docs/deploy-guide/index.md
index 7c248feb5..b6202384c 100644
--- a/docs/deploy-guide/index.md
+++ b/docs/deploy-guide/index.md
@@ -30,17 +30,20 @@ flowchart TD
B[Global] --> E[Region N];
```
-A fully functioning system only needs one Management environment, one Global
-environment and one or more Regions. In this configuration, the Management
-environment is responsible for utilizing the [GitOps][gitops] tool to deploy
-the expected state to all other environments. The Global environment is
+A fully functioning system only needs one _Management_ environment, one _Global_
+environment, and one or more _Region_ environments. In this configuration,
+the _Management_ environment is responsible for using our [GitOps][gitops]
+tool, [ArgoCD][argocd], to deploy the expected state to all other environments.
+The _Global_ environment is
responsible for hosting any services that are expected to exist only once
-for a whole system deployment such as the DCIM/IPAM tool. While the Region
+for a whole system deployment, such as the DCIM/IPAM tool. The _Region_
environments will run the tools and services that need to live close to the
actual hardware.
-In fact, one Management environment can control multiple systems; for example,
-a staging environment and a production environment.
+In fact, one _Management_ environment can control multiple _Global_ environments
+and their associated _Region_ environments. We call the grouping of a _Global_
+environment and its associated _Region_ environments a _partition_. An example
+would be a staging partition and a production partition.
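+In ArgoCD terms, a partition can be modeled as the set of registered clusters
+that share a common label, letting one ApplicationSet fan a deployment out to
+every cluster in that partition. The sketch below is purely illustrative: the
+label key, repository URL, and paths are assumptions, not actual UnderStack
+conventions.
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: ApplicationSet
+metadata:
+  name: dcim-staging
+spec:
+  generators:
+    # Select every cluster registered in ArgoCD that carries the
+    # (hypothetical) partition label for staging.
+    - clusters:
+        selector:
+          matchLabels:
+            understack.example/partition: staging
+  template:
+    metadata:
+      name: 'dcim-{{name}}'  # one Application per matching cluster
+    spec:
+      project: default
+      source:
+        repoURL: https://github.com/example/deploy.git  # placeholder repo
+        targetRevision: main
+        path: apps/dcim
+      destination:
+        server: '{{server}}'
+        namespace: dcim
+```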
```mermaid
flowchart TD
@@ -49,10 +52,11 @@ flowchart TD
B[Global] --> C[Region A];
B[Global] --> D[Region B...];
B[Global] --> E[Region N];
- A[Management] --> |prod| F[Global];
- F[Global] --> G[Region A];
- F[Global] --> H[Region B...];
- F[Global] --> I[Region N];
+ A[Management] --> |production| F[Global];
+ F[Global] --> G[Region D];
+ F[Global] --> H[Region E...];
+ F[Global] --> I[Region Z];
```
+[argocd]: https://argo-cd.readthedocs.io/
[gitops]:
diff --git a/docs/deploy-guide/install-understack-ubuntu-k3s.md b/docs/deploy-guide/install-understack-ubuntu-k3s.md
deleted file mode 100644
index 766dea95f..000000000
--- a/docs/deploy-guide/install-understack-ubuntu-k3s.md
+++ /dev/null
@@ -1,407 +0,0 @@
-# Installing UnderStack on Ubuntu 22.04 + K3s
-
-## Get UnderStack
-
-First, let's git clone the understack repo:
-
-```bash
-git clone https://github.com/rackerlabs/understack.git
-```
-
-## Install Pre-requisites
-
-Install some packages we'll need later and some useful troubleshooting utilities.
-
-```bash
-apt-get -y install curl jq net-tools telnet git apt-transport-https wget
-```
-
-## Update Ubuntu
-
-Update to the latest ubuntu packages and reboot if necessary.
-
-```bash
-apt-get -y update
-```
-
-## Install K3s
-
-We're using k3s for a lightweight kubernetes install.
-
-Note the `INSTALL_K3S_EXEC` options used:
-
-* Disable traefik servicelb because we're using ingress-nginx and MetalLB
-* Change cluster-cidr and service-cidr options because k3s defaults to using 10.x.x.x IP ranges, but we're already using 10.x.x.x internally
-* Add a node label `openstack-control-plane=enabled` which is needed for the OpenStack components
-
-```bash
-curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik --disable=servicelb --cluster-cidr=172.20.0.0/16 --service-cidr=172.21.0.0/16 --node-label=openstack-control-plane=enabled" sh -
-```
-
-References:
-
-* [https://docs.k3s.io/](https://docs.k3s.io/)
-
-## Install Helm
-
-The K3s installer will install kubectl, but we'll also need helm for the UnderStack install.
-
-```bash
-curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
-sudo apt-get install apt-transport-https --yes
-echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
-sudo apt-get update
-sudo apt-get install helm
-```
-
-References:
-
-* [https://helm.sh/docs/intro/install/](https://helm.sh/docs/intro/install/)
-
-## Install Kustomize
-
-```bash
-curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
-sudo mv kustomize /usr/bin
-```
-
-References:
-
-* [https://kubectl.docs.kubernetes.io/installation/kustomize/](https://kubectl.docs.kubernetes.io/installation/kustomize/)
-
-## Install Kubeseal
-
-```bash
-wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.26.0/kubeseal-0.26.0-linux-amd64.tar.gz
-tar xzf kubeseal-0.26.0-linux-amd64.tar.gz
-sudo mv kubeseal /usr/bin
-```
-
-References:
-
-* [https://github.com/bitnami-labs/sealed-secrets?tab=readme-ov-file#installation](https://github.com/bitnami-labs/sealed-secrets?tab=readme-ov-file#installation)
-
-## Test Kubernetes
-
-K3s installer should give us a working kubernetes.
-
-The kubectl config from k3s is in `/etc/rancher/k3s/k3s.yaml` and kubectl will automatically use it.
-
-See everything running in the new k3s kubernetes cluster:
-
-```bash
-kubectl get all --all-namespaces
-```
-
-## Install UnderStack
-
-Get the repo:
-
-```bash
-git clone https://github.com/rackerlabs/understack.git
-```
-
-References:
-
-* [https://github.com/rackerlabs/understack](https://github.com/rackerlabs/understack)
-
-### Bootstrap UnderStack
-
-Run the initial bootstrap:
-
-```bash
-./bootstrap/bootstrap.sh
-```
-
-Wait a couple minutes for it to finish bootstrapping. Then:
-
-```bash
-kubectl -n argocd apply -k apps/operators/
-```
-
-Generate secrets:
-
-```bash
-# (optional) copy rancher kubectl config to ~/.kube/config
-# cp /etc/rancher/k3s/k3s.yaml /root/.kube/config && chmod go-rwx /root/.kube/config
-
-# generate secrets
-./scripts/easy-secrets-gen.sh
-
-# make the namespaces where the secrets will live
-kubectl create ns openstack
-kubectl create ns nautobot
-```
-
-```bash
-kubectl -n argocd apply -k apps/components/
-```
-
-### Bootstrap: Phase 1 Complete
-
-After a couple minutes, the initial UnderStack bootstrap phase will complete,
-and your cluster should look similar to the output below.
-
-Notice we have the following components available:
-
-* mariadb
-* postgres
-* rabbitmq
-* argo workflows
-* argo cd
-* ingress-nginx
-* nautobot
-* cert-manager
-
-```bash
-# kubectl get all --all-namespaces
-NAMESPACE NAME READY STATUS RESTARTS AGE
-kube-system pod/local-path-provisioner-84db5d44d9-ft4s5 1/1 Running 0 169m
-kube-system pod/coredns-6799fbcd5-4swtb 1/1 Running 0 169m
-kube-system pod/svclb-traefik-986d0605-wvkcp 2/2 Running 0 169m
-kube-system pod/helm-install-traefik-crd-zfbpj 0/1 Completed 0 169m
-kube-system pod/helm-install-traefik-4ngnx 0/1 Completed 1 169m
-kube-system pod/traefik-f4564c4f4-mlqbz 1/1 Running 0 169m
-kube-system pod/metrics-server-67c658944b-22r2s 1/1 Running 0 169m
-kube-system pod/svclb-ingress-nginx-controller-169e70b9-b769n 0/2 Pending 0 81m
-argocd pod/argo-cd-argocd-redis-58779b9ddf-vxd82 1/1 Running 0 81m
-ingress-nginx pod/ingress-nginx-admission-create-zpxr7 0/1 Completed 0 81m
-ingress-nginx pod/ingress-nginx-admission-patch-xxbhl 0/1 Completed 1 81m
-kube-system pod/sealed-secrets-controller-58bfb4d565-vwlrz 1/1 Running 0 81m
-argocd pod/argo-cd-argocd-repo-server-889b6979c-7vgbv 1/1 Running 0 81m
-argocd pod/argo-cd-argocd-server-7c665bdb99-7jrk2 1/1 Running 0 81m
-argocd pod/argo-cd-argocd-application-controller-0 1/1 Running 0 81m
-ingress-nginx pod/ingress-nginx-controller-6858749594-svlmt 1/1 Running 0 81m
-cert-manager pod/cert-manager-5c9d8879fd-msnhr 1/1 Running 0 32m
-cert-manager pod/cert-manager-cainjector-6cc9b5f678-ksnz7 1/1 Running 0 32m
-cert-manager pod/cert-manager-webhook-7bb7b75848-mw7xl 1/1 Running 0 32m
-rabbitmq-system pod/rabbitmq-cluster-operator-ccf488f4c-8jntf 1/1 Running 0 13m
-rabbitmq-system pod/messaging-topology-operator-85486d7848-ss9wv 1/1 Running 0 13m
-postgres-operator pod/pgo-6d794c46cf-nddkq 1/1 Running 0 13m
-mariadb-operator pod/mariadb-operator-5644c8d7df-w4wj9 1/1 Running 0 13m
-mariadb-operator pod/mariadb-operator-webhook-74f4b57d9d-z5hz7 1/1 Running 0 13m
-mariadb-operator pod/mariadb-operator-cert-controller-6586cb7db6-dfhlt 1/1 Running 0 13m
-argo pod/workflow-controller-954b4d959-vr2fg 1/1 Running 0 9m9s
-argo-events pod/events-webhook-984788f96-rqmzm 1/1 Running 0 9m9s
-argo-events pod/controller-manager-5d97d79554-sn6tf 1/1 Running 0 9m9s
-nautobot pod/nautobot-repo-host-0 2/2 Running 0 9m9s
-argo pod/argo-server-5df77fdc67-mzm8p 1/1 Running 0 9m9s
-openstack pod/memcached-56458c6c9c-k855d 2/2 Running 0 8m55s
-openstack pod/mariadb-0 1/1 Running 0 9m10s
-nautobot pod/nautobot-redis-master-0 1/1 Running 0 8m55s
-nautobot pod/nautobot-celery-default-545d857c5-4lqsl 1/1 Running 2 (8m35s ago) 9m10s
-openstack pod/rabbitmq-server-0 1/1 Running 0 9m9s
-nautobot pod/nautobot-backup-sg6f-fq2m8 0/1 Completed 0 8m45s
-nautobot pod/nautobot-default-844d45bf7-pthnd 1/1 Running 0 9m10s
-nautobot pod/nautobot-default-844d45bf7-8blht 1/1 Running 0 9m10s
-nautobot pod/nautobot-celery-beat-7764fb8b6c-vwhkp 1/1 Running 5 (6m43s ago) 9m10s
-nautobot pod/nautobot-instance1-v8mc-0 4/4 Running 0 9m9s
-
-NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-default service/kubernetes ClusterIP 10.43.0.1 443/TCP 170m
-kube-system service/kube-dns ClusterIP 10.43.0.10 53/UDP,53/TCP,9153/TCP 169m
-kube-system service/metrics-server ClusterIP 10.43.140.224 443/TCP 169m
-kube-system service/traefik LoadBalancer 10.43.240.66 172.27.232.20 80:31301/TCP,443:31743/TCP 169m
-argocd service/argo-cd-argocd-redis ClusterIP 10.43.50.168 6379/TCP 81m
-argocd service/argo-cd-argocd-repo-server ClusterIP 10.43.55.16 8081/TCP 81m
-argocd service/argo-cd-argocd-server ClusterIP 10.43.136.60 80/TCP,443/TCP 81m
-ingress-nginx service/ingress-nginx-controller LoadBalancer 10.43.247.18 80:31390/TCP,443:31335/TCP 81m
-ingress-nginx service/ingress-nginx-controller-admission ClusterIP 10.43.138.89 443/TCP 81m
-kube-system service/sealed-secrets-controller ClusterIP 10.43.205.142 8080/TCP 81m
-kube-system service/sealed-secrets-controller-metrics ClusterIP 10.43.51.95 8081/TCP 81m
-cert-manager service/cert-manager ClusterIP 10.43.206.85 9402/TCP 32m
-cert-manager service/cert-manager-webhook ClusterIP 10.43.155.21 443/TCP 32m
-rabbitmq-system service/webhook-service ClusterIP 10.43.94.7 443/TCP 13m
-mariadb-operator service/mariadb-operator-webhook ClusterIP 10.43.145.93 443/TCP 13m
-nautobot service/nautobot-pods ClusterIP None 9m10s
-openstack service/rabbitmq-nodes ClusterIP None 4369/TCP,25672/TCP 9m10s
-openstack service/mariadb-internal ClusterIP None 3306/TCP 9m10s
-openstack service/rabbitmq ClusterIP 10.43.212.13 15672/TCP,15692/TCP,5672/TCP 9m10s
-openstack service/mariadb ClusterIP 10.43.138.242 3306/TCP 9m10s
-nautobot service/nautobot-default ClusterIP 10.43.76.64 443/TCP,80/TCP 9m10s
-nautobot service/nautobot-ha ClusterIP 10.43.107.193 5432/TCP 9m10s
-nautobot service/nautobot-primary ClusterIP None 5432/TCP 9m10s
-nautobot service/nautobot-replicas ClusterIP 10.43.154.155 5432/TCP 9m9s
-nautobot service/nautobot-ha-config ClusterIP None 9m9s
-argo service/argo-server ClusterIP 10.43.20.228 2746/TCP 9m9s
-argo-events service/events-webhook ClusterIP 10.43.41.70 443/TCP 9m9s
-openstack service/memcached-metrics ClusterIP 10.43.95.160 9150/TCP 8m55s
-openstack service/memcached ClusterIP 10.43.122.133 11211/TCP 8m55s
-nautobot service/nautobot-redis-headless ClusterIP None 6379/TCP 8m55s
-nautobot service/nautobot-redis-master ClusterIP 10.43.196.127 6379/TCP 8m55s
-
-NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-kube-system daemonset.apps/svclb-traefik-986d0605 1 1 1 1 1 169m
-kube-system daemonset.apps/svclb-ingress-nginx-controller-169e70b9 1 1 0 1 0 81m
-
-NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
-kube-system deployment.apps/local-path-provisioner 1/1 1 1 169m
-kube-system deployment.apps/coredns 1/1 1 1 169m
-kube-system deployment.apps/traefik 1/1 1 1 169m
-kube-system deployment.apps/metrics-server 1/1 1 1 169m
-argocd deployment.apps/argo-cd-argocd-redis 1/1 1 1 81m
-argocd deployment.apps/argo-cd-argocd-repo-server 1/1 1 1 81m
-argocd deployment.apps/argo-cd-argocd-server 1/1 1 1 81m
-ingress-nginx deployment.apps/ingress-nginx-controller 1/1 1 1 81m
-cert-manager deployment.apps/cert-manager 1/1 1 1 32m
-cert-manager deployment.apps/cert-manager-cainjector 1/1 1 1 32m
-cert-manager deployment.apps/cert-manager-webhook 1/1 1 1 32m
-kube-system deployment.apps/sealed-secrets-controller 1/1 1 1 81m
-rabbitmq-system deployment.apps/rabbitmq-cluster-operator 1/1 1 1 13m
-rabbitmq-system deployment.apps/messaging-topology-operator 1/1 1 1 13m
-postgres-operator deployment.apps/pgo 1/1 1 1 13m
-mariadb-operator deployment.apps/mariadb-operator 1/1 1 1 13m
-mariadb-operator deployment.apps/mariadb-operator-webhook 1/1 1 1 13m
-mariadb-operator deployment.apps/mariadb-operator-cert-controller 1/1 1 1 13m
-argo deployment.apps/workflow-controller 1/1 1 1 9m9s
-argo-events deployment.apps/events-webhook 1/1 1 1 9m9s
-argo-events deployment.apps/controller-manager 1/1 1 1 9m9s
-argo deployment.apps/argo-server 1/1 1 1 9m9s
-openstack deployment.apps/memcached 1/1 1 1 8m55s
-nautobot deployment.apps/nautobot-celery-default 1/1 1 1 9m10s
-nautobot deployment.apps/nautobot-default 2/2 2 2 9m10s
-nautobot deployment.apps/nautobot-celery-beat 1/1 1 1 9m10s
-
-NAMESPACE NAME DESIRED CURRENT READY AGE
-kube-system replicaset.apps/local-path-provisioner-84db5d44d9 1 1 1 169m
-kube-system replicaset.apps/coredns-6799fbcd5 1 1 1 169m
-kube-system replicaset.apps/traefik-f4564c4f4 1 1 1 169m
-kube-system replicaset.apps/metrics-server-67c658944b 1 1 1 169m
-argocd replicaset.apps/argo-cd-argocd-redis-58779b9ddf 1 1 1 81m
-kube-system replicaset.apps/sealed-secrets-controller-58bfb4d565 1 1 1 81m
-argocd replicaset.apps/argo-cd-argocd-repo-server-889b6979c 1 1 1 81m
-argocd replicaset.apps/argo-cd-argocd-server-7c665bdb99 1 1 1 81m
-ingress-nginx replicaset.apps/ingress-nginx-controller-6858749594 1 1 1 81m
-cert-manager replicaset.apps/cert-manager-5c9d8879fd 1 1 1 32m
-cert-manager replicaset.apps/cert-manager-cainjector-6cc9b5f678 1 1 1 32m
-cert-manager replicaset.apps/cert-manager-webhook-7bb7b75848 1 1 1 32m
-rabbitmq-system replicaset.apps/rabbitmq-cluster-operator-ccf488f4c 1 1 1 13m
-rabbitmq-system replicaset.apps/messaging-topology-operator-85486d7848 1 1 1 13m
-postgres-operator replicaset.apps/pgo-6d794c46cf 1 1 1 13m
-mariadb-operator replicaset.apps/mariadb-operator-5644c8d7df 1 1 1 13m
-mariadb-operator replicaset.apps/mariadb-operator-webhook-74f4b57d9d 1 1 1 13m
-mariadb-operator replicaset.apps/mariadb-operator-cert-controller-6586cb7db6 1 1 1 13m
-argo replicaset.apps/workflow-controller-954b4d959 1 1 1 9m9s
-argo-events replicaset.apps/events-webhook-984788f96 1 1 1 9m9s
-argo-events replicaset.apps/controller-manager-5d97d79554 1 1 1 9m9s
-argo replicaset.apps/argo-server-5df77fdc67 1 1 1 9m9s
-openstack replicaset.apps/memcached-56458c6c9c 1 1 1 8m55s
-nautobot replicaset.apps/nautobot-celery-default-545d857c5 1 1 1 9m10s
-nautobot replicaset.apps/nautobot-default-844d45bf7 2 2 2 9m10s
-nautobot replicaset.apps/nautobot-celery-beat-7764fb8b6c 1 1 1 9m10s
-
-NAMESPACE NAME READY AGE
-argocd statefulset.apps/argo-cd-argocd-application-controller 1/1 81m
-nautobot statefulset.apps/nautobot-repo-host 1/1 9m9s
-nautobot statefulset.apps/nautobot-instance1-v8mc 1/1 9m9s
-openstack statefulset.apps/mariadb 1/1 9m10s
-nautobot statefulset.apps/nautobot-redis-master 1/1 8m55s
-openstack statefulset.apps/rabbitmq-server 1/1 9m9s
-
-NAMESPACE NAME COMPLETIONS DURATION AGE
-kube-system job.batch/helm-install-traefik-crd 1/1 10s 169m
-kube-system job.batch/helm-install-traefik 1/1 13s 169m
-ingress-nginx job.batch/ingress-nginx-admission-create 1/1 6s 81m
-ingress-nginx job.batch/ingress-nginx-admission-patch 1/1 7s 81m
-nautobot job.batch/nautobot-backup-sg6f 1/1 2m48s 8m45s
-```
-
-## Install UnderStack Components
-
-[https://github.com/rackerlabs/understack/blob/main/components/keystone/README.md](https://github.com/rackerlabs/understack/blob/main/components/keystone/README.md)
-
-### OpenStack Pre-requisites
-
-```bash
-# add the OpenStack Helm repo we can install from
-helm repo add osh https://tarballs.opendev.org/openstack/openstack-helm/
-```
-
-Load the secrets values file from the cluster:
-
-```bash
-./scripts/gen-os-secrets.sh secret-openstack.yaml
-```
-
-Label the kubernetes nodes as being openstack enabled:
-
-```bash
-kubectl label node $(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') openstack-control-plane=enabled
-```
-
-### Keystone
-
-Install keystone:
-
-```bash
-helm --namespace openstack install \
- keystone \
- osh/keystone \
- -f components/openstack-2024.1-jammy.yaml \
- -f components/keystone/aio-values.yaml \
- -f secret-openstack.yaml
-```
-
-Install the openstack admin client:
-
-```bash
-kubectl -n openstack apply -f https://raw.githubusercontent.com/rackerlabs/genestack/main/manifests/utils/utils-openstack-client-admin.yaml
-```
-
-Test if it's working:
-
-```bash
-kubectl exec -it openstack-admin-client -n openstack -- openstack catalog list
-kubectl exec -it openstack-admin-client -n openstack -- openstack service list
-```
-
-References:
-
-* [https://github.com/rackerlabs/understack/blob/main/components/keystone/README.md](https://github.com/rackerlabs/understack/blob/main/components/keystone/README.md)
-
-### Ironic
-
-First we need to update the `./components/ironic/aio-values.yaml` file and adjust a
-setting to match our environment.
-
-Change the network.pxe.device to be the network device on the physical host you'll
-use for pxe network, for example a different network layout may use `eno2` for pxe.
-
-```yaml
-network:
- pxe:
- device: ens1f0
-```
-
-Install the OpenStack Ironic helm chart using our custom aio-values.yaml overrides:
-
-```bash
-helm --namespace openstack template \
- ironic \
- osh/ironic/ \
- -f components/ironic/aio-values.yaml \
- -f secret-openstack.yaml \
- | kubectl -n openstack apply -f -
-```
-
-Check if it's working:
-
-```bash
-kubectl exec -it openstack-admin-client -n openstack -- openstack baremetal driver list
-kubectl exec -it openstack-admin-client -n openstack -- openstack baremetal conductor list
-```
-
-If everything is working, you should see output similar to the following:
-
-```bash
-# kubectl exec -it openstack-admin-client -n openstack -- openstack baremetal conductor list
-+---------------------------------------------+-----------------+-------+
-| Hostname | Conductor Group | Alive |
-+---------------------------------------------+-----------------+-------+
-| 915966-utility01-ospcv2-iad.openstack.local | | True |
-+---------------------------------------------+-----------------+-------+
-```
-
-References:
-
-* [https://github.com/rackerlabs/understack/blob/main/components/ironic/README.md](https://github.com/rackerlabs/understack/blob/main/components/ironic/README.md)
diff --git a/mkdocs.yml b/mkdocs.yml
index faae47d4c..62793f5b6 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -113,7 +113,6 @@ nav:
- 'Deployment Guide':
- deploy-guide/index.md
- Quick Start: deploy-guide/gitops-install.md
- - deploy-guide/install-understack-ubuntu-k3s.md
- deploy-guide/auth.md
- deploy-guide/extra-regions.md
- deploy-guide/external-argocd.md