add docs on migrating to new ASO API
nojnhuh committed Nov 22, 2024
1 parent 78e1385 commit d711276
Showing 3 changed files with 36 additions and 6 deletions.
2 changes: 1 addition & 1 deletion .markdownlinkcheck.json
@@ -1,7 +1,7 @@
{
"ignorePatterns": [
{ "pattern": "^https://calendar.google.com/calendar" },
{ "pattern": "^../reference/" }
{ "pattern": "^\.\.?/" }
],
"httpHeaders": [{
"comment": "Workaround as suggested here: https://github.com/tcort/markdown-link-check/issues/201",
11 changes: 6 additions & 5 deletions docs/book/src/managed/adopting-clusters.md
@@ -2,7 +2,6 @@

### Option 1: Using the new AzureASOManaged API

<!-- markdown-link-check-disable-next-line -->
The [AzureASOManagedControlPlane and related APIs](./asomanagedcluster.md) support
adoption as a first-class use case. Going forward, this method is likely to be easier, more reliable, more
fully featured, and better supported for adopting AKS clusters than Option 2 below.
@@ -15,10 +14,10 @@ and AzureASOManagedMachinePools. The [`asoctl import
azure-resource`](https://azure.github.io/azure-service-operator/tools/asoctl/#import-azure-resource) command
can help generate the required YAML.
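
As a rough sketch (the ARM resource ID and output file name below are placeholders, and flag spellings may
differ between `asoctl` versions), the command can be pointed at an existing AKS cluster and the generated
YAML reviewed before it is embedded in the CAPZ resources' `spec.resources`:

```bash
# Export an existing AKS cluster as ASO YAML. The resource ID is a placeholder;
# substitute your own subscription, resource group, and managed cluster name.
asoctl import azure-resource \
  "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>" \
  --output aks-resources.yaml
```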

Caveats:
- The `asoctl import azure-resource` command has at least [one known
bug](https://github.com/Azure/azure-service-operator/issues/3805) requiring the YAML it generates to be
edited before it can be applied to a cluster.
This method can also be used to [migrate](./asomanagedcluster.md#migrating-existing-clusters-to-azureasomanagedcontrolplane) from AzureManagedControlPlane and its associated APIs.

#### Caveats

- CAPZ currently records in the CAPZ resources' `spec.resources` only the ASO resources that it needs to
function, which include the ManagedCluster, its ResourceGroup, and associated ManagedClustersAgentPools.
Other resources owned by the ManagedCluster like Kubernetes extensions or Fleet memberships are not
@@ -29,6 +28,8 @@ Caveats:
- Adopting existing clusters created with the GA AzureManagedControlPlane API into the experimental API with
this method is theoretically possible, but untested. Care should be taken to prevent CAPZ from reconciling
two different representations of the same underlying Azure resources.
- This method cannot be used to import existing clusters as a ClusterClass or a topology, only as a standalone
Cluster.

### Option 2: Using the current AzureManagedControlPlane API

29 changes: 29 additions & 0 deletions docs/book/src/managed/asomanagedcluster.md
@@ -89,3 +89,32 @@ spec:
name: ${CLUSTER_NAME}-user-kubeconfig # NOT ${CLUSTER_NAME}-kubeconfig
key: value
```

### Migrating existing Clusters to AzureASOManagedControlPlane

Existing CAPI Clusters using the AzureManagedControlPlane and associated APIs can be migrated to use the new
AzureASOManagedControlPlane and its associated APIs. This process relies on CAPZ's ability to
[adopt](./adopting-clusters.md#option-1-using-the-new-azureasomanaged-api) existing clusters that may not have
been created by CAPZ, which comes with some [caveats](./adopting-clusters.md#caveats) that should be reviewed first.

To migrate one cluster to the ASO-based APIs (a consolidated command sketch for steps 1–4 follows the list):

1. Pause the cluster by setting the Cluster's `spec.paused` to `true`.
1. Confirm the cluster is paused by waiting for the _absence_ of the `clusterctl.cluster.x-k8s.io/block-move`
annotation on the AzureManagedControlPlane and its AzureManagedMachinePools. This should happen almost immediately.
1. Create a new namespace to contain the new resources to avoid conflicting ASO definitions.
1. [Adopt](./adopting-clusters.md#option-1-using-the-new-azureasomanaged-api) the underlying AKS resources from
the new namespace, which creates the new CAPI and CAPZ resources.
1. Forcefully delete the old Cluster. This is more complicated than normal because CAPI controllers do not reconcile
paused resources at all, even when they are deleted. The underlying Azure resources will not be affected.
- Delete the cluster: `kubectl delete cluster <name> --wait=false`
- Delete the cluster infrastructure object: `kubectl delete azuremanagedcluster <name> --wait=false`
- Delete the cluster control plane object: `kubectl delete azuremanagedcontrolplane <name> --wait=false`
- Delete the machine pools: `kubectl delete machinepool <names...> --wait=false`
- Delete the machine pool infrastructure resources: `kubectl delete azuremanagedmachinepool <names...> --wait=false`
- Remove finalizers from the machine pool infrastructure resources: `kubectl patch azuremanagedmachinepool <names...> --type merge -p '{"metadata": {"finalizers": null}}'`
- Remove finalizers from the machine pools: `kubectl patch machinepool <names...> --type merge -p '{"metadata": {"finalizers": null}}'`
- Remove finalizers from the cluster control plane object: `kubectl patch azuremanagedcontrolplane <name> --type merge -p '{"metadata": {"finalizers": null}}'`
- Note: the cluster infrastructure object should not have any finalizers and should already be deleted.
- Remove finalizers from the cluster: `kubectl patch cluster <name> --type merge -p '{"metadata": {"finalizers": null}}'`
- Verify that the old ASO resources managed by the old Cluster, such as the ResourceGroup and ManagedCluster, are deleted.
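
The first four steps might look like the following sketch, assuming a hypothetical Cluster named `my-cluster`
in the `default` namespace and a new namespace named `aso-migration`; adjust names, resource IDs, and file
names for your environment:

```bash
# 1. Pause the old Cluster.
kubectl patch cluster my-cluster --type merge -p '{"spec": {"paused": true}}'

# 2. Confirm the block-move annotation is absent from the old CAPZ resources.
kubectl get azuremanagedcontrolplane my-cluster -o jsonpath='{.metadata.annotations}'
kubectl get azuremanagedmachinepool -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.annotations}{"\n"}{end}'

# 3. Create a namespace to hold the new resources.
kubectl create namespace aso-migration

# 4. Generate ASO YAML for the existing AKS cluster (the resource ID is a placeholder), build the new Cluster,
#    AzureASOManagedCluster, AzureASOManagedControlPlane, and AzureASOManagedMachinePool manifests from it,
#    and apply them in the new namespace (the file name is a placeholder).
asoctl import azure-resource \
  "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>" \
  --output aks-resources.yaml
kubectl apply -n aso-migration -f new-cluster.yaml
```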
