diff --git a/azure/defaults.go b/azure/defaults.go
index bfd4b5311297..81605a0cc772 100644
--- a/azure/defaults.go
+++ b/azure/defaults.go
@@ -53,7 +53,7 @@ const (
 const (
 	// DefaultWindowsOsAndVersion is the default Windows Server version to use when
-	// genearating default images for Windows nodes.
+	// generating default images for Windows nodes.
 	DefaultWindowsOsAndVersion = "windows-2019"
 )
diff --git a/docs/book/src/topics/externally-managed-azure-infrastructure.md b/docs/book/src/topics/externally-managed-azure-infrastructure.md
index 82549a5e306d..a472b4880cfb 100644
--- a/docs/book/src/topics/externally-managed-azure-infrastructure.md
+++ b/docs/book/src/topics/externally-managed-azure-infrastructure.md
@@ -1,6 +1,6 @@
 # Externally managed Azure infrastructure
 
-Normally, Cluster API will create infrastructure on Azure when standing up a new workload cluster. However, it is possible to have Cluster API re-use existing Azure infrastructure instead of creating its own infrastructure.
+Normally, Cluster API will create infrastructure on Azure when standing up a new workload cluster. However, it is possible to have Cluster API reuse existing Azure infrastructure instead of creating its own infrastructure.
 
 CAPZ supports [externally managed cluster infrastructure](https://github.com/kubernetes-sigs/cluster-api/blob/10d89ceca938e4d3d94a1d1c2b60515bcdf39829/docs/proposals/20210203-externally-managed-cluster-infrastructure.md). If the `AzureCluster` resource includes a "cluster.x-k8s.io/managed-by" annotation then the [controller will skip any reconciliation](https://cluster-api.sigs.k8s.io/developer/providers/cluster-infrastructure.html#normal-resource).
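For illustration, the "cluster.x-k8s.io/managed-by" annotation described in the doc change above can be set on an `AzureCluster` manifest like this (a minimal sketch: the name, namespace, annotation value, and `apiVersion` are placeholders and may differ by CAPZ release):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureCluster
metadata:
  name: my-cluster          # placeholder cluster name
  namespace: default
  annotations:
    # Presence of this annotation tells the CAPZ controller to skip
    # reconciliation, so an external system can manage the infrastructure.
    "cluster.x-k8s.io/managed-by": "my-infra-operator"
spec: {}                    # spec fields omitted for brevity
```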
diff --git a/docs/proposals/20200720-single-controller-multitenancy.md b/docs/proposals/20200720-single-controller-multitenancy.md
index 834205649615..1dccbc24796b 100644
--- a/docs/proposals/20200720-single-controller-multitenancy.md
+++ b/docs/proposals/20200720-single-controller-multitenancy.md
@@ -30,7 +30,7 @@ superseded-by: []
 - [User Stories](#user-stories)
   - [Story 1](#story-1---locked-down-with-service-principal-per-subscription)
   - [Story 2](#story-2---locked-down-by-namespace-and-subscription)
-  - [Story 3](#story-3---using-an-azure-user-assigned-identity)
+  - [Story 3](#story-3---using-an-azureuser-assigned-identity)
   - [Story 4](#story-4---legacy-behavior-preserved)
   - [Story 5](#story-5---software-as-a-service-provider)
 - [Requirements](#requirements)
diff --git a/docs/proposals/20201214-bootstrap-failure-detection.md b/docs/proposals/20201214-bootstrap-failure-detection.md
index 81ff6c5107ba..75ab022ba50c 100644
--- a/docs/proposals/20201214-bootstrap-failure-detection.md
+++ b/docs/proposals/20201214-bootstrap-failure-detection.md
@@ -119,7 +119,7 @@ A few conclusions surfaced when exploring these options:
 2. The actual implementation that determines “did I bootstrap successfully?” should be defined by each bootstrap provider, as each provider has its own files/operational conditions to validate. The validation on the Azure side should be as minimal as possible and delegate all responsibility of running checks to the bootstrap provider.
 3. We need to support Linux and Windows, and though there is one convenience (VM Boot Diagnostics) that may allow us to get a common result across both OSes “for free”, in practice there is enough heterogeneity at all layers (VM, OS, potentially even capi) that we should expect to have to maintain a discrete set of implementations for each platform. So we want to choose a solution that makes supporting both Linux and Windows distinctly natural.
-The most sensible solution would be to re-use the existing CustomScriptExtension interface that can be attached to both Windows and Linux VMs. But the fact that VMs may only support a single CustomScriptExtension is a non-trivial problem, as it removes that configuration vector for users. That vector can be a powerful configuration option — paired with custom OS images — to deliver regular runtime functionality to the underlying Azure VM running as a Kubernetes node. In particular during emergency scenarios being able to “patch” your node’s Azure VM implementation quickly using this interface can save a user many hours if he/she had to otherwise wait for a new OS image, or worse, a new VHD publication.
+The most sensible solution would be to reuse the existing CustomScriptExtension interface that can be attached to both Windows and Linux VMs. But the fact that VMs may only support a single CustomScriptExtension is a non-trivial problem, as it removes that configuration vector for users. That vector can be a powerful configuration option — paired with custom OS images — to deliver regular runtime functionality to the underlying Azure VM running as a Kubernetes node. In particular, during emergency scenarios, being able to “patch” your node’s Azure VM implementation quickly using this interface can save a user many hours if they had to otherwise wait for a new OS image, or worse, a new VHD publication.
 
 So, given that we don’t want to “reserve” the CustomScriptExtension VM interface for capz, thus preventing users from using it more generically and flexibly (as it’s intended to be used), we want to propose curating a capz-specific Azure VM Extension dedicated to running on the VM during provisioning and evaluating the success/fail state of its bootstrap operation(s) towards joining a capz-enabled Kubernetes cluster.
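As a sketch of what the Linux side of such a capz-specific extension's check could look like: the kubeadm bootstrap provider (CABPK) writes a sentinel file, `/run/cluster-api/bootstrap-success.complete`, when bootstrap succeeds, so a minimal probe only needs to test for a provider-defined signal like that one. The function name and output strings below are illustrative, not from the proposal:

```shell
#!/bin/sh
# Illustrative bootstrap-success probe (a sketch, not the actual extension).
# The default sentinel path is the one CABPK writes on successful bootstrap;
# other bootstrap providers would supply their own signal.
probe_bootstrap() {
  sentinel="${1:-/run/cluster-api/bootstrap-success.complete}"
  if [ -f "$sentinel" ]; then
    echo "bootstrap succeeded"
    return 0
  fi
  echo "bootstrap not complete"
  return 1
}
```

This keeps the Azure-side validation minimal, as conclusion 2 above calls for: the extension only checks for a signal the bootstrap provider defines, rather than re-implementing the provider's validation logic, and a Windows counterpart would test whatever equivalent signal the Windows bootstrap path emits.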