From c2f39d921d5cfb9eba7f21f10be008463dbc7b7f Mon Sep 17 00:00:00 2001
From: EKS Distro PR Bot <75336432+eks-distro-pr-bot@users.noreply.github.com>
Date: Thu, 31 Oct 2024 18:10:25 -0400
Subject: [PATCH] Update docs for latest EKS-A v0.21 and Kubernetes v1.31 (#8947)

Co-authored-by: Saurabh Parekh
---
 .../cluster-backup-restore/backup-cluster.md       |  2 +-
 .../cluster-backup-restore/restore-cluster.md      |  2 +-
 docs/content/en/docs/clustermgmt/cluster-flux.md   |  4 ++--
 .../en/docs/clustermgmt/cluster-terraform.md       |  4 ++--
 .../cluster-upgrades/baremetal-upgrades.md         |  8 ++++----
 .../cluster-upgrades/upgrade-overview.md           |  2 +-
 .../clustermgmt/cluster-upgrades/version-skew.md   |  4 ++--
 .../vsphere-and-cloudstack-upgrades.md             |  4 ++--
 .../clustermgmt/security/cluster-iam-auth.md       |  4 ++--
 .../content/en/docs/concepts/support-versions.md   |  2 +-
 .../getting-started/cloudstack/cloud-spec.md       |  2 +-
 .../en/docs/getting-started/optional/etcd.md       |  2 +-
 docs/content/en/docs/packages/prereq.md            |  4 ++--
 docs/content/en/docs/whatsnew/changelog.md         |  5 ++++-
 .../en/docs/workloadmgmt/gpu-sample-cluster.md     |  2 +-
 docs/content/en/docs/workloadmgmt/using-gpus.md    |  2 +-
 docs/data/version_support.yml                      | 16 ++++++++--------
 docs/developer/manifests.md                        |  6 +++---
 18 files changed, 39 insertions(+), 36 deletions(-)

diff --git a/docs/content/en/docs/clustermgmt/cluster-backup-restore/backup-cluster.md b/docs/content/en/docs/clustermgmt/cluster-backup-restore/backup-cluster.md
index 488da894de0f..7c78c81c2555 100644
--- a/docs/content/en/docs/clustermgmt/cluster-backup-restore/backup-cluster.md
+++ b/docs/content/en/docs/clustermgmt/cluster-backup-restore/backup-cluster.md
@@ -47,7 +47,7 @@ MGMT_CLUSTER_KUBECONFIG=${MGMT_CLUSTER}/${MGMT_CLUSTER}-eks-a-cluster.kubeconfig
 BACKUP_DIRECTORY=backup-mgmt
 
 # Substitute the EKS Anywhere release version with whatever CLI version you are using
-EKSA_RELEASE_VERSION=v0.17.3
+EKSA_RELEASE_VERSION=v0.21.0
 BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
 CLI_TOOLS_IMAGE=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksa.cliTools.uri")

diff --git a/docs/content/en/docs/clustermgmt/cluster-backup-restore/restore-cluster.md b/docs/content/en/docs/clustermgmt/cluster-backup-restore/restore-cluster.md
index 1eeabcf59af1..36c8986e252f 100644
--- a/docs/content/en/docs/clustermgmt/cluster-backup-restore/restore-cluster.md
+++ b/docs/content/en/docs/clustermgmt/cluster-backup-restore/restore-cluster.md
@@ -301,7 +301,7 @@ systemctl restart kubelet
     ```bash
     # Substitute the EKS Anywhere release version with whatever CLI version you are using
-    EKSA_RELEASE_VERSION=v0.18.3
+    EKSA_RELEASE_VERSION=v0.21.0
     BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
     CLI_TOOLS_IMAGE=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksa.cliTools.uri")
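The two `yq` lookups above fail silently if `EKSA_RELEASE_VERSION` has a typo: `BUNDLE_MANIFEST_URL` comes back empty and the second `curl` errors confusingly. A minimal guard, assuming only the manifest URL and queries already shown in these docs:

```bash
# Fail fast if the requested release version is not in the release manifest.
EKSA_RELEASE_VERSION=v0.21.0
BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml \
  | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")

if [ -z "$BUNDLE_MANIFEST_URL" ]; then
  echo "release $EKSA_RELEASE_VERSION not found in the release manifest" >&2
  exit 1
fi

CLI_TOOLS_IMAGE=$(curl -s "$BUNDLE_MANIFEST_URL" | yq ".spec.versionsBundles[0].eksa.cliTools.uri")
echo "$CLI_TOOLS_IMAGE"
```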
diff --git a/docs/content/en/docs/clustermgmt/cluster-flux.md b/docs/content/en/docs/clustermgmt/cluster-flux.md
index 01f6635248fc..ca71832c56c5 100755
--- a/docs/content/en/docs/clustermgmt/cluster-flux.md
+++ b/docs/content/en/docs/clustermgmt/cluster-flux.md
@@ -353,7 +353,7 @@ Follow these steps if you want to use your initial cluster to create and manage
 ### Upgrade cluster using Gitops
 
 1. To upgrade the cluster using Gitops, modify the workload cluster yaml file with the desired changes.
-   As an example, to upgrade a cluster with version 1.24 to 1.25 you would change your spec:
+   As an example, to upgrade a cluster from version 1.30 to 1.31 you would change your spec:
    ```bash
    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: dev
      namespace: default
    spec:
      controlPlaneConfiguration:
        count: 1
        endpoint:
          host: "198.18.99.49"
        machineGroupRef:
          kind: VSphereMachineConfig
          name: dev
    ...
@@ -369,7 +369,7 @@ Follow these steps if you want to use your initial cluster to create and manage
        kind: VSphereMachineConfig
        name: dev
    ...
-   kubernetesVersion: "1.25"
+   kubernetesVersion: "1.31"
    ...
    ```

diff --git a/docs/content/en/docs/clustermgmt/cluster-terraform.md b/docs/content/en/docs/clustermgmt/cluster-terraform.md
index dc3b286285c5..502be22551ff 100644
--- a/docs/content/en/docs/clustermgmt/cluster-terraform.md
+++ b/docs/content/en/docs/clustermgmt/cluster-terraform.md
@@ -188,7 +188,7 @@ Follow these steps if you want to use your initial cluster to create and manage
 ### Upgrade cluster using Terraform
 
 1. To upgrade a workload cluster using Terraform, modify the desired fields in the Terraform resource file.
-   As an example, to upgrade a cluster with version 1.24 to 1.25 you would modify your Terraform cluster resource:
+   As an example, to upgrade a cluster from version 1.30 to 1.31 you would modify your Terraform cluster resource:
    ```bash
    manifest = {
      "apiVersion" = "anywhere.eks.amazonaws.com/v1alpha1"
      "kind" = "Cluster"
      "metadata" = {
        "name" = "MyClusterName"
        "namespace" = "default"
      }
@@ -198,7 +198,7 @@ Follow these steps if you want to use your initial cluster to create and manage
      "spec" = {
-      "kubernetesVersion" = "1.25"
+      "kubernetesVersion" = "1.31"
       ...
       ...
      }
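For the GitOps flow above, the edit can be scripted rather than done by hand. A sketch using `yq` (the file path is a placeholder; the real path depends on your Flux repository layout):

```bash
# Bump kubernetesVersion in the Git-tracked cluster spec; Flux reconciles the rest.
CLUSTER_YAML=clusters/mgmt/dev/eksa-system/eksa-cluster.yaml   # placeholder path
yq -i '(select(.kind == "Cluster") | .spec.kubernetesVersion) = "1.31"' "$CLUSTER_YAML"
git add "$CLUSTER_YAML"
git commit -m "Upgrade dev cluster to Kubernetes 1.31"
git push
```

The `select` keeps the update scoped to the `Cluster` document if the file holds multiple YAML documents, which EKS Anywhere cluster specs commonly do.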
diff --git a/docs/content/en/docs/clustermgmt/cluster-upgrades/baremetal-upgrades.md b/docs/content/en/docs/clustermgmt/cluster-upgrades/baremetal-upgrades.md
index f81e3d8cb355..7caa084603e6 100755
--- a/docs/content/en/docs/clustermgmt/cluster-upgrades/baremetal-upgrades.md
+++ b/docs/content/en/docs/clustermgmt/cluster-upgrades/baremetal-upgrades.md
@@ -22,7 +22,7 @@ description: >
 - It is highly recommended to run the `eksctl anywhere upgrade cluster` command with the `--no-timeouts` option when the command is executed through automation. This prevents the CLI from timing out and enables cluster operators to fix issues preventing the upgrade from completing while the process is running.
 - In EKS Anywhere version `v0.15.0`, we introduced the EKS Anywhere cluster lifecycle controller that runs on management clusters and manages workload clusters. The EKS Anywhere lifecycle controller enables you to use Kubernetes API-compatible clients such as `kubectl`, GitOps, or Terraform for managing workload clusters. In this EKS Anywhere version, the EKS Anywhere cluster lifecycle controller rolls out new nodes in workload clusters when management clusters are upgraded. In EKS Anywhere version `v0.16.0`, this behavior was changed such that management clusters can be upgraded separately from workload clusters.
 - When running workload cluster upgrades after upgrading a management cluster, a machine rollout may be triggered on workload clusters during the workload cluster upgrade, even if the changes to the workload cluster spec didn't require one (for example scaling down a worker node group).
-- Starting with EKS Anywhere `v0.18.0`, the `osImageURL` must include the Kubernetes minor version (`Cluster.Spec.KubernetesVersion` or `Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion` in the cluster spec). For example, if the Kubernetes version is 1.29, the `osImageURL` must include 1.29, 1_29, 1-29 or 129. If you are upgrading Kubernetes versions, you must have a new OS image with your target Kubernetes version components.
+- Starting with EKS Anywhere `v0.18.0`, the `osImageURL` must include the Kubernetes minor version (`Cluster.Spec.KubernetesVersion` or `Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion` in the cluster spec). For example, if the Kubernetes version is 1.31, the `osImageURL` must include 1.31, 1_31, 1-31 or 131. If you are upgrading Kubernetes versions, you must have a new OS image with your target Kubernetes version components.
 - If you are running EKS Anywhere in an airgapped environment, you must download the new artifacts and images prior to initiating the upgrade. Reference the [Airgapped Upgrades page]({{< relref "./airgapped-upgrades" >}}) page for more information.
 
 ### Upgrade Version Skew
@@ -88,7 +88,7 @@ If you don't have any available hardware that match this requirement in the clus
 
 To perform a cluster upgrade you can modify your cluster specification `kubernetesVersion` field to the desired version.
 
-As an example, to upgrade a cluster with version 1.24 to 1.25 you would change your spec as follows:
+As an example, to upgrade a cluster from version 1.30 to 1.31 you would change your spec as follows:
 
 ```
 apiVersion: anywhere.eks.amazonaws.com/v1alpha1
 kind: Cluster
 metadata:
   name: dev
 spec:
   controlPlaneConfiguration:
     count: 1
     endpoint:
       host: "198.18.99.49"
     machineGroupRef:
       kind: TinkerbellMachineConfig
       name: dev
 ...
@@ -104,7 +104,7 @@ spec:
     kind: TinkerbellMachineConfig
     name: dev
 ...
-  kubernetesVersion: "1.25"
+  kubernetesVersion: "1.31"
 ...
 ```
@@ -249,7 +249,7 @@ spec:
   datacenterRef:
     kind: TinkerbellDatacenterConfig
     name: my-cluster-name
-  kubernetesVersion: "1.25"
+  kubernetesVersion: "1.31"
   managementCluster:
     name: my-cluster-name
   workerNodeGroupConfigurations:
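Because a mismatched `osImageURL` only surfaces partway through an upgrade, a cheap pre-flight check helps. A loose sketch (the URL is hypothetical; the pattern list mirrors the accepted forms above, and `1.31` as a regex also matches `1-31`, which is fine for a presence check):

```bash
# Check that the OS image URL mentions the target Kubernetes minor version
# in one of the accepted forms: 1.31, 1_31, 1-31, or 131.
OS_IMAGE_URL="https://imageserver.example.com/ubuntu-2204-kube-1-31.gz"  # hypothetical
KUBE_VERSION="1.31"

if echo "$OS_IMAGE_URL" | grep -Eq "${KUBE_VERSION}|${KUBE_VERSION/./_}|${KUBE_VERSION/./-}|${KUBE_VERSION/./}"; then
  echo "osImageURL references Kubernetes $KUBE_VERSION"
else
  echo "osImageURL is missing Kubernetes $KUBE_VERSION" >&2
  exit 1
fi
```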
diff --git a/docs/content/en/docs/clustermgmt/cluster-upgrades/upgrade-overview.md b/docs/content/en/docs/clustermgmt/cluster-upgrades/upgrade-overview.md
index aadb27e71703..6a6caa1ffad5 100644
--- a/docs/content/en/docs/clustermgmt/cluster-upgrades/upgrade-overview.md
+++ b/docs/content/en/docs/clustermgmt/cluster-upgrades/upgrade-overview.md
@@ -35,7 +35,7 @@ Each EKS Anywhere version includes all components required to create and manage
 - Management components (Cluster API controller, EKS Anywhere controller, provider-specific controllers)
 - Cluster components (Kubernetes, Cilium)
 
-You can find details about each EKS Anywhere releases in the EKS Anywhere release manifest. The release manifest contains references to the corresponding bundle manifest for each EKS Anywhere version. Within the bundle manifest, you will find the components included in a specific EKS Anywhere version. The images running in your deployment use the same URI values specified in the bundle manifest for that component. For example, see the [bundle manifest](https://anywhere-assets.eks.amazonaws.com/releases/bundles/59/manifest.yaml) for EKS Anywhere version `v0.18.7`.
+You can find details about each EKS Anywhere release in the EKS Anywhere release manifest. The release manifest contains references to the corresponding bundle manifest for each EKS Anywhere version. Within the bundle manifest, you will find the components included in a specific EKS Anywhere version. The images running in your deployment use the same URI values specified in the bundle manifest for that component. For example, see the [bundle manifest](https://anywhere-assets.eks.amazonaws.com/releases/bundles/81/manifest.yaml) for EKS Anywhere version `v0.21.0`.
 
 To upgrade the EKS Anywhere version of a management or standalone cluster, you install a new version of the `eksctl anywhere` CLI, change the `eksaVersion` field in your management or standalone cluster's spec yaml, and then run the `eksctl anywhere upgrade management-components -f cluster.yaml` (as of EKS Anywhere version v0.19) or `eksctl anywhere upgrade cluster -f cluster.yaml` command. The `eksctl anywhere upgrade cluster` command upgrades both management and cluster components.

diff --git a/docs/content/en/docs/clustermgmt/cluster-upgrades/version-skew.md b/docs/content/en/docs/clustermgmt/cluster-upgrades/version-skew.md
index b60e7964a0bb..0c7680884b3d 100644
--- a/docs/content/en/docs/clustermgmt/cluster-upgrades/version-skew.md
+++ b/docs/content/en/docs/clustermgmt/cluster-upgrades/version-skew.md
@@ -6,6 +6,6 @@ There are a few dimensions of versioning to consider in your EKS Anywhere deploy
 
 - **Management clusters to workload clusters**: Management clusters can be at most 1 EKS Anywhere minor version greater than the EKS Anywhere version of workload clusters. Workload clusters cannot have an EKS Anywhere version greater than management clusters.
 - **Management components to cluster components**: Management components can be at most 1 EKS Anywhere minor version greater than the EKS Anywhere version of cluster components.
-- **EKS Anywhere version upgrades**: Skipping EKS Anywhere minor versions during upgrade is not supported (`v0.17.x` to `v0.19.x`). We recommend you upgrade one EKS Anywhere minor version at a time (`v0.17.x` to `v0.18.x` to `v0.19.x`).
-- **Kubernetes version upgrades**: Skipping Kubernetes minor versions during upgrade is not supported (`v1.26.x` to `v1.28.x`). You must upgrade one Kubernetes minor version at a time (`v1.26.x` to `v1.27.x` to `v1.28.x`).
+- **EKS Anywhere version upgrades**: Skipping EKS Anywhere minor versions during upgrade is not supported (`v0.19.x` to `v0.21.x`). We recommend you upgrade one EKS Anywhere minor version at a time (`v0.19.x` to `v0.20.x` to `v0.21.x`).
+- **Kubernetes version upgrades**: Skipping Kubernetes minor versions during upgrade is not supported (`v1.29.x` to `v1.31.x`). You must upgrade one Kubernetes minor version at a time (`v1.29.x` to `v1.30.x` to `v1.31.x`).
 - **Kubernetes control plane and worker nodes**: As of Kubernetes v1.28, worker nodes can be up to 3 minor versions lower than the Kubernetes control plane minor version. In earlier Kubernetes versions, worker nodes could be up to 2 minor versions lower than the Kubernetes control plane minor version.
\ No newline at end of file
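Taken together with the skew rules above, an EKS Anywhere version upgrade is the two commands the overview names, run in order. A sketch (`cluster.yaml` is a placeholder for your cluster spec file):

```bash
# Step 1: upgrade management components only (supported as of v0.19).
eksctl anywhere upgrade management-components -f cluster.yaml

# Step 2: upgrade the cluster; this covers both management and cluster components.
# --no-timeouts is recommended elsewhere in these docs when running via automation.
eksctl anywhere upgrade cluster -f cluster.yaml --no-timeouts
```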
diff --git a/docs/content/en/docs/clustermgmt/cluster-upgrades/vsphere-and-cloudstack-upgrades.md b/docs/content/en/docs/clustermgmt/cluster-upgrades/vsphere-and-cloudstack-upgrades.md
index 642e8c0586d1..f3288647fa8d 100755
--- a/docs/content/en/docs/clustermgmt/cluster-upgrades/vsphere-and-cloudstack-upgrades.md
+++ b/docs/content/en/docs/clustermgmt/cluster-upgrades/vsphere-and-cloudstack-upgrades.md
@@ -72,7 +72,7 @@ To the format output in json, add `-o json` to the end of the command line.
 
 To perform a cluster upgrade you can modify your cluster specification `kubernetesVersion` field to the desired version.
 
-As an example, to upgrade a cluster with version 1.26 to 1.27 you would change your spec
+As an example, to upgrade a cluster from version 1.30 to 1.31 you would change your spec:
 
 ```
 apiVersion: anywhere.eks.amazonaws.com/v1alpha1
 kind: Cluster
 metadata:
   name: dev
 spec:
   controlPlaneConfiguration:
     count: 1
     endpoint:
       host: "198.18.99.49"
     machineGroupRef:
       kind: VSphereMachineConfig
       name: dev
 ...
@@ -88,7 +88,7 @@ spec:
     kind: VSphereMachineConfig
     name: dev
 ...
-  kubernetesVersion: "1.27"
+  kubernetesVersion: "1.31"
 ...
 ```

diff --git a/docs/content/en/docs/clustermgmt/security/cluster-iam-auth.md b/docs/content/en/docs/clustermgmt/security/cluster-iam-auth.md
index e4035d553664..63d57ba9d46a 100644
--- a/docs/content/en/docs/clustermgmt/security/cluster-iam-auth.md
+++ b/docs/content/en/docs/clustermgmt/security/cluster-iam-auth.md
@@ -82,10 +82,10 @@ ${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-aws.kubeconfig
 1. Ensure the IAM role/user ARN mapped in the cluster is configured on the local machine from which you are trying to access the cluster.
 2. Install the `aws-iam-authenticator client` binary on the local machine.
    * We recommend installing the binary referenced in the latest `release manifest` of the kubernetes version used when creating the cluster.
-   * The below commands can be used to fetch the installation uri for clusters created with `1.27` kubernetes version and OS `linux`.
+   * The below commands can be used to fetch the installation uri for clusters created with `1.31` kubernetes version and OS `linux`.
    ```bash
    CLUSTER_NAME=my-cluster-name
-   KUBERNETES_VERSION=1.27
+   KUBERNETES_VERSION=1.31
 
    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig

diff --git a/docs/content/en/docs/concepts/support-versions.md b/docs/content/en/docs/concepts/support-versions.md
index f85b46b1aa68..d7b029dd269f 100644
--- a/docs/content/en/docs/concepts/support-versions.md
+++ b/docs/content/en/docs/concepts/support-versions.md
@@ -56,7 +56,7 @@ Bottlerocket, Ubuntu, and Red Hat Enterprise Linux (RHEL) can be used as operati
 |------------|------------------------------|---------------------------------|
 | Ubuntu | 22.04 | 0.17 and above
 | | 20.04 | 0.5 and above
-| Bottlerocket | 1.22.0 | 0.21
+| Bottlerocket | 1.26.1 | 0.21
 | | 1.20.0 | 0.20
 | | 1.19.1 | 0.19
 | | 1.15.1 | 0.18

diff --git a/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md b/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md
index 41e03dae3244..9c318ff2d1ae 100644
--- a/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md
+++ b/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md
@@ -58,7 +58,7 @@ spec:
   machineGroupRef:
     kind: CloudStackMachineConfig
     name: my-cluster-name-etcd
-  kubernetesVersion: "1.28"
+  kubernetesVersion: "1.31"
   managementCluster:
     name: my-cluster-name
   workerNodeGroupConfigurations:

diff --git a/docs/content/en/docs/getting-started/optional/etcd.md b/docs/content/en/docs/getting-started/optional/etcd.md
index b3e4c6eb6633..05b01710b5dd 100644
--- a/docs/content/en/docs/getting-started/optional/etcd.md
+++ b/docs/content/en/docs/getting-started/optional/etcd.md
@@ -61,7 +61,7 @@ spec:
   machineGroupRef:
     kind: VSphereMachineConfig
     name: my-cluster-name-etcd
-  kubernetesVersion: "1.27"
+  kubernetesVersion: "1.31"
   workerNodeGroupConfigurations:
   - count: 1
     machineGroupRef:
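Once the commands above resolve an installation URI, installing and sanity-checking the client is straightforward. A sketch, where the URI variable is a placeholder for whatever value the fetch returns:

```bash
# Install the fetched aws-iam-authenticator binary and confirm it runs.
AUTHENTICATOR_URI="<uri-fetched-above>"   # placeholder
curl -sL "$AUTHENTICATOR_URI" -o aws-iam-authenticator
chmod +x aws-iam-authenticator
sudo mv aws-iam-authenticator /usr/local/bin/
aws-iam-authenticator version
```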
diff --git a/docs/content/en/docs/packages/prereq.md b/docs/content/en/docs/packages/prereq.md
index d646f1f55685..9da844f77e67 100644
--- a/docs/content/en/docs/packages/prereq.md
+++ b/docs/content/en/docs/packages/prereq.md
@@ -155,7 +155,7 @@ You can get a list of the available packages from the command line:
 ```bash
 export CLUSTER_NAME=
 export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-eksctl anywhere list packages --kube-version 1.27
+eksctl anywhere list packages --kube-version 1.31
 ```
 
 Example command output:
@@ -181,5 +181,5 @@ The example shows how to install the `harbor` package from the [curated package
 
 ```bash
 export CLUSTER_NAME=
-eksctl anywhere generate package harbor --cluster ${CLUSTER_NAME} --kube-version 1.27 > harbor-spec.yaml
+eksctl anywhere generate package harbor --cluster ${CLUSTER_NAME} --kube-version 1.31 > harbor-spec.yaml
 ```

diff --git a/docs/content/en/docs/whatsnew/changelog.md b/docs/content/en/docs/whatsnew/changelog.md
index 5714a29ec724..fc2150ed4e39 100644
--- a/docs/content/en/docs/whatsnew/changelog.md
+++ b/docs/content/en/docs/whatsnew/changelog.md
@@ -52,7 +52,7 @@ description: >
 - GPU support for Nutanix provider ([#8745](https://github.com/aws/eks-anywhere/pull/8745))
 - Support for worker nodes failure domains on Nutanix ([#8837](https://github.com/aws/eks-anywhere/pull/8837))
 
-### Changed
+### Upgraded
 - Added EKS-D for 1-31:
   - [`v1-31-eks-6`](https://distro.eks.amazonaws.com/releases/1-31/6/)
 - Cert Manager: `v1.14.7` to `v1.15.3`
@@ -71,6 +71,9 @@ description: >
 - Hook: `v0.8.1` to `v0.9.1`
 - Troubleshoot: `v0.93.2` to `v0.107.4`
 
+### Changed
+- Use HookOS embedded images in Tinkerbell Templates by default ([#8708](https://github.com/aws/eks-anywhere/pull/8708) and [#3471](https://github.com/aws/eks-anywhere-build-tooling/pull/3471))
+
 ### Removed
 - Support for Kubernetes v1.26

diff --git a/docs/content/en/docs/workloadmgmt/gpu-sample-cluster.md b/docs/content/en/docs/workloadmgmt/gpu-sample-cluster.md
index ef544b97ce2a..5c7d0c1010f8 100644
--- a/docs/content/en/docs/workloadmgmt/gpu-sample-cluster.md
+++ b/docs/content/en/docs/workloadmgmt/gpu-sample-cluster.md
@@ -26,7 +26,7 @@ toc_hide: true
   datacenterRef:
     kind: TinkerbellDatacenterConfig
     name: gpu-test
-  kubernetesVersion: "1.27"
+  kubernetesVersion: "1.31"
 ---
 apiVersion: anywhere.eks.amazonaws.com/v1alpha1
 kind: TinkerbellDatacenterConfig
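After generating `harbor-spec.yaml` as shown, the package still has to be created on the cluster. A sketch using the package commands from the EKS Anywhere packages docs (the cluster name is a placeholder):

```bash
export CLUSTER_NAME=mgmt   # placeholder
eksctl anywhere generate package harbor --cluster ${CLUSTER_NAME} --kube-version 1.31 > harbor-spec.yaml

# Review harbor-spec.yaml, then install and watch for the 'installed' state:
eksctl anywhere create packages -f harbor-spec.yaml
eksctl anywhere get packages --cluster ${CLUSTER_NAME}
```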
diff --git a/docs/content/en/docs/workloadmgmt/using-gpus.md b/docs/content/en/docs/workloadmgmt/using-gpus.md
index c5e011f91e43..4fa16392350f 100644
--- a/docs/content/en/docs/workloadmgmt/using-gpus.md
+++ b/docs/content/en/docs/workloadmgmt/using-gpus.md
@@ -9,7 +9,7 @@ description: >
 
 The [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html) allows GPUs to be exposed to applications in Kubernetes clusters much like CPUs. Instead of provisioning a special OS image for GPU nodes with the required drivers and dependencies, a standard OS image can be used for both CPU and GPU nodes. The NVIDIA GPU Operator can be used to provision the required software components for GPUs such as the NVIDIA drivers, Kubernetes device plugin for GPUs, and the NVIDIA Container Toolkit. See the [licensing section](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html#licenses-and-contributing) of the NVIDIA GPU Operator documentation for information on the NVIDIA End User License Agreements.
 
-In the example on this page, a single-node EKS Anywhere cluster on bare metal is used with an Ubuntu 20.04 image produced from image-builder without modifications and Kubernetes version 1.27.
+In the example on this page, a single-node EKS Anywhere cluster on bare metal is used with an Ubuntu 20.04 image produced from image-builder without modifications and Kubernetes version 1.31.
 
 ### 1. Configure an EKS Anywhere cluster spec and hardware inventory

diff --git a/docs/data/version_support.yml b/docs/data/version_support.yml
index c6a48e974dbd..7e13a2050976 100644
--- a/docs/data/version_support.yml
+++ b/docs/data/version_support.yml
@@ -21,7 +21,7 @@
 # receiving_patches: Whether or not the release is receiving patches.
 eksa:
   - version: '0.21'
-    released: 2024-10-31
+    released: 2024-10-30
     kube_versions: ['1.31', '1.30', '1.29', '1.28', '1.27']
     receiving_patches: true
@@ -121,31 +121,31 @@ eksa:
 kube:
   - version: '1.31'
     releasedIn: '0.21'
-    expectedEndOfLifeDate: 2025-10-23
+    expectedEndOfLifeDate: 2025-12-31
 
   - version: '1.30'
     releasedIn: '0.20'
-    expectedEndOfLifeDate: 2025-06-23
+    expectedEndOfLifeDate: 2025-08-31
 
   - version: '1.29'
     releasedIn: '0.19'
-    expectedEndOfLifeDate: 2025-03-23
+    expectedEndOfLifeDate: 2025-04-30
 
   - version: '1.28'
     releasedIn: '0.18'
-    expectedEndOfLifeDate: 2024-12-01
+    expectedEndOfLifeDate: 2024-12-31
 
   - version: '1.27'
     releasedIn: '0.16'
-    expectedEndOfLifeDate: 2024-08-01
+    expectedEndOfLifeDate: 2024-08-31
 
   - version: '1.26'
     releasedIn: '0.15'
-    expectedEndOfLifeDate: 2024-06-01
+    expectedEndOfLifeDate: 2024-05-31
 
   - version: '1.25'
     releasedIn: '0.14'
-    expectedEndOfLifeDate: 2024-05-01
+    expectedEndOfLifeDate: 2024-03-31
 
   - version: '1.24'
     releasedIn: '0.12'
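For the GPU workflow referenced above, NVIDIA's documented quick-start installs the operator with Helm. A sketch following NVIDIA's GPU Operator docs (chart values vary by environment):

```bash
# Install the NVIDIA GPU Operator from NVIDIA's Helm repository.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install --wait --generate-name \
  -n gpu-operator --create-namespace \
  nvidia/gpu-operator
```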
diff --git a/docs/developer/manifests.md b/docs/developer/manifests.md
index 02eab466539c..4af64950ecd9 100644
--- a/docs/developer/manifests.md
+++ b/docs/developer/manifests.md
@@ -21,8 +21,8 @@ Each CLI is built with a particular EKS-A semver in its metadata. This pins each
 Dev releases are a bit special: we generate new them all the time, very fast. For this reason, we don't use a simple major.minor.patch semver, but we include build metadata. In particular we use `v{major}.{minor}.{patch}-dev+build.{number}` with `number` being a monotonically increasing integer that is bumped every time a new dev release is built. The version we use for the first part depends on the HEAD: `main` vs release branches:
-- For `main`, we use the next minor version to the latest tag available. For example, if the latest prod release is `v0.18.5`, the version used for dev releases will be `v0.19.0-dev+build.{number}`. This aligns with the fact that the code in `main` belongs to the next future prod release `v0.19.0`.
-- For `release-*` branches, we use the next patch version to the latest available tag for that minor version. For example, for `release-0.17`, if the latest latest prod release is for v0.17 is `v0.17.7`, dev releases will follow `v0.17.8-dev+build.{number}`.
+- For `main`, we use the next minor version after the latest available tag. For example, if the latest prod release is `v0.21.3`, the version used for dev releases will be `v0.22.0-dev+build.{number}`. This aligns with the fact that the code in `main` belongs to the next future prod release `v0.22.0`.
+- For `release-*` branches, we use the next patch version after the latest available tag for that minor version. For example, for `release-0.21`, if the latest prod release for v0.21 is `v0.21.5`, dev releases will follow `v0.21.6-dev+build.{number}`.
 
 In order to avoid the dev Release manifest growing forever, we trim the included releases to a max size, dropping always the oldest one. Take this in mind if using a particular version locally. If you do it for too long, it might become unavailable. If it does, just rebuild your CLI.
@@ -32,6 +32,6 @@ When a CLI is built for dev E2E tests, it's given the latest available EKS-A dev
 ### Locally building the CLI
 
 When writing and testing code for the CLI/Controller, most of the time we don't care about particular releases and we just want to use the latest available Bundles that contains the latest available set of components. this verifies that our changes are compatible with the current state of EKS-A dependencies.
-To avoid having to rebuild the CLI every time we want to refresh the pulled Bundles or even having to care about fetching the latest version, we introduced a special build metadata identifier `+latest`. This instructs the CLI to not look for an exact match with an EKS-A version, but select the newest one that matches our pre-release. For example: if the release manifest has two releases [`v0.19.0-dev+build.1234`, `v0.19.0-dev+build.1233`], then if the CLI has version `v0.19.0-dev+latest`, then the release `v0.19.0-dev+build.1234` will be selected.
+To avoid having to rebuild the CLI every time we want to refresh the pulled Bundles or even having to care about fetching the latest version, we introduced a special build metadata identifier `+latest`. This instructs the CLI to not look for an exact match with an EKS-A version, but select the newest one that matches our pre-release. For example: if the release manifest has two releases [`v0.22.0-dev+build.1234`, `v0.22.0-dev+build.1233`], and the CLI has version `v0.22.0-dev+latest`, the release `v0.22.0-dev+build.1234` will be selected.
 
 This is the default behavior when building a CLI locally: the Makefile will calculate the appropriate major.minor.patch based on the current HEAD and its closest branch ancestor (either `main` or a `release-*` branch). If you wish to pin your local CLI to a particular version, pass the `DEV_GIT_VERSION` to the make target.
\ No newline at end of file
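The `+latest` rule described above reduces to "highest build number wins" among releases sharing the same `v{major}.{minor}.{patch}-dev` prefix. A shell sketch of that selection, assuming versions of exactly that shape:

```bash
# Pick the newest dev release by numeric build number (field 4 when split on '.').
releases="v0.22.0-dev+build.1233
v0.22.0-dev+build.1234"

newest=$(echo "$releases" | sort -t. -k4 -n | tail -n1)
echo "$newest"   # v0.22.0-dev+build.1234
```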