diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index 98a488626a25..8087151a5160 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -1,6 +1,9 @@ --- -name: Bug report -about: Tell us about a problem you are experiencing +name: 🐛 Bug report +about: Tell us about a problem you are experiencing. +title: '' +labels: '' +assignees: '' --- @@ -24,4 +27,3 @@ about: Tell us about a problem you are experiencing /kind bug [One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels] - diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md index 60aafc2a8716..4404765eee2a 100644 --- a/.github/ISSUE_TEMPLATE/feature_request.md +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -1,6 +1,9 @@ --- -name: Feature request -about: Suggest an idea for this project +name: ✨ Feature request +about: Suggest an idea for this project. +title: '' +labels: '' +assignees: '' --- diff --git a/.github/ISSUE_TEMPLATE/kubernetes_bump.md b/.github/ISSUE_TEMPLATE/kubernetes_bump.md new file mode 100644 index 000000000000..e5c67eecc022 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/kubernetes_bump.md @@ -0,0 +1,68 @@ +--- +name: 🚀 Kubernetes bump +about: "[Only for release team lead] Create an issue to track tasks to support a new Kubernetes minor release." +title: Tasks to bump to Kubernetes v1. +labels: '' +assignees: '' + +--- + +This issue is tracking the tasks that should be implemented **after** the Kubernetes minor release has been released. + +## Tasks + +Prerequisites: +* [ ] Decide which Cluster API release series will support the new Kubernetes version + * If feasible we usually cherry-pick the changes back to the latest release series. + +### Supporting managing and running on the new Kubernetes version + +This section contains tasks to update our book, e2e testing and CI to use and test the new Kubernetes version +as well as changes to Cluster API that we might have to make to support the new Kubernetes version. All of these +changes should be cherry-picked to all release series that will support the new Kubernetes version. + +* [ ] Modify quickstart and CAPD to use the new Kubernetes release: + * Bump the Kubernetes version in: + * `test/*`: search for occurrences of the previous Kubernetes version + * `Tiltfile` + * Ensure the latest available kind version is used as well. + * Verify the quickstart manually + * Prior art: #7156 +* [ ] Job configurations: + * For all releases which will support the new Kubernetes version: + * Update `INIT_WITH_KUBERNETES_VERSION`. + * Add new periodic upgrade jobs . + * Adjust presubmit jobs so that we have the latest upgrade jobs available on PRs. + * Prior art: https://github.com/kubernetes/test-infra/pull/27421 +* [ ] Update book: + * Update supported versions in `versions.md` + * Update job documentation in `jobs.md` + * Prior art: #7194 #7196 +* [ ] Issues specific to the Kubernetes minor release: + * Sometimes there are adjustments that we have to make in Cluster API to be able to support + a new Kubernetes minor version. Please add these issues here when they are identified. + +### Using new Kubernetes dependencies + +This section contains tasks to update Cluster API to use the latest Kubernetes Go dependencies and related topics +like using the right Go version and build images. These changes are only made on the main branch. 
We don't +need them in older releases as they are not necessary to manage workload clusters of the new Kubernetes version or +run the Cluster API controllers on the new Kubernetes version. + +* [ ] Ensure there is a new controller-runtime minor release which uses the new Kubernetes Go dependencies. +* [ ] Update our Prow jobs for the `main` branch to use the correct `kubekins-e2e` image + * It is recommended to have one PR for presubmit and one for periodic jobs to reduce the risk of breaking the periodic jobs. + * Prior art: presubmit jobs: https://github.com/kubernetes/test-infra/pull/27311 + * Prior art: periodic jobs: https://github.com/kubernetes/test-infra/pull/27311 +* [ ] Bump the Go version in Cluster API (if Kubernetes is using a new Go minor version): + * Search for the currently used Go version across the repository and update it + * We have to at least modify it in: `.github/workflows`, `hack/ensure-go.sh`, `.golangci.yml`, `cloudbuild*.yaml`, `go.mod`, `Makefile`, `netlify.toml`, `Tiltfile` + * Prior art: #7135 +* [ ] Bump controller-runtime +* [ ] Bump controller-tools +* [ ] Bump the Kubernetes version used in integration tests via `KUBEBUILDER_ENVTEST_KUBERNETES_VERSION` in `Makefile` + * **Note**: This PR should be cherry-picked as well. It is part of this section as it depends on kubebuilder/controller-runtime + releases and is not strictly necessary for [Supporting managing and running on the new Kubernetes version](#supporting-managing-and-running-on-the-new-kubernetes-version). + * Prior art: #7193 +* [ ] Bump conversion-gen via `CONVERSION_GEN_VER` in `Makefile` + * Prior art: #7118 diff --git a/.github/ISSUE_TEMPLATE/release_tracking.md b/.github/ISSUE_TEMPLATE/release_tracking.md new file mode 100644 index 000000000000..83e5b8053c22 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/release_tracking.md @@ -0,0 +1,80 @@ +--- +name: 🚋 Release cycle tracking +about: "[Only for release team lead] Create an issue to track tasks for a Cluster API minor release." +title: Tasks for v release cycle +labels: '' +assignees: '' + +--- + +Please see the corresponding section in [release-tasks.md](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md) for documentation of individual tasks. + +## Tasks + +**Notes**: +* Weeks are only specified to give some orientation. +* The following is based on the v1.4 release cycle. Modify according to the tracked release cycle.
+ +Week -3 to 1: +* [ ] [Release Lead] [Set a tentative release date for the minor release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#set-a-tentative-release-date-for-the-minor-release) +* [ ] [Release Lead] [Assemble release team](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#assemble-release-team) + +Week 1: +* [ ] [Release Lead] [Finalize release schedule and team](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#finalize-release-schedule-and-team) +* [ ] [Release Lead] [Prepare main branch for development of the new release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#prepare-main-branch-for-development-of-the-new-release) +* [ ] [Communications Manager] [Add docs to collect release notes for users and migration notes for provider implementers](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#add-docs-to-collect-release-notes-for-users-and-migration-notes-for-provider-implementers) +* [ ] [Communications Manager] [Update supported versions](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#update-supported-versions) + +Week 1 to 4: +* [ ] [Release Lead] [Track] [Remove previously deprecated code](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#track-remove-previously-deprecated-code) + +Week 6: +* [ ] [Release Lead] [Cut the v1.3.1 release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#repeatedly-cut-a-release) + +Week 9: +* [ ] [Release Lead] [Cut the v1.3.2 release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#repeatedly-cut-a-release) + +Week 11 to 12: +* [ ] [Release Lead] [Track] [Bump dependencies](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#track-bump-dependencies) + +Week 13: +* [ ] [Release Lead] [Cut the v1.4.0-beta.0 release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#repeatedly-cut-a-release) +* [ ] [Release Lead] [Cut the v1.3.3 release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#repeatedly-cut-a-release) +* [ ] [Release Lead] [Create a new GitHub milestone for the next release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#create-a-new-github-milestone-for-the-next-release) + +Week 14: +* [ ] [Release Lead] [Cut the v1.4.0-beta.1 release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#repeatedly-cut-a-release) +* [ ] [Release Lead] Select release lead for the next release cycle + +Week 15: +* [ ] [Release Lead] [Create the release-1.4 release branch](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#create-a-release-branch) +* [ ] [Release Lead] [Cut the v1.4.0-rc.0 release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#repeatedly-cut-a-release) +* [ ] [CI Manager] [Setup jobs and dashboards for the release-1.4 release branch](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#setup-jobs-and-dashboards-for-a-new-release-branch) +* [ ] [Communications Manager] [Ensure the book for the new release is 
available](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#ensure-the-book-for-the-new-release-is-available) + +Week 15 to 17: +* [ ] [Communications Manager] [Polish release notes](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#polish-release-notes) + +Week 16: +* [ ] [Release Lead] [Cut the v1.4.0-rc.1 release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#repeatedly-cut-a-release) + +Week 17: +* [ ] [Release Lead] [Cut the v1.4.0 release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#repeatedly-cut-a-release) +* [ ] [Release Lead] [Cut the v1.3.4 release](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#repeatedly-cut-a-release) +* [ ] [Release Lead] Organize release retrospective +* [ ] [Communications Manager] [Change production branch in Netlify to the new release branch](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#change-production-branch-in-netlify-to-the-new-release-branch) +* [ ] [Communications Manager] [Update clusterctl links in the quickstart](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#update-clusterctl-links-in-the-quickstart) + +Continuously: +* [Release lead] [Maintain the GitHub release milestone](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#continuously-maintain-the-github-release-milestone) +* [Communications Manager] [Communicate key dates to the community](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#continuously-communicate-key-dates-to-the-community) +* [Communications Manager] Improve release process documentation +* [Communications Manager] Maintain and improve user facing documentation about releases, release policy and release calendar +* [CI Manager] [Monitor CI signal](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#continuously-monitor-ci-signal) +* [CI Manager] [Reduce the amount of flaky tests](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#continuously-reduce-the-amount-of-flaky-tests) +* [CI Manager] [Bug triage](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#continuously-bug-triage) +* [CI Manager] Maintain and improve release automation, tooling & related developer docs + +If and when necessary: +* [ ] [Release Lead] [Track] [Bump the Cluster API apiVersion](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#optional-track-bump-the-cluster-api-apiversion) +* [ ] [Release Lead] [Track] [Bump the Kubernetes version](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-tasks.md#optional-track-bump-the-kubernetes-version) diff --git a/.github/workflows/dependabot.yml b/.github/workflows/dependabot.yml index f8304b9889f0..0eb46dde0c1a 100644 --- a/.github/workflows/dependabot.yml +++ b/.github/workflows/dependabot.yml @@ -18,13 +18,13 @@ jobs: runs-on: ubuntu-latest steps: - name: Set up Go 1.x - uses: actions/setup-go@c4a742cab115ed795e34d4513e2cf7d472deb55f # tag=v3.3.1 + uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # tag=v3.4.0 with: go-version: '1.19' id: go - name: Check out code into the Go module directory - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # tag=v3.1.0 - - uses: 
actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7 # tag=v3.0.11 + uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # tag=v3.2.0 + - uses: actions/cache@c1a5de879eb890d062a85ee0252d6036480b1fe2 # tag=v3.2.1 name: Restore go cache with: path: | diff --git a/.github/workflows/golangci-lint.yml b/.github/workflows/golangci-lint.yml index 1a57ba954276..2c296e030aff 100644 --- a/.github/workflows/golangci-lint.yml +++ b/.github/workflows/golangci-lint.yml @@ -18,8 +18,8 @@ jobs: - test - hack/tools steps: - - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # tag=v3.1.0 - - uses: actions/setup-go@c4a742cab115ed795e34d4513e2cf7d472deb55f # tag=v3.3.1 + - uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # tag=v3.2.0 + - uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # tag=v3.4.0 with: go-version: 1.19 - name: golangci-lint diff --git a/.github/workflows/lint-docs-pr.yaml b/.github/workflows/lint-docs-pr.yaml index 3d90027824e9..bb540ce75f61 100644 --- a/.github/workflows/lint-docs-pr.yaml +++ b/.github/workflows/lint-docs-pr.yaml @@ -14,7 +14,7 @@ jobs: name: Broken Links runs-on: ubuntu-latest steps: - - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # tag=v3.1.0 + - uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # tag=v3.2.0 - uses: gaurav-nelson/github-action-markdown-link-check@5c5dfc0ac2e225883c0e5f03a85311ec2830d368 # tag=v1 with: use-quiet-mode: 'yes' diff --git a/.github/workflows/lint-docs-weekly.yml b/.github/workflows/lint-docs-weekly.yml index 2fe3a55a6fa2..3b35199b9099 100644 --- a/.github/workflows/lint-docs-weekly.yml +++ b/.github/workflows/lint-docs-weekly.yml @@ -12,7 +12,7 @@ jobs: name: Broken Links runs-on: ubuntu-latest steps: - - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # tag=v3.1.0 + - uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # tag=v3.2.0 - uses: gaurav-nelson/github-action-markdown-link-check@5c5dfc0ac2e225883c0e5f03a85311ec2830d368 # tag=v1 with: use-quiet-mode: 'yes' diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index d43674edbb95..3777adf63a07 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -17,11 +17,11 @@ jobs: - name: Set env run: echo "RELEASE_TAG=${GITHUB_REF:10}" >> $GITHUB_ENV - name: checkout code - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # tag=v3.1.0 + uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b # tag=v3.2.0 with: fetch-depth: 0 - name: Install go - uses: actions/setup-go@c4a742cab115ed795e34d4513e2cf7d472deb55f # tag=v3.3.1 + uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # tag=v3.4.0 with: go-version: '^1.19' - name: generate release artifacts @@ -31,7 +31,7 @@ jobs: run: | make release-notes - name: Release - uses: softprops/action-gh-release@1e07f4398721186383de40550babbdf2b84acfc5 # tag=v1 + uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # tag=v1 with: draft: true files: out/* diff --git a/.github/workflows/scan.yml b/.github/workflows/scan.yml new file mode 100644 index 000000000000..648f337b1f29 --- /dev/null +++ b/.github/workflows/scan.yml @@ -0,0 +1,22 @@ +name: scan-images + +on: + schedule: + - cron: "0 12 * * 1" + +# Remove all permissions from GITHUB_TOKEN except metadata. 
+permissions: {} + +jobs: + scan: + name: Trivy + runs-on: ubuntu-latest + steps: + - name: Check out code + uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # tag=v3.1.0 + - name: Setup go + uses: actions/setup-go@d0a58c1c4d2b25278816e339b944508c875f3613 # tag=v3.4.0 + with: + go-version: 1.19 + - name: Run verify container script + run: make verify-container-images diff --git a/.github/workflows/spectro-dev-build.yaml b/.github/workflows/spectro-dev-build.yaml new file mode 100644 index 000000000000..a5b77b0ca55a --- /dev/null +++ b/.github/workflows/spectro-dev-build.yaml @@ -0,0 +1,60 @@ +name: Spectro Release +run-name: Release for Cluster API ${{ github.event.inputs.release_version }} +on: + workflow_dispatch: + inputs: + release_version: + description: 'Cluster API Version to Build' + required: true + default: '0.0.0' +jobs: + builder: + # edge-runner machine group is a bunch of machines in US Datacenter + runs-on: ubuntu-latest + # Initialize all secrets required for the job + # Ensure that the credentials are provided as encrypted secrets + env: + SPECTRO_VERSION: ${{ github.event.inputs.release_version }} + steps: + - + uses: mukunku/tag-exists-action@v1.2.0 + id: checkTag + with: + tag: v${{ github.event.inputs.release_version }}-spectro + - + if: ${{ steps.checkTag.outputs.exists == 'true' }} + run: | + echo "Tag already exists for v${{ github.event.inputs.release_version }}-spectro..." + exit 1 + - + uses: actions/checkout@v3 + - + name: Set up Docker Buildx + uses: docker/setup-buildx-action@v1 + - + name: Login to private registry + uses: docker/login-action@v1 + with: + registry: ${{ secrets.REGISTRY_URL }} + username: ${{ secrets.REGISTRY_USERNAME }} + password: ${{ secrets.REGISTRY_PASSWORD }} + - + name: Build Image + env: + REGISTRY: gcr.io/spectro-images-public/release/cluster-api + run: | + make docker-build-all + make docker-push-all + - + name: Create Release + id: create_release + uses: actions/create-release@v1 + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + with: + tag_name: v${{ github.event.inputs.release_version }}-spectro + release_name: Release v${{ github.event.inputs.release_version }}-spectro + body: | + Release version v${{ github.event.inputs.release_version }}-spectro + draft: false + prerelease: false diff --git a/.github/workflows/spectro-release.yaml b/.github/workflows/spectro-release.yaml new file mode 100644 index 000000000000..340fc1242125 --- /dev/null +++ b/.github/workflows/spectro-release.yaml @@ -0,0 +1,68 @@ +name: Spectro Release +run-name: Release for Cluster API ${{ github.event.inputs.release_version }} +on: + workflow_dispatch: + inputs: + release_version: + description: 'Cluster API Version to Build' + required: true + default: '0.0.0' +jobs: + builder: + # edge-runner machine group is a bunch of machines in US Datacenter + runs-on: ubuntu-latest + # Initialize all secrets required for the job + # Ensure that the credentials are provided as encrypted secrets + env: + SPECTRO_VERSION: ${{ github.event.inputs.release_version }} + steps: + - + uses: mukunku/tag-exists-action@v1.2.0 + id: checkTag + with: + tag: v${{ github.event.inputs.release_version }}-spectro + - + if: ${{ steps.checkTag.outputs.exists == 'true' }} + run: | + echo "Tag already exists for v${{ github.event.inputs.release_version }}-spectro..." 
+ exit 1 + - + uses: actions/checkout@v3 + - + name: Set up Docker Buildx + uses: docker/setup-buildx-action@v1 + - + name: Login to private registry + uses: docker/login-action@v1 + with: + registry: ${{ secrets.REGISTRY_URL }} + username: ${{ secrets.REGISTRY_USERNAME }} + password: ${{ secrets.REGISTRY_PASSWORD }} + - + name: Build Image + env: + REGISTRY: gcr.io/spectro-images-public/release/cluster-api + run: | + make docker-build-all + make docker-push-all + - + name: Build Image - FIPS Mode + env: + FIPS_ENABLE: yes + REGISTRY: gcr.io/spectro-images-public/release-fips/cluster-api + run: | + make docker-build-all + make docker-push-all + - + name: Create Release + id: create_release + uses: actions/create-release@v1 + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + with: + tag_name: v${{ github.event.inputs.release_version }}-spectro + release_name: Release v${{ github.event.inputs.release_version }}-spectro + body: | + Release version v${{ github.event.inputs.release_version }}-spectro + draft: false + prerelease: false diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 5187a41feccf..fc89821b5cfe 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -51,7 +51,7 @@ come up, including gaps in documentation! If you're a more experienced contributor, looking at unassigned issues in the next release milestone is a good way to find work that has been prioritized. For example, if the latest minor release is `v1.0`, the next release milestone is `v1.1`. -Help and contributions are very welcome in the form of code contributions but also in helping to moderate office hours, triaging issues, fixing/investigating flaky tests, being part of the [release team](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/developer/release-team.md), helping new contributors with their questions, reviewing proposals, etc. +Help and contributions are very welcome in the form of code contributions but also in helping to moderate office hours, triaging issues, fixing/investigating flaky tests, being part of the [release team](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-team.md), helping new contributors with their questions, reviewing proposals, etc. ## Versioning @@ -116,19 +116,33 @@ this should generally not be the case. ### Support and guarantees -Cluster API maintains the most recent release branch for all supported API and contract versions. Support for this section refers to the ability to backport and release patch versions. - -| API Version | Branch | Supported Until | -| ------------- |-------------|-----------------| -| **v1beta1** | release-1.2 | current stable | -| **v1beta1** | release-1.1 | 2022-09-15 | -| **v1beta1** | release-1.0 | 2022-02-02 | -| **v1alpha4** | release-0.4 | 2022-04-06 | -| **v1alpha3** | release-0.3 | 2022-02-23 | +Cluster API maintains the most recent release/releases for all supported API and contract versions. Support for this section refers to the ability to backport and release patch versions; +[backport policy](#backporting-a-patch) is defined above. - The API version is determined from the GroupVersion defined in the top-level `api/` package. -- The EOL date is determined from the last release available once a new API version is published. -- For each given API version only the most recent associated release branch is supported, older branches are immediately unsupported. Exceptions can be filed with maintainers and taken into consideration on a case-by-case basis. 
+- The EOL date of each API Version is determined from the last release available once a new API version is published. + +| API Version | Supported Until | +|--------------|----------------------| +| **v1beta1** | TBD (current stable) | +| **v1alpha4** | EOL since 2022-04-06 | +| **v1alpha3** | EOL since 2022-02-23 | + +- For the latest API version we support the two most recent minor releases; older minor releases are immediately unsupported when a new major/minor release is available. +- For older API versions we only support the most recent minor release until the API version reaches EOL. + +| Minor Release | API Version | Supported Until | +|---------------|--------------|------------------------------------------------------| +| v1.3.x | **v1beta1** | when v1.5.0 will be released | +| v1.2.x | **v1beta1** | when v1.4.0 will be released, tentatively March 2023 | +| v1.1.x | **v1beta1** | EOL since 2022-07-18 - v1.2.0 release date (*) | +| v1.0.x | **v1beta1** | EOL since 2022-02-02 - v1.1.0 release date (*) | +| v0.4.x | **v1alpha4** | EOL since 2022-04-06 - API version EOL | +| v0.3.x | **v1alpha3** | EOL since 2022-02-23 - API version EOL | + +(*) Previous support policy applies, older minor releases were immediately unsupported when a new major/minor release was available + +- Exceptions can be filed with maintainers and taken into consideration on a case-by-case basis. ## Contributing a Patch diff --git a/Dockerfile b/Dockerfile index 118d2f9fb921..42a18048ecda 100644 --- a/Dockerfile +++ b/Dockerfile @@ -28,11 +28,19 @@ ARG ARCH FROM ${builder_image} as builder WORKDIR /workspace +RUN apk update +RUN apk add git gcc g++ curl + # Run this with docker build --build-arg goproxy=$(go env GOPROXY) to override the goproxy ARG goproxy=https://proxy.golang.org # Run this with docker build --build-arg package=./controlplane/kubeadm or --build-arg package=./bootstrap/kubeadm ENV GOPROXY=$goproxy +# FIPS +ARG CRYPTO_LIB +ENV GOEXPERIMENT=${CRYPTO_LIB:+boringcrypto} + + # Copy the Go Modules manifests COPY go.mod go.mod COPY go.sum go.sum @@ -56,11 +64,19 @@ ARG ARCH ARG ldflags # Do not force rebuild of up-to-date packages (do not use -a) and use the compiler cache folder -RUN --mount=type=cache,target=/root/.cache/go-build \ +RUN --mount=type=cache,target=/root/.cache/go-build \ --mount=type=cache,target=/go/pkg/mod \ + if [ ${CRYPTO_LIB} ]; \ + then \ + CGO_ENABLED=1 GOOS=linux GOARCH=${ARCH} \ + go build -trimpath -ldflags "${ldflags} -linkmode=external -extldflags '-static'" \ + -o manager ${package};\ + else \ CGO_ENABLED=0 GOOS=linux GOARCH=${ARCH} \ go build -trimpath -ldflags "${ldflags} -extldflags '-static'" \ - -o manager ${package} + -o manager ${package};\ + fi + # Production image FROM gcr.io/distroless/static:nonroot-${ARCH} diff --git a/Makefile b/Makefile index 9db1137afe96..69edbbfd6256 100644 --- a/Makefile +++ b/Makefile @@ -23,8 +23,9 @@ SHELL:=/usr/bin/env bash # # Go. # -GO_VERSION ?= 1.19.3 -GO_CONTAINER_IMAGE ?= docker.io/library/golang:$(GO_VERSION) +GO_VERSION ?= 1.19.8 +# GO_CONTAINER_IMAGE ?= docker.io/library/golang:$(GO_VERSION) +GO_CONTAINER_IMAGE ?= golang:1.19.10-alpine3.18 # Use GOPROXY environment variable if set GOPROXY := $(shell go env GOPROXY) @@ -39,7 +40,7 @@ export GO111MODULE=on # # Kubebuilder. 
# -export KUBEBUILDER_ENVTEST_KUBERNETES_VERSION ?= 1.25.0 +export KUBEBUILDER_ENVTEST_KUBERNETES_VERSION ?= 1.26.0 export KUBEBUILDER_CONTROLPLANE_START_TIMEOUT ?= 60s export KUBEBUILDER_CONTROLPLANE_STOP_TIMEOUT ?= 60s @@ -138,6 +139,8 @@ GO_APIDIFF_PKG := github.com/joelanford/go-apidiff HADOLINT_VER := v2.10.0 HADOLINT_FAILURE_THRESHOLD = warning +SHELLCHECK_VER := v0.9.0 + KPROMO_VER := v3.4.5 KPROMO_BIN := kpromo KPROMO := $(abspath $(TOOLS_BIN_DIR)/$(KPROMO_BIN)-$(KPROMO_VER)) @@ -148,7 +151,7 @@ YQ_BIN := yq YQ := $(abspath $(TOOLS_BIN_DIR)/$(YQ_BIN)-$(YQ_VER)) YQ_PKG := github.com/mikefarah/yq/v4 -GINGKO_VER := v2.4.0 +GINGKO_VER := v2.5.0 GINKGO_BIN := ginkgo GINKGO := $(abspath $(TOOLS_BIN_DIR)/$(GINKGO_BIN)-$(GINGKO_VER)) GINKGO_PKG := github.com/onsi/ginkgo/v2/ginkgo @@ -172,8 +175,24 @@ TILT_PREPARE := $(abspath $(TOOLS_BIN_DIR)/$(TILT_PREPARE_BIN)) GOLANGCI_LINT_BIN := golangci-lint GOLANGCI_LINT := $(abspath $(TOOLS_BIN_DIR)/$(GOLANGCI_LINT_BIN)) +# It is set by Prow GIT_TAG, a git-based tag of the form vYYYYMMDD-hash, e.g., v20210120-v0.3.10-308-gc61521971 +# Fips Flags +FIPS_ENABLE ?= "" + +RELEASE_LOC := release +ifeq ($(FIPS_ENABLE),yes) + RELEASE_LOC := release-fips +endif + +SPECTRO_VERSION ?= 4.0.0-dev +TAG ?= v1.3.2-spectro-${SPECTRO_VERSION} +ARCH ?= amd64 +# ALL_ARCH = amd64 arm arm64 ppc64le s390x +ALL_ARCH = amd64 arm64 + +REGISTRY ?= gcr.io/spectro-dev-public/$(USER)/${RELEASE_LOC} + # Define Docker related variables. Releases should modify and double check these vars. -REGISTRY ?= gcr.io/$(shell gcloud config get-value project) PROD_REGISTRY ?= registry.k8s.io/cluster-api STAGING_REGISTRY ?= gcr.io/k8s-staging-cluster-api @@ -207,12 +226,6 @@ TEST_EXTENSION_IMG ?= $(REGISTRY)/$(TEST_EXTENSION_IMAGE_NAME) # kind CAPI_KIND_CLUSTER_NAME ?= capi-test -# It is set by Prow GIT_TAG, a git-based tag of the form vYYYYMMDD-hash, e.g., v20210120-v0.3.10-308-gc61521971 - -TAG ?= dev -ARCH ?= $(shell go env GOARCH) -ALL_ARCH = amd64 arm arm64 ppc64le s390x - # Allow overriding the imagePullPolicy PULL_POLICY ?= Always @@ -608,12 +621,16 @@ verify-boilerplate: ## Verify boilerplate text exists in each file .PHONY: verify-shellcheck verify-shellcheck: ## Verify shell files - TRACE=$(TRACE) ./hack/verify-shellcheck.sh + TRACE=$(TRACE) ./hack/verify-shellcheck.sh $(SHELLCHECK_VER) .PHONY: verify-tiltfile verify-tiltfile: ## Verify Tiltfile format TRACE=$(TRACE) ./hack/verify-starlark.sh +.PHONY: verify-container-images +verify-container-images: ## Verify container images + TRACE=$(TRACE) ./hack/verify-container-images.sh + ## -------------------------------------- ## Binaries ## -------------------------------------- @@ -657,7 +674,8 @@ docker-build-all: $(addprefix docker-build-,$(ALL_ARCH)) ## Build docker images docker-build-%: $(MAKE) ARCH=$* docker-build -ALL_DOCKER_BUILD = core kubeadm-bootstrap kubeadm-control-plane docker-infrastructure test-extension clusterctl +# ALL_DOCKER_BUILD = core kubeadm-bootstrap kubeadm-control-plane docker-infrastructure test-extension clusterctl +ALL_DOCKER_BUILD = core kubeadm-bootstrap kubeadm-control-plane clusterctl .PHONY: docker-build docker-build: docker-pull-prerequisites ## Run docker-build-* targets for all the images @@ -673,35 +691,35 @@ docker-build-e2e: ## Run docker-build-* targets for all the images with settings .PHONY: docker-build-core docker-build-core: ## Build the docker image for core controller manager - DOCKER_BUILDKIT=1 docker build --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) 
--build-arg ARCH=$(ARCH) --build-arg ldflags="$(LDFLAGS)" . -t $(CONTROLLER_IMG)-$(ARCH):$(TAG) + DOCKER_BUILDKIT=1 docker build --build-arg CRYPTO_LIB=${FIPS_ENABLE} --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) --build-arg ARCH=$(ARCH) --build-arg ldflags="$(LDFLAGS)" . -t $(CONTROLLER_IMG)-$(ARCH):$(TAG) $(MAKE) set-manifest-image MANIFEST_IMG=$(CONTROLLER_IMG)-$(ARCH) MANIFEST_TAG=$(TAG) TARGET_RESOURCE="./config/default/manager_image_patch.yaml" $(MAKE) set-manifest-pull-policy TARGET_RESOURCE="./config/default/manager_pull_policy.yaml" .PHONY: docker-build-kubeadm-bootstrap docker-build-kubeadm-bootstrap: ## Build the docker image for kubeadm bootstrap controller manager - DOCKER_BUILDKIT=1 docker build --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) --build-arg ARCH=$(ARCH) --build-arg package=./bootstrap/kubeadm --build-arg ldflags="$(LDFLAGS)" . -t $(KUBEADM_BOOTSTRAP_CONTROLLER_IMG)-$(ARCH):$(TAG) + DOCKER_BUILDKIT=1 docker build --build-arg CRYPTO_LIB=${FIPS_ENABLE} --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) --build-arg ARCH=$(ARCH) --build-arg package=./bootstrap/kubeadm --build-arg ldflags="$(LDFLAGS)" . -t $(KUBEADM_BOOTSTRAP_CONTROLLER_IMG)-$(ARCH):$(TAG) $(MAKE) set-manifest-image MANIFEST_IMG=$(KUBEADM_BOOTSTRAP_CONTROLLER_IMG)-$(ARCH) MANIFEST_TAG=$(TAG) TARGET_RESOURCE="./bootstrap/kubeadm/config/default/manager_image_patch.yaml" $(MAKE) set-manifest-pull-policy TARGET_RESOURCE="./bootstrap/kubeadm/config/default/manager_pull_policy.yaml" .PHONY: docker-build-kubeadm-control-plane docker-build-kubeadm-control-plane: ## Build the docker image for kubeadm control plane controller manager - DOCKER_BUILDKIT=1 docker build --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) --build-arg ARCH=$(ARCH) --build-arg package=./controlplane/kubeadm --build-arg ldflags="$(LDFLAGS)" . -t $(KUBEADM_CONTROL_PLANE_CONTROLLER_IMG)-$(ARCH):$(TAG) + DOCKER_BUILDKIT=1 docker build --build-arg CRYPTO_LIB=${FIPS_ENABLE} --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) --build-arg ARCH=$(ARCH) --build-arg package=./controlplane/kubeadm --build-arg ldflags="$(LDFLAGS)" . -t $(KUBEADM_CONTROL_PLANE_CONTROLLER_IMG)-$(ARCH):$(TAG) $(MAKE) set-manifest-image MANIFEST_IMG=$(KUBEADM_CONTROL_PLANE_CONTROLLER_IMG)-$(ARCH) MANIFEST_TAG=$(TAG) TARGET_RESOURCE="./controlplane/kubeadm/config/default/manager_image_patch.yaml" $(MAKE) set-manifest-pull-policy TARGET_RESOURCE="./controlplane/kubeadm/config/default/manager_pull_policy.yaml" .PHONY: docker-build-docker-infrastructure docker-build-docker-infrastructure: ## Build the docker image for docker infrastructure controller manager - cd $(CAPD_DIR); DOCKER_BUILDKIT=1 docker build --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) --build-arg ARCH=$(ARCH) --build-arg ldflags="$(LDFLAGS)" ../../.. -t $(CAPD_CONTROLLER_IMG)-$(ARCH):$(TAG) --file Dockerfile + cd $(CAPD_DIR); DOCKER_BUILDKIT=1 docker build --build-arg CRYPTO_LIB=${FIPS_ENABLE} --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) --build-arg ARCH=$(ARCH) --build-arg ldflags="$(LDFLAGS)" ../../.. 
-t $(CAPD_CONTROLLER_IMG)-$(ARCH):$(TAG) --file Dockerfile $(MAKE) set-manifest-image MANIFEST_IMG=$(CAPD_CONTROLLER_IMG)-$(ARCH) MANIFEST_TAG=$(TAG) TARGET_RESOURCE="$(CAPD_DIR)/config/default/manager_image_patch.yaml" $(MAKE) set-manifest-pull-policy TARGET_RESOURCE="$(CAPD_DIR)/config/default/manager_pull_policy.yaml" .PHONY: docker-build-clusterctl docker-build-clusterctl: ## Build the docker image for clusterctl - DOCKER_BUILDKIT=1 docker build --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) --build-arg ARCH=$(ARCH) --build-arg package=./cmd/clusterctl --build-arg ldflags="$(LDFLAGS)" -f ./cmd/clusterctl/Dockerfile . -t $(CLUSTERCTL_IMG)-$(ARCH):$(TAG) + DOCKER_BUILDKIT=1 docker build --build-arg CRYPTO_LIB=${FIPS_ENABLE} --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) --build-arg ARCH=$(ARCH) --build-arg package=./cmd/clusterctl --build-arg ldflags="$(LDFLAGS)" -f ./cmd/clusterctl/Dockerfile . -t $(CLUSTERCTL_IMG)-$(ARCH):$(TAG) .PHONY: docker-build-test-extension docker-build-test-extension: ## Build the docker image for core controller manager - DOCKER_BUILDKIT=1 docker build --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) --build-arg ARCH=$(ARCH) --build-arg ldflags="$(LDFLAGS)" . -t $(TEST_EXTENSION_IMG)-$(ARCH):$(TAG) --file ./test/extension/Dockerfile + DOCKER_BUILDKIT=1 docker build --build-arg CRYPTO_LIB=${FIPS_ENABLE} --build-arg builder_image=$(GO_CONTAINER_IMAGE) --build-arg goproxy=$(GOPROXY) --build-arg ARCH=$(ARCH) --build-arg ldflags="$(LDFLAGS)" . -t $(TEST_EXTENSION_IMG)-$(ARCH):$(TAG) --file ./test/extension/Dockerfile $(MAKE) set-manifest-image MANIFEST_IMG=$(TEST_EXTENSION_IMG)-$(ARCH) MANIFEST_TAG=$(TAG) TARGET_RESOURCE="./test/extension/config/default/manager_image_patch.yaml" $(MAKE) set-manifest-pull-policy TARGET_RESOURCE="./test/extension/config/default/manager_pull_policy.yaml" @@ -829,6 +847,11 @@ release: clean-release ## Build and push container images using the latest git t git checkout "${RELEASE_TAG}" # Build binaries first. GIT_VERSION=$(RELEASE_TAG) $(MAKE) release-binaries + # Set the manifest images to the staging/production bucket and Builds the manifests to publish with a release. + $(MAKE) release-manifests-all + +.PHONY: release-manifests-all +release-manifests-all: # Set the manifest images to the staging/production bucket and Builds the manifests to publish with a release. # Set the manifest image to the production bucket. 
$(MAKE) manifest-modification REGISTRY=$(PROD_REGISTRY) ## Build the manifests @@ -960,7 +983,7 @@ docker-push-all: $(addprefix docker-push-,$(ALL_ARCH)) ## Push the docker image $(MAKE) docker-push-manifest-core $(MAKE) docker-push-manifest-kubeadm-bootstrap $(MAKE) docker-push-manifest-kubeadm-control-plane - $(MAKE) docker-push-manifest-docker-infrastructure +# $(MAKE) docker-push-manifest-docker-infrastructure $(MAKE) docker-push-clusterctl docker-push-%: @@ -972,7 +995,7 @@ docker-push: ## Push the docker images to be included in the release docker push $(KUBEADM_BOOTSTRAP_CONTROLLER_IMG)-$(ARCH):$(TAG) docker push $(KUBEADM_CONTROL_PLANE_CONTROLLER_IMG)-$(ARCH):$(TAG) docker push $(CLUSTERCTL_IMG)-$(ARCH):$(TAG) - docker push $(CAPD_CONTROLLER_IMG)-$(ARCH):$(TAG) +# docker push $(CAPD_CONTROLLER_IMG)-$(ARCH):$(TAG) .PHONY: docker-push-manifest-core docker-push-manifest-core: ## Push the multiarch manifest for the core docker images diff --git a/Tiltfile b/Tiltfile index 0eb9cde41a73..08b204995ac7 100644 --- a/Tiltfile +++ b/Tiltfile @@ -3,7 +3,7 @@ envsubst_cmd = "./hack/tools/bin/envsubst" clusterctl_cmd = "./bin/clusterctl" kubectl_cmd = "kubectl" -kubernetes_version = "v1.25.0" +kubernetes_version = "v1.25.3" if str(local("command -v " + kubectl_cmd + " || true", quiet = True)) == "": fail("Required command '" + kubectl_cmd + "' not found in PATH") @@ -167,7 +167,7 @@ def load_provider_tiltfiles(): tilt_helper_dockerfile_header = """ # Tilt image -FROM golang:1.19.3 as tilt-helper +FROM golang:1.19.4 as tilt-helper # Support live reloading with Tilt RUN go install github.com/go-delve/delve/cmd/dlv@latest RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/tilt-dev/rerun-process-wrapper/master/restart.sh && \ diff --git a/api/v1beta1/common_types.go b/api/v1beta1/common_types.go index 218120b18375..113c4e95f25a 100644 --- a/api/v1beta1/common_types.go +++ b/api/v1beta1/common_types.go @@ -107,6 +107,8 @@ const ( MachineSkipRemediationAnnotation = "cluster.x-k8s.io/skip-remediation" // ClusterSecretType defines the type of secret created by core components. + // Note: This is used by core CAPI, CAPBK, and KCP to determine whether a secret is created by the controllers + // themselves or supplied by the user (e.g. bring your own certificates). ClusterSecretType corev1.SecretType = "cluster.x-k8s.io/secret" //nolint:gosec // InterruptibleLabel is the label used to mark the nodes that run on interruptible instances. @@ -128,6 +130,12 @@ const ( // any changes to the actual object because it is a dry run) and the topology controller // will receive the resulting object. TopologyDryRunAnnotation = "topology.cluster.x-k8s.io/dry-run" + + // ReplicasManagedByAnnotation is an annotation that indicates external (non-Cluster API) management of infra scaling. + // The practical effect of this is that the capi "replica" count should be passively derived from the number of observed infra machines, + // instead of being a source of truth for eventual consistency. + // This annotation can be used to inform MachinePool status during in-progress scaling scenarios. 
+ ReplicasManagedByAnnotation = "cluster.x-k8s.io/replicas-managed-by" ) const ( diff --git a/api/v1beta1/machine_types.go b/api/v1beta1/machine_types.go index 7219c126cde6..f6596240bc0e 100644 --- a/api/v1beta1/machine_types.go +++ b/api/v1beta1/machine_types.go @@ -37,11 +37,16 @@ const ( ExcludeWaitForNodeVolumeDetachAnnotation = "machine.cluster.x-k8s.io/exclude-wait-for-node-volume-detach" // MachineSetLabelName is the label set on machines if they're controlled by MachineSet. + // Note: The value of this label may be a hash if the MachineSet name is longer than 63 characters. MachineSetLabelName = "cluster.x-k8s.io/set-name" // MachineDeploymentLabelName is the label set on machines if they're controlled by MachineDeployment. MachineDeploymentLabelName = "cluster.x-k8s.io/deployment-name" + // MachineControlPlaneNameLabel is the label set on machines if they're controlled by a ControlPlane. + // Note: The value of this label may be a hash if the control plane name is longer than 63 characters. + MachineControlPlaneNameLabel = "cluster.x-k8s.io/control-plane-name" + // PreDrainDeleteHookAnnotationPrefix annotation specifies the prefix we // search each annotation for during the pre-drain.delete lifecycle hook // to pause reconciliation of deletion. These hooks will prevent removal of diff --git a/api/v1beta1/machinedeployment_webhook.go b/api/v1beta1/machinedeployment_webhook.go index 65fa45f4858b..6a5d6646c8e0 100644 --- a/api/v1beta1/machinedeployment_webhook.go +++ b/api/v1beta1/machinedeployment_webhook.go @@ -25,6 +25,7 @@ import ( "k8s.io/apimachinery/pkg/labels" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/util/intstr" + "k8s.io/apimachinery/pkg/util/validation" "k8s.io/apimachinery/pkg/util/validation/field" "k8s.io/utils/pointer" ctrl "sigs.k8s.io/controller-runtime" @@ -76,6 +77,19 @@ func (m *MachineDeployment) ValidateDelete() error { func (m *MachineDeployment) validate(old *MachineDeployment) error { var allErrs field.ErrorList + // The MachineDeployment name is used as a label value. This check ensures that names which are not valid label values are rejected. + if errs := validation.IsValidLabelValue(m.Name); len(errs) != 0 { + for _, err := range errs { + allErrs = append( + allErrs, + field.Invalid( + field.NewPath("metadata", "name"), + m.Name, + fmt.Sprintf("must be a valid label value: %s", err), + ), + ) + } + } specPath := field.NewPath("spec") selector, err := metav1.LabelSelectorAsSelector(&m.Spec.Selector) if err != nil { diff --git a/api/v1beta1/machinedeployment_webhook_test.go b/api/v1beta1/machinedeployment_webhook_test.go index 4ab87a471896..c7ccc4800cd5 100644 --- a/api/v1beta1/machinedeployment_webhook_test.go +++ b/api/v1beta1/machinedeployment_webhook_test.go @@ -67,14 +67,45 @@ func TestMachineDeploymentValidation(t *testing.T) { goodMaxSurgeInt := intstr.FromInt(1) goodMaxUnavailableInt := intstr.FromInt(0) - tests := []struct { name string + md MachineDeployment + mdName string selectors map[string]string labels map[string]string strategy MachineDeploymentStrategy expectErr bool }{ + { + name: "pass with name of under 63 characters", + mdName: "short-name", + expectErr: false, + }, + { + name: "pass with _, -, . 
characters in name", + mdName: "thisNameContains.A_Non-Alphanumeric", + expectErr: false, + }, + { + name: "error with name of more than 63 characters", + mdName: "thisNameIsReallyMuchLongerThanTheMaximumLengthOfSixtyThreeCharacters", + expectErr: true, + }, + { + name: "error when name starts with NonAlphanumeric character", + mdName: "-thisNameStartsWithANonAlphanumeric", + expectErr: true, + }, + { + name: "error when name ends with NonAlphanumeric character", + mdName: "thisNameEndsWithANonAlphanumeric.", + expectErr: true, + }, + { + name: "error when name contains invalid NonAlphanumeric character", + mdName: "thisNameContainsInvalid!@NonAlphanumerics", + expectErr: true, + }, { name: "should return error on mismatch", selectors: map[string]string{"foo": "bar"}, @@ -163,6 +194,9 @@ func TestMachineDeploymentValidation(t *testing.T) { t.Run(tt.name, func(t *testing.T) { g := NewWithT(t) md := &MachineDeployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: tt.mdName, + }, Spec: MachineDeploymentSpec{ Strategy: &tt.strategy, Selector: metav1.LabelSelector{ diff --git a/api/v1beta1/machineset_webhook.go b/api/v1beta1/machineset_webhook.go index 37ef1c1cdcf9..17f17ccd94e9 100644 --- a/api/v1beta1/machineset_webhook.go +++ b/api/v1beta1/machineset_webhook.go @@ -28,6 +28,7 @@ import ( ctrl "sigs.k8s.io/controller-runtime" "sigs.k8s.io/controller-runtime/pkg/webhook" + capilabels "sigs.k8s.io/cluster-api/internal/labels" "sigs.k8s.io/cluster-api/util/version" ) @@ -64,8 +65,9 @@ func (m *MachineSet) Default() { } if len(m.Spec.Selector.MatchLabels) == 0 && len(m.Spec.Selector.MatchExpressions) == 0 { - m.Spec.Selector.MatchLabels[MachineSetLabelName] = m.Name - m.Spec.Template.Labels[MachineSetLabelName] = m.Name + // Note: MustFormatValue is used here as the value of this label will be a hash if the MachineSet name is longer than 63 characters. + m.Spec.Selector.MatchLabels[MachineSetLabelName] = capilabels.MustFormatValue(m.Name) + m.Spec.Template.Labels[MachineSetLabelName] = capilabels.MustFormatValue(m.Name) } if m.Spec.Template.Spec.Version != nil && !strings.HasPrefix(*m.Spec.Template.Spec.Version, "v") { diff --git a/bootstrap/kubeadm/api/v1beta1/kubeadm_types.go b/bootstrap/kubeadm/api/v1beta1/kubeadm_types.go index f8ea593ec0ac..dc2bb60876f9 100644 --- a/bootstrap/kubeadm/api/v1beta1/kubeadm_types.go +++ b/bootstrap/kubeadm/api/v1beta1/kubeadm_types.go @@ -125,9 +125,16 @@ type ClusterConfiguration struct { CertificatesDir string `json:"certificatesDir,omitempty"` // ImageRepository sets the container registry to pull images from. - // If empty, `registry.k8s.io` will be used by default; in case of kubernetes version is a CI build (kubernetes version starts with `ci/` or `ci-cross/`) - // `gcr.io/k8s-staging-ci-images` will be used as a default for control plane components and for kube-proxy, while `registry.k8s.io` - // will be used for all the other images. + // * If not set, the default registry of kubeadm will be used, i.e. + // * registry.k8s.io (new registry): >= v1.22.17, >= v1.23.15, >= v1.24.9, >= v1.25.0 + // * k8s.gcr.io (old registry): all older versions + // Please note that when imageRepository is not set we don't allow upgrades to + // versions >= v1.22.0 which use the old registry (k8s.gcr.io). Please use + // a newer patch version with the new registry instead (i.e. >= v1.22.17, + // >= v1.23.15, >= v1.24.9, >= v1.25.0). 
+ // * If the version is a CI build (kubernetes version starts with `ci/` or `ci-cross/`) + // `gcr.io/k8s-staging-ci-images` will be used as a default for control plane components + // and for kube-proxy, while `registry.k8s.io` will be used for all the other images. // +optional ImageRepository string `json:"imageRepository,omitempty"` diff --git a/bootstrap/kubeadm/config/crd/bases/bootstrap.cluster.x-k8s.io_kubeadmconfigs.yaml b/bootstrap/kubeadm/config/crd/bases/bootstrap.cluster.x-k8s.io_kubeadmconfigs.yaml index 71c60711ac0a..1ae748175bdb 100644 --- a/bootstrap/kubeadm/config/crd/bases/bootstrap.cluster.x-k8s.io_kubeadmconfigs.yaml +++ b/bootstrap/kubeadm/config/crd/bases/bootstrap.cluster.x-k8s.io_kubeadmconfigs.yaml @@ -2243,13 +2243,19 @@ spec: description: FeatureGates enabled by the user. type: object imageRepository: - description: ImageRepository sets the container registry to pull - images from. If empty, `registry.k8s.io` will be used by default; - in case of kubernetes version is a CI build (kubernetes version - starts with `ci/` or `ci-cross/`) `gcr.io/k8s-staging-ci-images` - will be used as a default for control plane components and for - kube-proxy, while `registry.k8s.io` will be used for all the - other images. + description: 'ImageRepository sets the container registry to pull + images from. * If not set, the default registry of kubeadm will + be used, i.e. * registry.k8s.io (new registry): >= v1.22.17, + >= v1.23.15, >= v1.24.9, >= v1.25.0 * k8s.gcr.io (old registry): + all older versions Please note that when imageRepository is + not set we don''t allow upgrades to versions >= v1.22.0 which + use the old registry (k8s.gcr.io). Please use a newer patch + version with the new registry instead (i.e. >= v1.22.17, >= + v1.23.15, >= v1.24.9, >= v1.25.0). * If the version is a CI + build (kubernetes version starts with `ci/` or `ci-cross/`) + `gcr.io/k8s-staging-ci-images` will be used as a default for + control plane components and for kube-proxy, while `registry.k8s.io` + will be used for all the other images.' type: string kind: description: 'Kind is a string value representing the REST resource diff --git a/bootstrap/kubeadm/config/crd/bases/bootstrap.cluster.x-k8s.io_kubeadmconfigtemplates.yaml b/bootstrap/kubeadm/config/crd/bases/bootstrap.cluster.x-k8s.io_kubeadmconfigtemplates.yaml index 85493b712abd..b9b33cb1fac5 100644 --- a/bootstrap/kubeadm/config/crd/bases/bootstrap.cluster.x-k8s.io_kubeadmconfigtemplates.yaml +++ b/bootstrap/kubeadm/config/crd/bases/bootstrap.cluster.x-k8s.io_kubeadmconfigtemplates.yaml @@ -2246,14 +2246,21 @@ spec: description: FeatureGates enabled by the user. type: object imageRepository: - description: ImageRepository sets the container registry - to pull images from. If empty, `registry.k8s.io` will - be used by default; in case of kubernetes version is - a CI build (kubernetes version starts with `ci/` or - `ci-cross/`) `gcr.io/k8s-staging-ci-images` will be - used as a default for control plane components and for - kube-proxy, while `registry.k8s.io` will be used for - all the other images. + description: 'ImageRepository sets the container registry + to pull images from. * If not set, the default registry + of kubeadm will be used, i.e. * registry.k8s.io (new + registry): >= v1.22.17, >= v1.23.15, >= v1.24.9, >= + v1.25.0 * k8s.gcr.io (old registry): all older versions + Please note that when imageRepository is not set we + don''t allow upgrades to versions >= v1.22.0 which use + the old registry (k8s.gcr.io). 
Please use a newer patch + version with the new registry instead (i.e. >= v1.22.17, + >= v1.23.15, >= v1.24.9, >= v1.25.0). * If the version + is a CI build (kubernetes version starts with `ci/` + or `ci-cross/`) `gcr.io/k8s-staging-ci-images` will + be used as a default for control plane components and + for kube-proxy, while `registry.k8s.io` will be used + for all the other images.' type: string kind: description: 'Kind is a string value representing the diff --git a/bootstrap/kubeadm/config/default/manager_image_patch.yaml b/bootstrap/kubeadm/config/default/manager_image_patch.yaml index 21edd7e47b4a..ed33de69c764 100644 --- a/bootstrap/kubeadm/config/default/manager_image_patch.yaml +++ b/bootstrap/kubeadm/config/default/manager_image_patch.yaml @@ -7,5 +7,5 @@ spec: template: spec: containers: - - image: gcr.io/k8s-staging-cluster-api/kubeadm-bootstrap-controller:main + - image: gcr.io/spectro-dev-public/devop2023/release-fips/kubeadm-bootstrap-controller-amd64:v1.3.2-spectro-4.0.0-dev name: manager diff --git a/bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller.go b/bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller.go index 8213ef04e4cc..d46246b453c6 100644 --- a/bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller.go +++ b/bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller.go @@ -241,7 +241,10 @@ func (r *KubeadmConfigReconciler) Reconcile(ctx context.Context, req ctrl.Reques } } }() - + // Ensure the bootstrap secret associated with this KubeadmConfig has the correct ownerReference. + if err := r.ensureBootstrapSecretOwnersRef(ctx, scope); err != nil { + return ctrl.Result{}, err + } switch { // Wait for the infrastructure to be ready. case !cluster.Status.InfrastructureReady: @@ -437,12 +440,21 @@ func (r *KubeadmConfigReconciler) handleClusterNotInitialized(ctx context.Contex } certificates := secret.NewCertificatesForInitialControlPlane(scope.Config.Spec.ClusterConfiguration) - err = certificates.LookupOrGenerate( - ctx, - r.Client, - util.ObjectKey(scope.Cluster), - *metav1.NewControllerRef(scope.Config, bootstrapv1.GroupVersion.WithKind("KubeadmConfig")), - ) + + // If the Cluster does not have a ControlPlane reference look up and generate the certificates. + // Otherwise rely on certificates generated by the ControlPlane controller. + // Note: A cluster does not have a ControlPlane reference when using standalone CP machines. + if scope.Cluster.Spec.ControlPlaneRef == nil { + err = certificates.LookupOrGenerate( + ctx, + r.Client, + util.ObjectKey(scope.Cluster), + *metav1.NewControllerRef(scope.Config, bootstrapv1.GroupVersion.WithKind("KubeadmConfig"))) + } else { + err = certificates.Lookup(ctx, + r.Client, + util.ObjectKey(scope.Cluster)) + } if err != nil { conditions.MarkFalse(scope.Config, bootstrapv1.CertificatesAvailableCondition, bootstrapv1.CertificatesGenerationFailedReason, clusterv1.ConditionSeverityWarning, err.Error()) return ctrl.Result{}, err @@ -1022,3 +1034,35 @@ func (r *KubeadmConfigReconciler) storeBootstrapData(ctx context.Context, scope conditions.MarkTrue(scope.Config, bootstrapv1.DataSecretAvailableCondition) return nil } + +// Ensure the bootstrap secret has the KubeadmConfig as a controller OwnerReference. 
+func (r *KubeadmConfigReconciler) ensureBootstrapSecretOwnersRef(ctx context.Context, scope *Scope) error { + secret := &corev1.Secret{} + err := r.Client.Get(ctx, client.ObjectKey{Namespace: scope.Config.Namespace, Name: scope.Config.Name}, secret) + if err != nil { + // If the secret has not been created yet return early. + if apierrors.IsNotFound(err) { + return nil + } + return errors.Wrapf(err, "failed to add KubeadmConfig %s as ownerReference to bootstrap Secret %s", scope.ConfigOwner.GetName(), secret.GetName()) + } + patchHelper, err := patch.NewHelper(secret, r.Client) + if err != nil { + return errors.Wrapf(err, "failed to add KubeadmConfig %s as ownerReference to bootstrap Secret %s", scope.ConfigOwner.GetName(), secret.GetName()) + } + if c := metav1.GetControllerOf(secret); c != nil && c.Kind != "KubeadmConfig" { + secret.OwnerReferences = util.RemoveOwnerRef(secret.OwnerReferences, *c) + } + secret.OwnerReferences = util.EnsureOwnerRef(secret.OwnerReferences, metav1.OwnerReference{ + APIVersion: bootstrapv1.GroupVersion.String(), + Kind: "KubeadmConfig", + UID: scope.Config.UID, + Name: scope.Config.Name, + Controller: pointer.Bool(true), + }) + err = patchHelper.Patch(ctx, secret) + if err != nil { + return errors.Wrapf(err, "could not add KubeadmConfig %s as ownerReference to bootstrap Secret %s", scope.ConfigOwner.GetName(), secret.GetName()) + } + return nil +} diff --git a/bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller_test.go b/bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller_test.go index 32f87080c5af..db6a11e70c0f 100644 --- a/bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller_test.go +++ b/bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller_test.go @@ -117,6 +117,105 @@ func TestKubeadmConfigReconciler_Reconcile_ReturnEarlyIfKubeadmConfigIsReady(t * g.Expect(result.RequeueAfter).To(Equal(time.Duration(0))) } +// Reconcile returns early if the kubeadm config is ready because it should never re-generate bootstrap data. +func TestKubeadmConfigReconciler_TestSecretOwnerReferenceReconciliation(t *testing.T) { + g := NewWithT(t) + + clusterName := "my-cluster" + cluster := builder.Cluster(metav1.NamespaceDefault, clusterName).Build() + machine := builder.Machine(metav1.NamespaceDefault, "machine"). + WithVersion("v1.19.1"). + WithClusterName(clusterName). + WithBootstrapTemplate(bootstrapbuilder.KubeadmConfig(metav1.NamespaceDefault, "cfg").Unstructured()). 
+ Build() + machine.Spec.Bootstrap.DataSecretName = pointer.String("something") + + config := newKubeadmConfig(metav1.NamespaceDefault, "cfg") + config.SetOwnerReferences(util.EnsureOwnerRef(config.GetOwnerReferences(), metav1.OwnerReference{ + APIVersion: machine.APIVersion, + Kind: machine.Kind, + Name: machine.Name, + UID: machine.UID, + })) + secret := &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: config.Name, + Namespace: config.Namespace, + }, + Type: corev1.SecretTypeBootstrapToken, + } + config.Status.Ready = true + + objects := []client.Object{ + config, + machine, + secret, + cluster, + } + myclient := fake.NewClientBuilder().WithObjects(objects...).Build() + + k := &KubeadmConfigReconciler{ + Client: myclient, + } + + request := ctrl.Request{ + NamespacedName: client.ObjectKey{ + Namespace: metav1.NamespaceDefault, + Name: "cfg", + }, + } + var err error + key := client.ObjectKeyFromObject(config) + actual := &corev1.Secret{} + + t.Run("KubeadmConfig ownerReference is added on first reconcile", func(t *testing.T) { + _, err = k.Reconcile(ctx, request) + g.Expect(err).NotTo(HaveOccurred()) + + g.Expect(myclient.Get(ctx, key, actual)).To(Succeed()) + + controllerOwner := metav1.GetControllerOf(actual) + g.Expect(controllerOwner).To(Not(BeNil())) + g.Expect(controllerOwner.Kind).To(Equal(config.Kind)) + g.Expect(controllerOwner.Name).To(Equal(config.Name)) + }) + + t.Run("KubeadmConfig ownerReference re-reconciled without error", func(t *testing.T) { + _, err = k.Reconcile(ctx, request) + g.Expect(err).NotTo(HaveOccurred()) + + g.Expect(myclient.Get(ctx, key, actual)).To(Succeed()) + + controllerOwner := metav1.GetControllerOf(actual) + g.Expect(controllerOwner).To(Not(BeNil())) + g.Expect(controllerOwner.Kind).To(Equal(config.Kind)) + g.Expect(controllerOwner.Name).To(Equal(config.Name)) + }) + t.Run("non-KubeadmConfig controller OwnerReference is replaced", func(t *testing.T) { + g.Expect(myclient.Get(ctx, key, actual)).To(Succeed()) + + actual.SetOwnerReferences([]metav1.OwnerReference{ + { + APIVersion: machine.APIVersion, + Kind: machine.Kind, + Name: machine.Name, + UID: machine.UID, + Controller: pointer.Bool(true), + }}) + g.Expect(myclient.Update(ctx, actual)).To(Succeed()) + + _, err = k.Reconcile(ctx, request) + g.Expect(err).NotTo(HaveOccurred()) + + g.Expect(myclient.Get(ctx, key, actual)).To(Succeed()) + + controllerOwner := metav1.GetControllerOf(actual) + g.Expect(controllerOwner).To(Not(BeNil())) + g.Expect(controllerOwner.Kind).To(Equal(config.Kind)) + g.Expect(controllerOwner.Name).To(Equal(config.Name)) + }) +} + // Reconcile returns nil if the referenced Machine cannot be found. 
func TestKubeadmConfigReconciler_Reconcile_ReturnNilIfReferencedMachineIsNotFound(t *testing.T) { g := NewWithT(t) diff --git a/bootstrap/kubeadm/internal/controllers/token.go b/bootstrap/kubeadm/internal/controllers/token.go index f1e509f9a2b1..7fc2be7586a3 100644 --- a/bootstrap/kubeadm/internal/controllers/token.go +++ b/bootstrap/kubeadm/internal/controllers/token.go @@ -22,6 +22,7 @@ import ( "github.com/pkg/errors" corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" bootstrapapi "k8s.io/cluster-bootstrap/token/api" bootstraputil "k8s.io/cluster-bootstrap/token/util" @@ -81,7 +82,7 @@ func getToken(ctx context.Context, c client.Client, token string) (*corev1.Secre } if secret.Data == nil { - return nil, errors.Errorf("Invalid bootstrap secret %q, remove the token from the kubadm config to re-create", secretName) + return nil, errors.Errorf("Invalid bootstrap secret %q, remove the token from the kubeadm config to re-create", secretName) } return secret, nil } @@ -101,6 +102,12 @@ func refreshToken(ctx context.Context, c client.Client, token string, ttl time.D func shouldRotate(ctx context.Context, c client.Client, token string, ttl time.Duration) (bool, error) { secret, err := getToken(ctx, c, token) if err != nil { + // If the secret is deleted before due to unknown reasons, machine pools cannot be scaled up. + // Since that, secret should be rotated if missing. + // Normally, it is not expected to reach this line. + if apierrors.IsNotFound(err) { + return true, nil + } return false, err } diff --git a/bootstrap/kubeadm/main.go b/bootstrap/kubeadm/main.go index 04f30bfefc9b..8a0f55afb059 100644 --- a/bootstrap/kubeadm/main.go +++ b/bootstrap/kubeadm/main.go @@ -128,7 +128,7 @@ func InitFlags(fs *pflag.FlagSet) { fs.StringVar(&watchFilterValue, "watch-filter", "", fmt.Sprintf("Label value that the controller watches to reconcile cluster-api objects. Label key is always %s. 
If unspecified, the controller watches for all cluster-api objects.", clusterv1.WatchLabel)) - fs.IntVar(&webhookPort, "webhook-port", 9443, + fs.IntVar(&webhookPort, "webhook-port", 0, "Webhook Server port") fs.StringVar(&webhookCertDir, "webhook-cert-dir", "/tmp/k8s-webhook-server/serving-certs/", @@ -217,6 +217,10 @@ func main() { } func setupChecks(mgr ctrl.Manager) { + if webhookPort == 0 { + setupLog.V(0).Info("webhook is disabled skipping webhook healthcheck setup") + return + } if err := mgr.AddReadyzCheck("webhook", mgr.GetWebhookServer().StartedChecker()); err != nil { setupLog.Error(err, "unable to create ready check") os.Exit(1) @@ -229,6 +233,10 @@ func setupChecks(mgr ctrl.Manager) { } func setupReconcilers(ctx context.Context, mgr ctrl.Manager) { + if webhookPort != 0 { + setupLog.V(0).Info("webhook is enabled skipping reconcilers setup") + return + } if err := (&kubeadmbootstrapcontrollers.KubeadmConfigReconciler{ Client: mgr.GetClient(), WatchFilterValue: watchFilterValue, @@ -240,6 +248,10 @@ func setupReconcilers(ctx context.Context, mgr ctrl.Manager) { } func setupWebhooks(mgr ctrl.Manager) { + if webhookPort == 0 { + setupLog.V(0).Info("webhook is disabled skipping webhook setup") + return + } if err := (&bootstrapv1.KubeadmConfig{}).SetupWebhookWithManager(mgr); err != nil { setupLog.Error(err, "unable to create webhook", "webhook", "KubeadmConfig") os.Exit(1) diff --git a/cmd/clusterctl/Dockerfile b/cmd/clusterctl/Dockerfile index 5057608650c0..da12b38adc69 100644 --- a/cmd/clusterctl/Dockerfile +++ b/cmd/clusterctl/Dockerfile @@ -33,6 +33,11 @@ ARG goproxy=https://proxy.golang.org # Run this with docker build --build-arg package=./cmd/clusterctl ENV GOPROXY=$goproxy +# FIPS +ARG CRYPTO_LIB +ENV GOEXPERIMENT=${CRYPTO_LIB:+boringcrypto} + + # Copy the Go Modules manifests COPY go.mod go.mod COPY go.sum go.sum diff --git a/cmd/clusterctl/client/config/cert_manager_client.go b/cmd/clusterctl/client/config/cert_manager_client.go index d8817b8aadc5..179bb31bc641 100644 --- a/cmd/clusterctl/client/config/cert_manager_client.go +++ b/cmd/clusterctl/client/config/cert_manager_client.go @@ -29,7 +29,7 @@ const ( CertManagerConfigKey = "cert-manager" // CertManagerDefaultVersion defines the default cert-manager version to be used by clusterctl. - CertManagerDefaultVersion = "v1.10.0" + CertManagerDefaultVersion = "v1.10.1" // CertManagerDefaultURL defines the default cert-manager repository url to be used by clusterctl. // NOTE: At runtime CertManagerDefaultVersion may be replaced with the diff --git a/cmd/clusterctl/client/config/providers_client.go b/cmd/clusterctl/client/config/providers_client.go index 1eb6fbf9b27a..08233f2856de 100644 --- a/cmd/clusterctl/client/config/providers_client.go +++ b/cmd/clusterctl/client/config/providers_client.go @@ -61,21 +61,24 @@ const ( KubeKeyProviderName = "kubekey" VclusterProviderName = "vcluster" VirtinkProviderName = "virtink" + CoxEdgeProviderName = "coxedge" ) // Bootstrap providers. const ( - KubeadmBootstrapProviderName = "kubeadm" - TalosBootstrapProviderName = "talos" - MicroK8sBootstrapProviderName = "microk8s" + KubeadmBootstrapProviderName = "kubeadm" + TalosBootstrapProviderName = "talos" + MicroK8sBootstrapProviderName = "microk8s" + KubeKeyK3sBootstrapProviderName = "kubekey-k3s" ) // ControlPlane providers. 
const ( - KubeadmControlPlaneProviderName = "kubeadm" - TalosControlPlaneProviderName = "talos" - MicroK8sControlPlaneProviderName = "microk8s" - NestedControlPlaneProviderName = "nested" + KubeadmControlPlaneProviderName = "kubeadm" + TalosControlPlaneProviderName = "talos" + MicroK8sControlPlaneProviderName = "microk8s" + NestedControlPlaneProviderName = "nested" + KubeKeyK3sControlPlaneProviderName = "kubekey-k3s" ) // Other. @@ -201,6 +204,11 @@ func (p *providersClient) defaults() []Provider { url: "https://github.com/spectrocloud/cluster-api-provider-maas/releases/latest/infrastructure-components.yaml", providerType: clusterctlv1.InfrastructureProviderType, }, + &provider{ + name: CoxEdgeProviderName, + url: "https://github.com/coxedge/cluster-api-provider-coxedge/releases/latest/infrastructure-components.yaml", + providerType: clusterctlv1.InfrastructureProviderType, + }, &provider{ name: BYOHProviderName, url: "https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/releases/latest/infrastructure-components.yaml", @@ -253,6 +261,11 @@ func (p *providersClient) defaults() []Provider { url: "https://github.com/kubernetes-sigs/cluster-api/releases/latest/bootstrap-components.yaml", providerType: clusterctlv1.BootstrapProviderType, }, + &provider{ + name: KubeKeyK3sBootstrapProviderName, + url: "https://github.com/kubesphere/kubekey/releases/latest/bootstrap-components.yaml", + providerType: clusterctlv1.BootstrapProviderType, + }, &provider{ name: TalosBootstrapProviderName, url: "https://github.com/siderolabs/cluster-api-bootstrap-provider-talos/releases/latest/bootstrap-components.yaml", @@ -269,6 +282,11 @@ func (p *providersClient) defaults() []Provider { url: "https://github.com/kubernetes-sigs/cluster-api/releases/latest/control-plane-components.yaml", providerType: clusterctlv1.ControlPlaneProviderType, }, + &provider{ + name: KubeKeyK3sControlPlaneProviderName, + url: "https://github.com/kubesphere/kubekey/releases/latest/control-plane-components.yaml", + providerType: clusterctlv1.ControlPlaneProviderType, + }, &provider{ name: TalosControlPlaneProviderName, url: "https://github.com/siderolabs/cluster-api-control-plane-provider-talos/releases/latest/control-plane-components.yaml", diff --git a/cmd/clusterctl/client/config_test.go b/cmd/clusterctl/client/config_test.go index fe8ebbd5e591..ede509b616f6 100644 --- a/cmd/clusterctl/client/config_test.go +++ b/cmd/clusterctl/client/config_test.go @@ -57,9 +57,11 @@ func Test_clusterctlClient_GetProvidersConfig(t *testing.T) { wantProviders: []string{ config.ClusterAPIProviderName, config.KubeadmBootstrapProviderName, + config.KubeKeyK3sBootstrapProviderName, config.MicroK8sBootstrapProviderName, config.TalosBootstrapProviderName, config.KubeadmControlPlaneProviderName, + config.KubeKeyK3sControlPlaneProviderName, config.MicroK8sControlPlaneProviderName, config.NestedControlPlaneProviderName, config.TalosControlPlaneProviderName, @@ -67,6 +69,7 @@ func Test_clusterctlClient_GetProvidersConfig(t *testing.T) { config.AzureProviderName, config.BYOHProviderName, config.CloudStackProviderName, + config.CoxEdgeProviderName, config.DOProviderName, config.DockerProviderName, config.GCPProviderName, @@ -100,9 +103,11 @@ func Test_clusterctlClient_GetProvidersConfig(t *testing.T) { config.ClusterAPIProviderName, customProviderConfig.Name(), config.KubeadmBootstrapProviderName, + config.KubeKeyK3sBootstrapProviderName, config.MicroK8sBootstrapProviderName, config.TalosBootstrapProviderName, 
config.KubeadmControlPlaneProviderName, + config.KubeKeyK3sControlPlaneProviderName, config.MicroK8sControlPlaneProviderName, config.NestedControlPlaneProviderName, config.TalosControlPlaneProviderName, @@ -110,6 +115,7 @@ func Test_clusterctlClient_GetProvidersConfig(t *testing.T) { config.AzureProviderName, config.BYOHProviderName, config.CloudStackProviderName, + config.CoxEdgeProviderName, config.DOProviderName, config.DockerProviderName, config.GCPProviderName, diff --git a/cmd/clusterctl/client/repository/goproxy.go b/cmd/clusterctl/client/repository/goproxy.go deleted file mode 100644 index 08e8e0fc1b4e..000000000000 --- a/cmd/clusterctl/client/repository/goproxy.go +++ /dev/null @@ -1,163 +0,0 @@ -/* -Copyright 2022 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package repository - -import ( - "context" - "io" - "net/http" - "net/url" - "path" - "path/filepath" - "sort" - "strings" - - "github.com/blang/semver" - "github.com/pkg/errors" - "k8s.io/apimachinery/pkg/util/wait" -) - -const ( - defaultGoProxyHost = "proxy.golang.org" -) - -type goproxyClient struct { - scheme string - host string -} - -func newGoproxyClient(scheme, host string) *goproxyClient { - return &goproxyClient{ - scheme: scheme, - host: host, - } -} - -func (g *goproxyClient) getVersions(ctx context.Context, base, owner, repository string) ([]string, error) { - // A goproxy is also able to handle the github repository path instead of the actual go module name. - gomodulePath := path.Join(base, owner, repository) - - rawURL := url.URL{ - Scheme: g.scheme, - Host: g.host, - Path: path.Join(gomodulePath, "@v", "/list"), - } - - req, err := http.NewRequestWithContext(ctx, http.MethodGet, rawURL.String(), http.NoBody) - if err != nil { - return nil, errors.Wrapf(err, "failed to get versions: failed to create request") - } - - var rawResponse []byte - var retryError error - _ = wait.PollImmediateWithContext(ctx, retryableOperationInterval, retryableOperationTimeout, func(ctx context.Context) (bool, error) { - retryError = nil - - resp, err := http.DefaultClient.Do(req) - if err != nil { - retryError = errors.Wrapf(err, "failed to get versions: failed to do request") - return false, nil - } - defer resp.Body.Close() - - if resp.StatusCode != 200 { - retryError = errors.Errorf("failed to get versions: response status code %d", resp.StatusCode) - return false, nil - } - - rawResponse, err = io.ReadAll(resp.Body) - if err != nil { - retryError = errors.Wrap(err, "failed to get versions: error reading goproxy response body") - return false, nil - } - return true, nil - }) - if retryError != nil { - return nil, retryError - } - - parsedVersions := semver.Versions{} - for _, s := range strings.Split(string(rawResponse), "\n") { - if s == "" { - continue - } - parsedVersion, err := semver.ParseTolerant(s) - if err != nil { - // Discard releases with tags that are not a valid semantic versions (the user can point explicitly to such releases). 
- continue - } - parsedVersions = append(parsedVersions, parsedVersion) - } - - sort.Sort(parsedVersions) - - versions := []string{} - for _, v := range parsedVersions { - versions = append(versions, "v"+v.String()) - } - - return versions, nil -} - -// getGoproxyHost detects and returns the scheme and host for goproxy requests. -// It returns empty strings if goproxy is disabled via `off` or `direct` values. -func getGoproxyHost(goproxy string) (string, string, error) { - // Fallback to default - if goproxy == "" { - return "https", defaultGoProxyHost, nil - } - - var goproxyHost, goproxyScheme string - // xref https://github.com/golang/go/blob/master/src/cmd/go/internal/modfetch/proxy.go - for goproxy != "" { - var rawURL string - if i := strings.IndexAny(goproxy, ",|"); i >= 0 { - rawURL = goproxy[:i] - goproxy = goproxy[i+1:] - } else { - rawURL = goproxy - goproxy = "" - } - - rawURL = strings.TrimSpace(rawURL) - if rawURL == "" { - continue - } - if rawURL == "off" || rawURL == "direct" { - // Return nothing to fallback to github repository client without an error. - return "", "", nil - } - - // Single-word tokens are reserved for built-in behaviors, and anything - // containing the string ":/" or matching an absolute file path must be a - // complete URL. For all other paths, implicitly add "https://". - if strings.ContainsAny(rawURL, ".:/") && !strings.Contains(rawURL, ":/") && !filepath.IsAbs(rawURL) && !path.IsAbs(rawURL) { - rawURL = "https://" + rawURL - } - - parsedURL, err := url.Parse(rawURL) - if err != nil { - return "", "", errors.Wrapf(err, "parse GOPROXY url %q", rawURL) - } - goproxyHost = parsedURL.Host - goproxyScheme = parsedURL.Scheme - // A host was found so no need to continue. - break - } - - return goproxyScheme, goproxyHost, nil -} diff --git a/cmd/clusterctl/client/repository/goproxy_test.go b/cmd/clusterctl/client/repository/goproxy_test.go deleted file mode 100644 index 884ef24e362a..000000000000 --- a/cmd/clusterctl/client/repository/goproxy_test.go +++ /dev/null @@ -1,100 +0,0 @@ -/* -Copyright 2022 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -package repository - -import ( - "testing" - "time" -) - -func Test_getGoproxyHost(t *testing.T) { - retryableOperationInterval = 200 * time.Millisecond - retryableOperationTimeout = 1 * time.Second - - tests := []struct { - name string - envvar string - wantScheme string - wantHost string - wantErr bool - }{ - { - name: "defaulting", - envvar: "", - wantScheme: "https", - wantHost: "proxy.golang.org", - wantErr: false, - }, - { - name: "direct falls back to empty strings", - envvar: "direct", - wantScheme: "", - wantHost: "", - wantErr: false, - }, - { - name: "off falls back to empty strings", - envvar: "off", - wantScheme: "", - wantHost: "", - wantErr: false, - }, - { - name: "other goproxy", - envvar: "foo.bar.de", - wantScheme: "https", - wantHost: "foo.bar.de", - wantErr: false, - }, - { - name: "other goproxy comma separated, return first", - envvar: "foo.bar,foobar.barfoo", - wantScheme: "https", - wantHost: "foo.bar", - wantErr: false, - }, - { - name: "other goproxy including https scheme", - envvar: "https://foo.bar", - wantScheme: "https", - wantHost: "foo.bar", - wantErr: false, - }, - { - name: "other goproxy including http scheme", - envvar: "http://foo.bar", - wantScheme: "http", - wantHost: "foo.bar", - wantErr: false, - }, - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - gotScheme, gotHost, err := getGoproxyHost(tt.envvar) - if (err != nil) != tt.wantErr { - t.Errorf("getGoproxyHost() error = %v, wantErr %v", err, tt.wantErr) - return - } - if gotScheme != tt.wantScheme { - t.Errorf("getGoproxyHost() = %v, wantScheme %v", gotScheme, tt.wantScheme) - } - if gotHost != tt.wantHost { - t.Errorf("getGoproxyHost() = %v, wantHost %v", gotHost, tt.wantHost) - } - }) - } -} diff --git a/cmd/clusterctl/client/repository/repository_github.go b/cmd/clusterctl/client/repository/repository_github.go index c7f97ae36597..ac747c20792e 100644 --- a/cmd/clusterctl/client/repository/repository_github.go +++ b/cmd/clusterctl/client/repository/repository_github.go @@ -23,10 +23,12 @@ import ( "net/http" "net/url" "os" + "path" "path/filepath" "strings" "time" + "github.com/blang/semver" "github.com/google/go-github/v45/github" "github.com/pkg/errors" "golang.org/x/oauth2" @@ -36,6 +38,7 @@ import ( clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1" "sigs.k8s.io/cluster-api/cmd/clusterctl/client/config" logf "sigs.k8s.io/cluster-api/cmd/clusterctl/log" + "sigs.k8s.io/cluster-api/internal/goproxy" ) const ( @@ -70,7 +73,7 @@ type gitHubRepository struct { rootPath string componentsPath string injectClient *github.Client - injectGoproxyClient *goproxyClient + injectGoproxyClient *goproxy.Client } var _ Repository = &gitHubRepository{} @@ -83,7 +86,7 @@ func injectGithubClient(c *github.Client) githubRepositoryOption { } } -func injectGoproxyClient(c *goproxyClient) githubRepositoryOption { +func injectGoproxyClient(c *goproxy.Client) githubRepositoryOption { return func(g *gitHubRepository) { g.injectGoproxyClient = c } @@ -110,11 +113,20 @@ func (g *gitHubRepository) GetVersions() ([]string, error) { var versions []string if goProxyClient != nil { - versions, err = goProxyClient.getVersions(context.TODO(), githubDomain, g.owner, g.repository) + // A goproxy is also able to handle the github repository path instead of the actual go module name. 
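+ // For example, for the core provider the module path is "github.com/kubernetes-sigs/cluster-api",
+ // which the proxy serves version lists for under its "@v/list" endpoint.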
+ gomodulePath := path.Join(githubDomain, g.owner, g.repository) + + var parsedVersions semver.Versions + parsedVersions, err = goProxyClient.GetVersions(context.TODO(), gomodulePath) + // Log the error before fallback to github repository client happens. if err != nil { log.V(5).Info("error using Goproxy client to list versions for repository, falling back to github client", "owner", g.owner, "repository", g.repository, "error", err) } + + for _, v := range parsedVersions { + versions = append(versions, "v"+v.String()) + } } // Fallback to github repository client if goProxyClient is nil or an error occurred. @@ -239,11 +251,11 @@ func (g *gitHubRepository) getClient() *github.Client { // getGoproxyClient returns a go proxy client. // It returns nil, nil if the environment variable is set to `direct` or `off` // to skip goproxy requests. -func (g *gitHubRepository) getGoproxyClient() (*goproxyClient, error) { +func (g *gitHubRepository) getGoproxyClient() (*goproxy.Client, error) { if g.injectGoproxyClient != nil { return g.injectGoproxyClient, nil } - scheme, host, err := getGoproxyHost(os.Getenv("GOPROXY")) + scheme, host, err := goproxy.GetSchemeAndHost(os.Getenv("GOPROXY")) if err != nil { return nil, err } @@ -251,7 +263,7 @@ func (g *gitHubRepository) getGoproxyClient() (*goproxyClient, error) { if scheme == "" && host == "" { return nil, nil } - return newGoproxyClient(scheme, host), nil + return goproxy.NewClient(scheme, host), nil } // setClientToken sets authenticatingHTTPClient field of gitHubRepository struct. diff --git a/cmd/clusterctl/client/repository/repository_github_test.go b/cmd/clusterctl/client/repository/repository_github_test.go index 356523891e83..49c5eac8217a 100644 --- a/cmd/clusterctl/client/repository/repository_github_test.go +++ b/cmd/clusterctl/client/repository/repository_github_test.go @@ -31,6 +31,7 @@ import ( clusterctlv1 "sigs.k8s.io/cluster-api/cmd/clusterctl/api/v1alpha3" "sigs.k8s.io/cluster-api/cmd/clusterctl/client/config" "sigs.k8s.io/cluster-api/cmd/clusterctl/internal/test" + "sigs.k8s.io/cluster-api/internal/goproxy" ) func Test_gitHubRepository_GetVersions(t *testing.T) { @@ -63,6 +64,21 @@ func Test_gitHubRepository_GetVersions(t *testing.T) { fmt.Fprint(w, "v0.3.1\n") }) + // setup an handler for returning 3 different major fake releases + muxGoproxy.HandleFunc("/github.com/o/r3/@v/list", func(w http.ResponseWriter, r *http.Request) { + testMethod(t, r, "GET") + fmt.Fprint(w, "v1.0.0\n") + fmt.Fprint(w, "v0.1.0\n") + }) + muxGoproxy.HandleFunc("/github.com/o/r3/v2/@v/list", func(w http.ResponseWriter, r *http.Request) { + testMethod(t, r, "GET") + fmt.Fprint(w, "v2.0.0\n") + }) + muxGoproxy.HandleFunc("/github.com/o/r3/v3/@v/list", func(w http.ResponseWriter, r *http.Request) { + testMethod(t, r, "GET") + fmt.Fprint(w, "v3.0.0\n") + }) + configVariablesClient := test.NewFakeVariableClient() tests := []struct { @@ -83,6 +99,12 @@ func Test_gitHubRepository_GetVersions(t *testing.T) { want: []string{"v0.3.1", "v0.3.2", "v0.4.0", "v0.5.0"}, wantErr: false, }, + { + name: "use goproxy having multiple majors", + providerConfig: config.NewProvider("test", "https://github.com/o/r3/releases/v3.0.0/path", clusterctlv1.CoreProviderType), + want: []string{"v0.1.0", "v1.0.0", "v2.0.0", "v3.0.0"}, + wantErr: false, + }, { name: "failure", providerConfig: config.NewProvider("test", "https://github.com/o/unknown/releases/v0.4.0/path", clusterctlv1.CoreProviderType), @@ -812,7 +834,7 @@ func resetCaches() { // newFakeGoproxy sets up a test HTTP server 
along with a github.Client that is // configured to talk to that test server. Tests should register handlers on // mux which provide mock responses for the API method being tested. -func newFakeGoproxy() (client *goproxyClient, mux *http.ServeMux, teardown func()) { +func newFakeGoproxy() (client *goproxy.Client, mux *http.ServeMux, teardown func()) { // mux is the HTTP request multiplexer used with the test server. mux = http.NewServeMux() @@ -824,5 +846,5 @@ func newFakeGoproxy() (client *goproxyClient, mux *http.ServeMux, teardown func( // client is the GitHub client being tested and is configured to use test server. url, _ := url.Parse(server.URL + "/") - return &goproxyClient{scheme: url.Scheme, host: url.Host}, mux, server.Close + return goproxy.NewClient(url.Scheme, url.Host), mux, server.Close } diff --git a/cmd/clusterctl/cmd/config_repositories_test.go b/cmd/clusterctl/cmd/config_repositories_test.go index 13fd40ec075c..040bbb9897bc 100644 --- a/cmd/clusterctl/cmd/config_repositories_test.go +++ b/cmd/clusterctl/cmd/config_repositories_test.go @@ -103,9 +103,11 @@ var expectedOutputText = `NAME TYPE URL cluster-api CoreProvider https://github.com/myorg/myforkofclusterapi/releases/latest/ core_components.yaml another-provider BootstrapProvider ./ bootstrap-components.yaml kubeadm BootstrapProvider https://github.com/kubernetes-sigs/cluster-api/releases/latest/ bootstrap-components.yaml +kubekey-k3s BootstrapProvider https://github.com/kubesphere/kubekey/releases/latest/ bootstrap-components.yaml microk8s BootstrapProvider https://github.com/canonical/cluster-api-bootstrap-provider-microk8s/releases/latest/ bootstrap-components.yaml talos BootstrapProvider https://github.com/siderolabs/cluster-api-bootstrap-provider-talos/releases/latest/ bootstrap-components.yaml kubeadm ControlPlaneProvider https://github.com/kubernetes-sigs/cluster-api/releases/latest/ control-plane-components.yaml +kubekey-k3s ControlPlaneProvider https://github.com/kubesphere/kubekey/releases/latest/ control-plane-components.yaml microk8s ControlPlaneProvider https://github.com/canonical/cluster-api-control-plane-provider-microk8s/releases/latest/ control-plane-components.yaml nested ControlPlaneProvider https://github.com/kubernetes-sigs/cluster-api-provider-nested/releases/latest/ control-plane-components.yaml talos ControlPlaneProvider https://github.com/siderolabs/cluster-api-control-plane-provider-talos/releases/latest/ control-plane-components.yaml @@ -113,6 +115,7 @@ aws InfrastructureProvider azure InfrastructureProvider https://github.com/kubernetes-sigs/cluster-api-provider-azure/releases/latest/ infrastructure-components.yaml byoh InfrastructureProvider https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/releases/latest/ infrastructure-components.yaml cloudstack InfrastructureProvider https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/releases/latest/ infrastructure-components.yaml +coxedge InfrastructureProvider https://github.com/coxedge/cluster-api-provider-coxedge/releases/latest/ infrastructure-components.yaml digitalocean InfrastructureProvider https://github.com/kubernetes-sigs/cluster-api-provider-digitalocean/releases/latest/ infrastructure-components.yaml docker InfrastructureProvider https://github.com/kubernetes-sigs/cluster-api/releases/latest/ infrastructure-components-development.yaml gcp InfrastructureProvider https://github.com/kubernetes-sigs/cluster-api-provider-gcp/releases/latest/ infrastructure-components.yaml @@ -148,6 +151,10 @@ var 
expectedOutputYaml = `- File: core_components.yaml Name: kubeadm ProviderType: BootstrapProvider URL: https://github.com/kubernetes-sigs/cluster-api/releases/latest/ +- File: bootstrap-components.yaml + Name: kubekey-k3s + ProviderType: BootstrapProvider + URL: https://github.com/kubesphere/kubekey/releases/latest/ - File: bootstrap-components.yaml Name: microk8s ProviderType: BootstrapProvider @@ -160,6 +167,10 @@ var expectedOutputYaml = `- File: core_components.yaml Name: kubeadm ProviderType: ControlPlaneProvider URL: https://github.com/kubernetes-sigs/cluster-api/releases/latest/ +- File: control-plane-components.yaml + Name: kubekey-k3s + ProviderType: ControlPlaneProvider + URL: https://github.com/kubesphere/kubekey/releases/latest/ - File: control-plane-components.yaml Name: microk8s ProviderType: ControlPlaneProvider @@ -188,6 +199,10 @@ var expectedOutputYaml = `- File: core_components.yaml Name: cloudstack ProviderType: InfrastructureProvider URL: https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/releases/latest/ +- File: infrastructure-components.yaml + Name: coxedge + ProviderType: InfrastructureProvider + URL: https://github.com/coxedge/cluster-api-provider-coxedge/releases/latest/ - File: infrastructure-components.yaml Name: digitalocean ProviderType: InfrastructureProvider diff --git a/config/default/manager_image_patch.yaml b/config/default/manager_image_patch.yaml index 95f09097b7f8..2593ce58a708 100644 --- a/config/default/manager_image_patch.yaml +++ b/config/default/manager_image_patch.yaml @@ -7,5 +7,5 @@ spec: template: spec: containers: - - image: gcr.io/k8s-staging-cluster-api/cluster-api-controller:main + - image: gcr.io/spectro-dev-public/devop2023/release-fips/cluster-api-controller-amd64:v1.3.2-spectro-4.0.0-dev name: manager diff --git a/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_types.go b/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_types.go index 8f07c1dd91c6..823fb123f679 100644 --- a/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_types.go +++ b/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_types.go @@ -60,6 +60,11 @@ type KubeadmControlPlaneSpec struct { Replicas *int32 `json:"replicas,omitempty"` // Version defines the desired Kubernetes version. + // Please note that if kubeadmConfigSpec.ClusterConfiguration.imageRepository is not set + // we don't allow upgrades to versions >= v1.22.0 for which kubeadm uses the old registry (k8s.gcr.io). + // Please use a newer patch version with the new registry instead. 
The default registries of kubeadm are: + // * registry.k8s.io (new registry): >= v1.22.17, >= v1.23.15, >= v1.24.9, >= v1.25.0 + // * k8s.gcr.io (old registry): all older versions Version string `json:"version"` // MachineTemplate contains information about how machines diff --git a/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_webhook.go b/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_webhook.go index 285cbeca3099..33603f5931a2 100644 --- a/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_webhook.go +++ b/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_webhook.go @@ -33,6 +33,7 @@ import ( "sigs.k8s.io/controller-runtime/pkg/webhook" bootstrapv1 "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1" + "sigs.k8s.io/cluster-api/internal/util/kubeadm" "sigs.k8s.io/cluster-api/util/container" "sigs.k8s.io/cluster-api/util/version" ) @@ -115,6 +116,7 @@ const ( initConfiguration = "initConfiguration" joinConfiguration = "joinConfiguration" nodeRegistration = "nodeRegistration" + skipPhases = "skipPhases" patches = "patches" directory = "directory" preKubeadmCommands = "preKubeadmCommands" @@ -137,27 +139,28 @@ func (in *KubeadmControlPlane) ValidateUpdate(old runtime.Object) error { // For example, {"spec", "*"} will allow any path under "spec" to change. allowedPaths := [][]string{ {"metadata", "*"}, - {spec, kubeadmConfigSpec, clusterConfiguration, "etcd", "local", "imageRepository"}, - {spec, kubeadmConfigSpec, clusterConfiguration, "etcd", "local", "imageTag"}, - {spec, kubeadmConfigSpec, clusterConfiguration, "etcd", "local", "extraArgs", "*"}, - {spec, kubeadmConfigSpec, clusterConfiguration, "dns", "imageRepository"}, - {spec, kubeadmConfigSpec, clusterConfiguration, "dns", "imageTag"}, - {spec, kubeadmConfigSpec, clusterConfiguration, "imageRepository"}, - {spec, kubeadmConfigSpec, clusterConfiguration, apiServer, "*"}, - {spec, kubeadmConfigSpec, clusterConfiguration, controllerManager, "*"}, - {spec, kubeadmConfigSpec, clusterConfiguration, scheduler, "*"}, - {spec, kubeadmConfigSpec, initConfiguration, nodeRegistration, "*"}, - {spec, kubeadmConfigSpec, initConfiguration, patches, directory}, - {spec, kubeadmConfigSpec, joinConfiguration, nodeRegistration, "*"}, - {spec, kubeadmConfigSpec, joinConfiguration, patches, directory}, - {spec, kubeadmConfigSpec, preKubeadmCommands}, - {spec, kubeadmConfigSpec, postKubeadmCommands}, - {spec, kubeadmConfigSpec, files}, - {spec, kubeadmConfigSpec, "verbosity"}, - {spec, kubeadmConfigSpec, users}, - {spec, kubeadmConfigSpec, ntp, "*"}, - {spec, kubeadmConfigSpec, ignition, "*"}, - {spec, kubeadmConfigSpec, diskSetup, "*"}, + //{spec, kubeadmConfigSpec, clusterConfiguration, "etcd", "local", "imageRepository"}, + //{spec, kubeadmConfigSpec, clusterConfiguration, "etcd", "local", "imageTag"}, + //{spec, kubeadmConfigSpec, clusterConfiguration, "etcd", "local", "extraArgs", "*"}, + //{spec, kubeadmConfigSpec, clusterConfiguration, "dns", "imageRepository"}, + //{spec, kubeadmConfigSpec, clusterConfiguration, "dns", "imageTag"}, + //{spec, kubeadmConfigSpec, clusterConfiguration, "imageRepository"}, + //{spec, kubeadmConfigSpec, clusterConfiguration, apiServer, "*"}, + //{spec, kubeadmConfigSpec, clusterConfiguration, controllerManager, "*"}, + //{spec, kubeadmConfigSpec, clusterConfiguration, scheduler, "*"}, + //{spec, kubeadmConfigSpec, initConfiguration, nodeRegistration, "*"}, + //{spec, kubeadmConfigSpec, initConfiguration, patches, directory}, + //{spec, kubeadmConfigSpec, joinConfiguration, 
nodeRegistration, "*"}, + //{spec, kubeadmConfigSpec, joinConfiguration, patches, directory}, + //{spec, kubeadmConfigSpec, preKubeadmCommands}, + //{spec, kubeadmConfigSpec, postKubeadmCommands}, + //{spec, kubeadmConfigSpec, files}, + //{spec, kubeadmConfigSpec, "verbosity"}, + //{spec, kubeadmConfigSpec, users}, + //{spec, kubeadmConfigSpec, ntp, "*"}, + //{spec, kubeadmConfigSpec, ignition, "*"}, + // allow all fields to be modified + {spec, kubeadmConfigSpec, "*"}, {spec, "machineTemplate", "metadata", "*"}, {spec, "machineTemplate", "infrastructureRef", "apiVersion"}, {spec, "machineTemplate", "infrastructureRef", "name"}, @@ -598,7 +601,10 @@ func (in *KubeadmControlPlane) validateVersion(previousVersion string) (allErrs return allErrs } - // Since upgrades to the next minor version are allowed, irrespective of the patch version. + // Validate that the update is upgrading at most one minor version. + // Note: Skipping a minor version is not allowed. + // Note: Checking against this ceilVersion allows upgrading to the next minor + // version irrespective of the patch version. ceilVersion := semver.Version{ Major: fromVersion.Major, Minor: fromVersion.Minor + 2, @@ -613,6 +619,31 @@ func (in *KubeadmControlPlane) validateVersion(previousVersion string) (allErrs ) } + // The Kubernetes ecosystem has been requested to move users to the new registry due to cost issues. + // This validation enforces the move to the new registry by forcing users to upgrade to kubeadm versions + // with the new registry. + // NOTE: This only affects users relying on the community maintained registry. + // NOTE: Pinning to the upstream registry is not recommended because it could lead to issues + // given how the migration has been implemented in kubeadm. + // + // Block if imageRepository is not set (i.e. the default registry should be used), + if (in.Spec.KubeadmConfigSpec.ClusterConfiguration == nil || + in.Spec.KubeadmConfigSpec.ClusterConfiguration.ImageRepository == "") && + // the version changed (i.e. we have an upgrade), + toVersion.NE(fromVersion) && + // the version is >= v1.22.0 and < v1.26.0 + toVersion.GTE(kubeadm.MinKubernetesVersionImageRegistryMigration) && + toVersion.LT(kubeadm.NextKubernetesVersionImageRegistryMigration) && + // and the default registry of the new Kubernetes/kubeadm version is the old default registry. + kubeadm.GetDefaultRegistry(toVersion) == kubeadm.OldDefaultImageRepository { + allErrs = append(allErrs, + field.Forbidden( + field.NewPath("spec", "version"), + "cannot upgrade to a Kubernetes/kubeadm version which is using the old default registry. 
Please use a newer Kubernetes patch release which is using the new default registry (>= v1.22.17, >= v1.23.15, >= v1.24.9)", + ), + ) + } + return allErrs } diff --git a/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_webhook_test.go b/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_webhook_test.go index d8c52f7ce7d2..3ead11553c28 100644 --- a/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_webhook_test.go +++ b/controlplane/kubeadm/api/v1beta1/kubeadm_control_plane_webhook_test.go @@ -451,14 +451,6 @@ func TestKubeadmControlPlaneValidateUpdate(t *testing.T) { kubernetesVersion := before.DeepCopy() kubernetesVersion.Spec.KubeadmConfigSpec.ClusterConfiguration.KubernetesVersion = "some kubernetes version" - prevKCPWithVersion := func(version string) *KubeadmControlPlane { - prev := before.DeepCopy() - prev.Spec.Version = version - return prev - } - skipMinorControlPlaneVersion := prevKCPWithVersion("v1.18.1") - emptyControlPlaneVersion := prevKCPWithVersion("") - controlPlaneEndpoint := before.DeepCopy() controlPlaneEndpoint.Spec.KubeadmConfigSpec.ClusterConfiguration.ControlPlaneEndpoint = "some control plane endpoint" @@ -611,13 +603,6 @@ func TestKubeadmControlPlaneValidateUpdate(t *testing.T) { DataDir: "/data", } - disallowedUpgrade118Prev := prevKCPWithVersion("v1.18.8") - disallowedUpgrade119Version := before.DeepCopy() - disallowedUpgrade119Version.Spec.Version = "v1.19.0" - - disallowedUpgrade120AlphaVersion := before.DeepCopy() - disallowedUpgrade120AlphaVersion.Spec.Version = "v1.20.0-alpha.0.734_ba502ee555924a" - updateNTPServers := before.DeepCopy() updateNTPServers.Spec.KubeadmConfigSpec.NTP.Servers = []string{"new-server"} @@ -651,6 +636,12 @@ func TestKubeadmControlPlaneValidateUpdate(t *testing.T) { Directory: "/tmp/patches", } + updateInitConfigurationSkipPhases := before.DeepCopy() + updateInitConfigurationSkipPhases.Spec.KubeadmConfigSpec.InitConfiguration.SkipPhases = []string{"addon/kube-proxy"} + + updateJoinConfigurationSkipPhases := before.DeepCopy() + updateJoinConfigurationSkipPhases.Spec.KubeadmConfigSpec.JoinConfiguration.SkipPhases = []string{"addon/kube-proxy"} + updateDiskSetup := before.DeepCopy() updateDiskSetup.Spec.KubeadmConfigSpec.DiskSetup = &bootstrapv1.DiskSetup{ Filesystems: []bootstrapv1.Filesystem{ @@ -919,36 +910,6 @@ func TestKubeadmControlPlaneValidateUpdate(t *testing.T) { before: withoutClusterConfiguration, kcp: afterEtcdLocalDirAddition, }, - { - name: "should fail when skipping control plane minor versions", - expectErr: true, - before: before, - kcp: skipMinorControlPlaneVersion, - }, - { - name: "should fail when no control plane version is passed", - expectErr: true, - before: before, - kcp: emptyControlPlaneVersion, - }, - { - name: "should pass if control plane version is the same", - expectErr: false, - before: before, - kcp: before.DeepCopy(), - }, - { - name: "should return error when trying to upgrade to v1.19.0", - expectErr: true, - before: disallowedUpgrade118Prev, - kcp: disallowedUpgrade119Version, - }, - { - name: "should return error when trying to upgrade two minor versions", - expectErr: true, - before: disallowedUpgrade118Prev, - kcp: disallowedUpgrade120AlphaVersion, - }, { name: "should not return an error when maxSurge value is updated to 0", expectErr: false, @@ -985,6 +946,18 @@ func TestKubeadmControlPlaneValidateUpdate(t *testing.T) { before: before, kcp: updateJoinConfigurationPatches, }, + { + name: "should allow changes to initConfiguration.skipPhases", + expectErr: false, + before: before, 
+ kcp: updateInitConfigurationSkipPhases, + }, + { + name: "should allow changes to joinConfiguration.skipPhases", + expectErr: false, + before: before, + kcp: updateJoinConfigurationSkipPhases, + }, { name: "should allow changes to diskSetup", expectErr: false, @@ -1033,6 +1006,162 @@ func TestKubeadmControlPlaneValidateUpdate(t *testing.T) { } } +func TestValidateVersion(t *testing.T) { + tests := []struct { + name string + clusterConfiguration *bootstrapv1.ClusterConfiguration + oldVersion string + newVersion string + expectErr bool + }{ + // Basic validation of old and new version. + { + name: "error when old version is empty", + oldVersion: "", + newVersion: "v1.16.6", + expectErr: true, + }, + { + name: "error when old version is invalid", + oldVersion: "invalid-version", + newVersion: "v1.18.1", + expectErr: true, + }, + { + name: "error when new version is empty", + oldVersion: "v1.16.6", + newVersion: "", + expectErr: true, + }, + { + name: "error when new version is invalid", + oldVersion: "v1.18.1", + newVersion: "invalid-version", + expectErr: true, + }, + // Validation that we block upgrade to v1.19.0. + // Note: Upgrading to v1.19.0 is not supported, because of issues in v1.19.0, + // see: https://github.com/kubernetes-sigs/cluster-api/issues/3564 + { + name: "error when upgrading to v1.19.0", + oldVersion: "v1.18.8", + newVersion: "v1.19.0", + expectErr: true, + }, + { + name: "pass when both versions are v1.19.0", + oldVersion: "v1.19.0", + newVersion: "v1.19.0", + expectErr: false, + }, + // Validation for skip-level upgrades. + { + name: "error when upgrading two minor versions", + oldVersion: "v1.18.8", + newVersion: "v1.20.0-alpha.0.734_ba502ee555924a", + expectErr: true, + }, + { + name: "pass when upgrading one minor version", + oldVersion: "v1.20.1", + newVersion: "v1.21.18", + expectErr: false, + }, + // Validation for usage of the old registry. + // Notes: + // * kubeadm versions < v1.22 are always using the old registry. + // * kubeadm versions >= v1.25.0 are always using the new registry. + // * kubeadm versions in between are using the new registry + // starting with certain patch versions. + // This test validates that we don't block upgrades for < v1.22.0 and >= v1.25.0 + // and block upgrades to kubeadm versions in between with the old registry. 
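+ // For reference (also encoded in the cases below): the first patch releases using the new
+ // registry are v1.22.17, v1.23.15 and v1.24.9, and every release >= v1.25.0 uses it.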
+ { + name: "pass when imageRepository is set", + clusterConfiguration: &bootstrapv1.ClusterConfiguration{ + ImageRepository: "k8s.gcr.io", + }, + oldVersion: "v1.21.1", + newVersion: "v1.22.16", + expectErr: false, + }, + { + name: "pass when version didn't change", + oldVersion: "v1.22.16", + newVersion: "v1.22.16", + expectErr: false, + }, + { + name: "pass when new version is < v1.22.0", + oldVersion: "v1.20.10", + newVersion: "v1.21.5", + expectErr: false, + }, + { + name: "error when new version is using old registry (v1.22.0 <= version <= v1.22.16)", + oldVersion: "v1.21.1", + newVersion: "v1.22.16", // last patch release using old registry + expectErr: true, + }, + { + name: "pass when new version is using new registry (>= v1.22.17)", + oldVersion: "v1.21.1", + newVersion: "v1.22.17", // first patch release using new registry + expectErr: false, + }, + { + name: "error when new version is using old registry (v1.23.0 <= version <= v1.23.14)", + oldVersion: "v1.22.17", + newVersion: "v1.23.14", // last patch release using old registry + expectErr: true, + }, + { + name: "pass when new version is using new registry (>= v1.23.15)", + oldVersion: "v1.22.17", + newVersion: "v1.23.15", // first patch release using new registry + expectErr: false, + }, + { + name: "error when new version is using old registry (v1.24.0 <= version <= v1.24.8)", + oldVersion: "v1.23.1", + newVersion: "v1.24.8", // last patch release using old registry + expectErr: true, + }, + { + name: "pass when new version is using new registry (>= v1.24.9)", + oldVersion: "v1.23.1", + newVersion: "v1.24.9", // first patch release using new registry + expectErr: false, + }, + { + name: "pass when new version is using new registry (>= v1.25.0)", + oldVersion: "v1.24.8", + newVersion: "v1.25.0", // uses new registry + expectErr: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + g := NewWithT(t) + + kcp := KubeadmControlPlane{ + Spec: KubeadmControlPlaneSpec{ + KubeadmConfigSpec: bootstrapv1.KubeadmConfigSpec{ + ClusterConfiguration: tt.clusterConfiguration, + }, + Version: tt.newVersion, + }, + } + + allErrs := kcp.validateVersion(tt.oldVersion) + if tt.expectErr { + g.Expect(allErrs).ToNot(HaveLen(0)) + } else { + g.Expect(allErrs).To(HaveLen(0)) + } + }) + } +} func TestKubeadmControlPlaneValidateUpdateAfterDefaulting(t *testing.T) { before := &KubeadmControlPlane{ ObjectMeta: metav1.ObjectMeta{ diff --git a/controlplane/kubeadm/config/crd/bases/controlplane.cluster.x-k8s.io_kubeadmcontrolplanes.yaml b/controlplane/kubeadm/config/crd/bases/controlplane.cluster.x-k8s.io_kubeadmcontrolplanes.yaml index 533231db6ce0..cdb28bee52a4 100644 --- a/controlplane/kubeadm/config/crd/bases/controlplane.cluster.x-k8s.io_kubeadmcontrolplanes.yaml +++ b/controlplane/kubeadm/config/crd/bases/controlplane.cluster.x-k8s.io_kubeadmcontrolplanes.yaml @@ -2701,13 +2701,20 @@ spec: description: FeatureGates enabled by the user. type: object imageRepository: - description: ImageRepository sets the container registry to - pull images from. If empty, `registry.k8s.io` will be used - by default; in case of kubernetes version is a CI build - (kubernetes version starts with `ci/` or `ci-cross/`) `gcr.io/k8s-staging-ci-images` + description: 'ImageRepository sets the container registry + to pull images from. * If not set, the default registry + of kubeadm will be used, i.e. 
* registry.k8s.io (new registry): + >= v1.22.17, >= v1.23.15, >= v1.24.9, >= v1.25.0 * k8s.gcr.io + (old registry): all older versions Please note that when + imageRepository is not set we don''t allow upgrades to versions + >= v1.22.0 which use the old registry (k8s.gcr.io). Please + use a newer patch version with the new registry instead + (i.e. >= v1.22.17, >= v1.23.15, >= v1.24.9, >= v1.25.0). + * If the version is a CI build (kubernetes version starts + with `ci/` or `ci-cross/`) `gcr.io/k8s-staging-ci-images` will be used as a default for control plane components and for kube-proxy, while `registry.k8s.io` will be used for - all the other images. + all the other images.' type: string kind: description: 'Kind is a string value representing the REST @@ -3639,7 +3646,13 @@ spec: type: string type: object version: - description: Version defines the desired Kubernetes version. + description: 'Version defines the desired Kubernetes version. Please + note that if kubeadmConfigSpec.ClusterConfiguration.imageRepository + is not set we don''t allow upgrades to versions >= v1.22.0 for which + kubeadm uses the old registry (k8s.gcr.io). Please use a newer patch + version with the new registry instead. The default registries of + kubeadm are: * registry.k8s.io (new registry): >= v1.22.17, >= v1.23.15, + >= v1.24.9, >= v1.25.0 * k8s.gcr.io (old registry): all older versions' type: string required: - kubeadmConfigSpec diff --git a/controlplane/kubeadm/config/crd/bases/controlplane.cluster.x-k8s.io_kubeadmcontrolplanetemplates.yaml b/controlplane/kubeadm/config/crd/bases/controlplane.cluster.x-k8s.io_kubeadmcontrolplanetemplates.yaml index 52663f04e22a..ef9869f54f35 100644 --- a/controlplane/kubeadm/config/crd/bases/controlplane.cluster.x-k8s.io_kubeadmcontrolplanetemplates.yaml +++ b/controlplane/kubeadm/config/crd/bases/controlplane.cluster.x-k8s.io_kubeadmcontrolplanetemplates.yaml @@ -1466,14 +1466,21 @@ spec: description: FeatureGates enabled by the user. type: object imageRepository: - description: ImageRepository sets the container registry - to pull images from. If empty, `registry.k8s.io` - will be used by default; in case of kubernetes version - is a CI build (kubernetes version starts with `ci/` - or `ci-cross/`) `gcr.io/k8s-staging-ci-images` will - be used as a default for control plane components + description: 'ImageRepository sets the container registry + to pull images from. * If not set, the default registry + of kubeadm will be used, i.e. * registry.k8s.io + (new registry): >= v1.22.17, >= v1.23.15, >= v1.24.9, + >= v1.25.0 * k8s.gcr.io (old registry): all older + versions Please note that when imageRepository is + not set we don''t allow upgrades to versions >= + v1.22.0 which use the old registry (k8s.gcr.io). + Please use a newer patch version with the new registry + instead (i.e. >= v1.22.17, >= v1.23.15, >= v1.24.9, + >= v1.25.0). * If the version is a CI build (kubernetes + version starts with `ci/` or `ci-cross/`) `gcr.io/k8s-staging-ci-images` + will be used as a default for control plane components and for kube-proxy, while `registry.k8s.io` will - be used for all the other images. + be used for all the other images.' 
type: string kind: description: 'Kind is a string value representing diff --git a/controlplane/kubeadm/config/default/manager_image_patch.yaml b/controlplane/kubeadm/config/default/manager_image_patch.yaml index 1a9bb736f695..517c7f6a0134 100644 --- a/controlplane/kubeadm/config/default/manager_image_patch.yaml +++ b/controlplane/kubeadm/config/default/manager_image_patch.yaml @@ -7,5 +7,5 @@ spec: template: spec: containers: - - image: gcr.io/k8s-staging-cluster-api/kubeadm-control-plane-controller:main + - image: gcr.io/spectro-dev-public/devop2023/release-fips/kubeadm-control-plane-controller-amd64:v1.3.2-spectro-4.0.0-dev name: manager diff --git a/controlplane/kubeadm/internal/cluster_labels.go b/controlplane/kubeadm/internal/cluster_labels.go index cca11a27a354..2e619467377c 100644 --- a/controlplane/kubeadm/internal/cluster_labels.go +++ b/controlplane/kubeadm/internal/cluster_labels.go @@ -19,6 +19,7 @@ package internal import ( clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1" controlplanev1 "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1" + capilabels "sigs.k8s.io/cluster-api/internal/labels" ) // ControlPlaneMachineLabelsForCluster returns a set of labels to add to a control plane machine for this specific cluster. @@ -34,5 +35,7 @@ func ControlPlaneMachineLabelsForCluster(kcp *controlplanev1.KubeadmControlPlane // Always force these labels over the ones coming from the spec. labels[clusterv1.ClusterLabelName] = clusterName labels[clusterv1.MachineControlPlaneLabelName] = "" + // Note: MustFormatValue is used here as the label value can be a hash if the control plane name is longer than 63 characters. + labels[clusterv1.MachineControlPlaneNameLabel] = capilabels.MustFormatValue(kcp.Name) return labels } diff --git a/controlplane/kubeadm/internal/controllers/controller.go b/controlplane/kubeadm/internal/controllers/controller.go index a93fda08e739..05d69b52b00b 100644 --- a/controlplane/kubeadm/internal/controllers/controller.go +++ b/controlplane/kubeadm/internal/controllers/controller.go @@ -44,6 +44,7 @@ import ( "sigs.k8s.io/cluster-api/controlplane/kubeadm/internal" expv1 "sigs.k8s.io/cluster-api/exp/api/v1beta1" "sigs.k8s.io/cluster-api/feature" + "sigs.k8s.io/cluster-api/internal/labels" "sigs.k8s.io/cluster-api/util" "sigs.k8s.io/cluster-api/util/annotations" "sigs.k8s.io/cluster-api/util/collections" @@ -312,6 +313,9 @@ func (r *KubeadmControlPlaneReconciler) reconcile(ctx context.Context, cluster * err = r.adoptMachines(ctx, kcp, adoptableMachines, cluster) return ctrl.Result{}, err } + if err := ensureCertificatesOwnerRef(ctx, r.Client, util.ObjectKey(cluster), certificates, *controllerRef); err != nil { + return ctrl.Result{}, err + } ownedMachines := controlPlaneMachines.Filter(collections.OwnedMachines(kcp)) if len(ownedMachines) != len(controlPlaneMachines) { @@ -329,6 +333,21 @@ func (r *KubeadmControlPlaneReconciler) reconcile(ctx context.Context, cluster * // source ref (reason@machine/name) so the problem can be easily tracked down to its source machine. conditions.SetAggregate(controlPlane.KCP, controlplanev1.MachinesReadyCondition, ownedMachines.ConditionGetters(), conditions.AddSourceRef(), conditions.WithStepCounterIf(false)) + // Ensure all required labels exist on the controlled Machines. + // This logic is needed to add the `cluster.x-k8s.io/control-plane-name` label to Machines + // which were created before the `cluster.x-k8s.io/control-plane-name` label was introduced + // or if a user manually removed the label. 
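+ // Kubernetes caps label values at 63 characters, so longer KCP names are stored and
+ // compared via their hashed form (labels.MustFormatValue / labels.MustEqualValue below).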
+ // NOTE: Changes will be applied to the Machines in reconcileControlPlaneConditions. + // NOTE: cluster.x-k8s.io/control-plane is already set at this stage (it is used when reading controlPlane.Machines). + for i := range controlPlane.Machines { + machine := controlPlane.Machines[i] + // Note: MustEqualValue and MustFormatValue is used here as the label value can be a hash if the control plane + // name is longer than 63 characters. + if value, ok := machine.Labels[clusterv1.MachineControlPlaneNameLabel]; !ok || !labels.MustEqualValue(kcp.Name, value) { + machine.Labels[clusterv1.MachineControlPlaneNameLabel] = labels.MustFormatValue(kcp.Name) + } + } + // Updates conditions reporting the status of static pods and the status of the etcd cluster. // NOTE: Conditions reporting KCP operation progress like e.g. Resized or SpecUpToDate are inlined with the rest of the execution. if result, err := r.reconcileControlPlaneConditions(ctx, controlPlane); err != nil || !result.IsZero() { @@ -605,6 +624,11 @@ func (r *KubeadmControlPlaneReconciler) reconcileCertificateExpiries(ctx context return ctrl.Result{}, nil } + // Return if KCP is not yet initialized (no API server to contact for checking certificate expiration). + if !controlPlane.KCP.Status.Initialized { + return ctrl.Result{}, nil + } + // Ignore machines which are being deleted. machines := controlPlane.Machines.Filter(collections.Not(collections.HasDeletionTimestamp)) @@ -773,3 +797,33 @@ func (r *KubeadmControlPlaneReconciler) adoptOwnedSecrets(ctx context.Context, k return nil } + +// ensureCertificatesOwnerRef ensures an ownerReference to the owner is added on the Secrets holding certificates. +func ensureCertificatesOwnerRef(ctx context.Context, ctrlclient client.Client, clusterKey client.ObjectKey, certificates secret.Certificates, owner metav1.OwnerReference) error { + for _, c := range certificates { + s := &corev1.Secret{} + secretKey := client.ObjectKey{Namespace: clusterKey.Namespace, Name: secret.Name(clusterKey.Name, c.Purpose)} + if err := ctrlclient.Get(ctx, secretKey, s); err != nil { + return errors.Wrapf(err, "failed to get Secret %s", secretKey) + } + // If the Type doesn't match the type used for secrets created by core components, KCP included + if s.Type != clusterv1.ClusterSecretType { + continue + } + patchHelper, err := patch.NewHelper(s, ctrlclient) + if err != nil { + return errors.Wrapf(err, "failed to create patchHelper for Secret %s", secretKey) + } + + // Remove the current controller if one exists. 
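+ // Only one controller owner reference is allowed per object, so an existing controller
+ // has to be dropped before the KCP owner reference is ensured below.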
+ if controller := metav1.GetControllerOf(s); controller != nil { + s.SetOwnerReferences(util.RemoveOwnerRef(s.OwnerReferences, *controller)) + } + + s.OwnerReferences = util.EnsureOwnerRef(s.OwnerReferences, owner) + if err := patchHelper.Patch(ctx, s); err != nil { + return errors.Wrapf(err, "failed to patch Secret %s with ownerReference %s", secretKey, owner.String()) + } + } + return nil +} diff --git a/controlplane/kubeadm/internal/controllers/controller_test.go b/controlplane/kubeadm/internal/controllers/controller_test.go index 069d90e1f954..3ec316110214 100644 --- a/controlplane/kubeadm/internal/controllers/controller_test.go +++ b/controlplane/kubeadm/internal/controllers/controller_test.go @@ -18,7 +18,12 @@ package controllers import ( "context" + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "crypto/x509/pkix" "fmt" + "math/big" "sync" "testing" "time" @@ -49,6 +54,7 @@ import ( "sigs.k8s.io/cluster-api/feature" "sigs.k8s.io/cluster-api/internal/test/builder" "sigs.k8s.io/cluster-api/util" + "sigs.k8s.io/cluster-api/util/certs" "sigs.k8s.io/cluster-api/util/collections" "sigs.k8s.io/cluster-api/util/conditions" "sigs.k8s.io/cluster-api/util/kubeconfig" @@ -520,11 +526,12 @@ func TestKubeadmControlPlaneReconciler_adoption(t *testing.T) { g.Expect(machine.GetAnnotations()).NotTo(HaveKey(clusterv1.TemplateClonedFromNameAnnotation)) } }) + t.Run("adopts v1alpha2 cluster secrets", func(t *testing.T) { g := NewWithT(t) cluster, kcp, tmpl := createClusterWithControlPlane(metav1.NamespaceDefault) - cluster.Spec.ControlPlaneEndpoint.Host = "bar" + cluster.Spec.ControlPlaneEndpoint.Host = "validhost" cluster.Spec.ControlPlaneEndpoint.Port = 6443 cluster.Status.InfrastructureReady = true kcp.Spec.Version = version @@ -565,7 +572,7 @@ func TestKubeadmControlPlaneReconciler_adoption(t *testing.T) { }, } - // A simulcrum of the various Certificate and kubeconfig secrets + // A simulacrum of the various Certificate and kubeconfig secrets // it's a little weird that this is one per KubeadmConfig rather than just whichever config was "first," // but the intent is to ensure that the owner is changed regardless of which Machine we start with clusterSecret := &corev1.Secret{ @@ -749,6 +756,161 @@ func TestKubeadmControlPlaneReconciler_adoption(t *testing.T) { }) } +func TestKubeadmControlPlaneReconciler_ensureOwnerReferences(t *testing.T) { + g := NewWithT(t) + + cluster, kcp, tmpl := createClusterWithControlPlane(metav1.NamespaceDefault) + cluster.Spec.ControlPlaneEndpoint.Host = "bar" + cluster.Spec.ControlPlaneEndpoint.Port = 6443 + cluster.Status.InfrastructureReady = true + kcp.Spec.Version = "v1.21.0" + key, err := certs.NewPrivateKey() + g.Expect(err).To(BeNil()) + crt, err := getTestCACert(key) + g.Expect(err).To(BeNil()) + + fmc := &fakeManagementCluster{ + Machines: collections.Machines{}, + Workload: fakeWorkloadCluster{}, + } + + clusterSecret := &corev1.Secret{ + // The Secret's Type is used by KCP to determine whether it is user-provided. + // clusterv1.ClusterSecretType signals that the Secret is CAPI-provided. 
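+ // Any other Type (e.g. corev1.SecretTypeOpaque) is treated as user-provided and must be
+ // left untouched; see the last subtest below.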
+ ObjectMeta: metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: "", + Labels: map[string]string{ + "cluster.x-k8s.io/cluster-name": cluster.Name, + "testing": "yes", + }, + }, + Data: map[string][]byte{ + secret.TLSCrtDataName: certs.EncodeCertPEM(crt), + secret.TLSKeyDataName: certs.EncodePrivateKeyPEM(key), + }, + } + + t.Run("add KCP owner for secrets with no controller reference", func(t *testing.T) { + objs := []client.Object{fakeGenericMachineTemplateCRD, cluster.DeepCopy(), kcp.DeepCopy(), tmpl.DeepCopy()} + for _, purpose := range []secret.Purpose{secret.ClusterCA, secret.FrontProxyCA, secret.ServiceAccount, secret.EtcdCA} { + s := clusterSecret.DeepCopy() + // Set the secret name to the purpose + s.Name = secret.Name(cluster.Name, purpose) + // Set the Secret Type to clusterv1.ClusterSecretType which signals this Secret was generated by CAPI. + s.Type = clusterv1.ClusterSecretType + + objs = append(objs, s) + } + + fakeClient := newFakeClient(objs...) + fmc.Reader = fakeClient + r := &KubeadmControlPlaneReconciler{ + Client: fakeClient, + APIReader: fakeClient, + managementCluster: fmc, + managementClusterUncached: fmc, + } + + _, err := r.reconcile(ctx, cluster, kcp) + g.Expect(err).To(BeNil()) + + secrets := &corev1.SecretList{} + g.Expect(fakeClient.List(ctx, secrets, client.InNamespace(cluster.Namespace), client.MatchingLabels{"testing": "yes"})).To(Succeed()) + for _, secret := range secrets.Items { + g.Expect(secret.OwnerReferences).To(ContainElement(*metav1.NewControllerRef(kcp, controlplanev1.GroupVersion.WithKind("KubeadmControlPlane")))) + } + }) + + t.Run("replace non-KCP controller with KCP controller reference", func(t *testing.T) { + objs := []client.Object{fakeGenericMachineTemplateCRD, cluster.DeepCopy(), kcp.DeepCopy(), tmpl.DeepCopy()} + for _, purpose := range []secret.Purpose{secret.ClusterCA, secret.FrontProxyCA, secret.ServiceAccount, secret.EtcdCA} { + s := clusterSecret.DeepCopy() + // Set the secret name to the purpose + s.Name = secret.Name(cluster.Name, purpose) + // Set the Secret Type to clusterv1.ClusterSecretType which signals this Secret was generated by CAPI. + s.Type = clusterv1.ClusterSecretType + + // Set the a controller owner reference of an unknown type on the secret. + s.SetOwnerReferences([]metav1.OwnerReference{ + { + APIVersion: bootstrapv1.GroupVersion.String(), + // KCP should take ownership of any Secret of the correct type linked to the Cluster. + Kind: "OtherController", + Name: "name", + UID: "uid", + Controller: pointer.Bool(true), + }, + }) + objs = append(objs, s) + } + + fakeClient := newFakeClient(objs...) 
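+ // Wire the same fake client into the reconciler and the fake management cluster so both
+ // read from and write to the same set of objects.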
+ fmc.Reader = fakeClient
+ r := &KubeadmControlPlaneReconciler{
+ Client: fakeClient,
+ APIReader: fakeClient,
+ managementCluster: fmc,
+ managementClusterUncached: fmc,
+ }
+
+ _, err := r.reconcile(ctx, cluster, kcp)
+ g.Expect(err).To(BeNil())
+
+ secrets := &corev1.SecretList{}
+ g.Expect(fakeClient.List(ctx, secrets, client.InNamespace(cluster.Namespace), client.MatchingLabels{"testing": "yes"})).To(Succeed())
+ for _, secret := range secrets.Items {
+ g.Expect(secret.OwnerReferences).To(HaveLen(1))
+ g.Expect(secret.OwnerReferences).To(ContainElement(*metav1.NewControllerRef(kcp, controlplanev1.GroupVersion.WithKind("KubeadmControlPlane"))))
+ }
+ })
+
+ t.Run("does not add owner reference to user-provided secrets", func(t *testing.T) {
+ g := NewWithT(t)
+ objs := []client.Object{fakeGenericMachineTemplateCRD, cluster.DeepCopy(), kcp.DeepCopy(), tmpl.DeepCopy()}
+ for _, purpose := range []secret.Purpose{secret.ClusterCA, secret.FrontProxyCA, secret.ServiceAccount, secret.EtcdCA} {
+ s := clusterSecret.DeepCopy()
+ // Set the secret name to the purpose
+ s.Name = secret.Name(cluster.Name, purpose)
+ // Set the Secret Type to any type which signals this Secret is user-provided.
+ s.Type = corev1.SecretTypeOpaque
+ // Set a controller owner reference of an unknown type on the secret.
+ s.SetOwnerReferences([]metav1.OwnerReference{
+ {
+ APIVersion: bootstrapv1.GroupVersion.String(),
+ // This owner reference to a different controller should be preserved.
+ Kind: "OtherController",
+ Name: kcp.Name,
+ UID: kcp.UID,
+ Controller: pointer.Bool(true),
+ BlockOwnerDeletion: pointer.Bool(true),
+ },
+ })
+
+ objs = append(objs, s)
+ }
+
+ fakeClient := newFakeClient(objs...)
+ fmc.Reader = fakeClient
+ r := &KubeadmControlPlaneReconciler{
+ Client: fakeClient,
+ APIReader: fakeClient,
+ managementCluster: fmc,
+ managementClusterUncached: fmc,
+ }
+
+ _, err := r.reconcile(ctx, cluster, kcp)
+ g.Expect(err).To(BeNil())
+
+ secrets := &corev1.SecretList{}
+ g.Expect(fakeClient.List(ctx, secrets, client.InNamespace(cluster.Namespace), client.MatchingLabels{"testing": "yes"})).To(Succeed())
+ for _, secret := range secrets.Items {
+ g.Expect(secret.OwnerReferences).To(HaveLen(1))
+ g.Expect(secret.OwnerReferences).To(ContainElement(*metav1.NewControllerRef(kcp, bootstrapv1.GroupVersion.WithKind("OtherController"))))
+ }
+ })
+}
+
 func TestReconcileCertificateExpiries(t *testing.T) {
 g := NewWithT(t)
@@ -756,7 +918,9 @@ func TestReconcileCertificateExpiries(t *testing.T) {
 detectedExpiry := time.Now().Add(25 * 24 * time.Hour)
 cluster := newCluster(&types.NamespacedName{Name: "foo", Namespace: metav1.NamespaceDefault})
- kcp := &controlplanev1.KubeadmControlPlane{}
+ kcp := &controlplanev1.KubeadmControlPlane{
+ Status: controlplanev1.KubeadmControlPlaneStatus{Initialized: true},
+ }
 machineWithoutExpiryAnnotation := &clusterv1.Machine{
 ObjectMeta: metav1.ObjectMeta{
 Name: "machineWithoutExpiryAnnotation",
@@ -1769,3 +1933,34 @@ func newCluster(namespacedName *types.NamespacedName) *clusterv1.Cluster {
 },
 }
 }
+
+func getTestCACert(key *rsa.PrivateKey) (*x509.Certificate, error) {
+ cfg := certs.Config{
+ CommonName: "kubernetes",
+ }
+
+ now := time.Now().UTC()
+
+ tmpl := x509.Certificate{
+ SerialNumber: new(big.Int).SetInt64(0),
+ Subject: pkix.Name{
+ CommonName: cfg.CommonName,
+ Organization: cfg.Organization,
+ },
+ NotBefore: now.Add(time.Minute * -5),
+ NotAfter: now.Add(time.Hour * 24), // 1 day
+ KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature |
x509.KeyUsageCertSign,
+ MaxPathLenZero: true,
+ BasicConstraintsValid: true,
+ MaxPathLen: 0,
+ IsCA: true,
+ }
+
+ b, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, key.Public(), key)
+ if err != nil {
+ return nil, err
+ }
+
+ c, err := x509.ParseCertificate(b)
+ return c, err
+}
diff --git a/controlplane/kubeadm/internal/controllers/helpers.go b/controlplane/kubeadm/internal/controllers/helpers.go
index e1e800555215..3355f942409c 100644
--- a/controlplane/kubeadm/internal/controllers/helpers.go
+++ b/controlplane/kubeadm/internal/controllers/helpers.go
@@ -74,12 +74,8 @@ func (r *KubeadmControlPlaneReconciler) reconcileKubeconfig(ctx context.Context,
 return ctrl.Result{}, errors.Wrap(err, "failed to retrieve kubeconfig Secret")
 }
- // check if the kubeconfig secret was created by v1alpha2 controllers, and thus it has the Cluster as the owner instead of KCP;
- // if yes, adopt it.
- if util.IsOwnedByObject(configSecret, cluster) && !util.IsControlledBy(configSecret, kcp) {
- if err := r.adoptKubeconfigSecret(ctx, cluster, configSecret, controllerOwnerRef); err != nil {
- return ctrl.Result{}, err
- }
+ if err := r.adoptKubeconfigSecret(ctx, cluster, configSecret, kcp); err != nil {
+ return ctrl.Result{}, err
 }
 // only do rotation on owned secrets
@@ -102,21 +98,45 @@ func (r *KubeadmControlPlaneReconciler) reconcileKubeconfig(ctx context.Context,
 return ctrl.Result{}, nil
 }
-func (r *KubeadmControlPlaneReconciler) adoptKubeconfigSecret(ctx context.Context, cluster *clusterv1.Cluster, configSecret *corev1.Secret, controllerOwnerRef metav1.OwnerReference) error {
+// adoptKubeconfigSecret ensures the kubeconfig secret has an owner reference to the KubeadmControlPlane if it is not a user-provided secret.
+func (r *KubeadmControlPlaneReconciler) adoptKubeconfigSecret(ctx context.Context, cluster *clusterv1.Cluster, configSecret *corev1.Secret, kcp *controlplanev1.KubeadmControlPlane) error {
 log := ctrl.LoggerFrom(ctx)
- log.Info("Adopting KubeConfig secret created by v1alpha2 controllers", "Secret", klog.KObj(configSecret))
+ controller := metav1.GetControllerOf(configSecret)
+ // If the Type doesn't match the CAPI-created secret type this is a no-op.
+ if configSecret.Type != clusterv1.ClusterSecretType {
+ return nil
+ }
+ // If the secret is already controlled by KCP this is a no-op.
+ if controller != nil && controller.Kind == "KubeadmControlPlane" {
+ return nil
+ }
+ log.Info("Adopting KubeConfig secret", "Secret", klog.KObj(configSecret))
 patch, err := patch.NewHelper(configSecret, r.Client)
 if err != nil {
 return errors.Wrap(err, "failed to create patch helper for the kubeconfig secret")
 }
- configSecret.OwnerReferences = util.RemoveOwnerRef(configSecret.OwnerReferences, metav1.OwnerReference{
- APIVersion: clusterv1.GroupVersion.String(),
- Kind: "Cluster",
- Name: cluster.Name,
- UID: cluster.UID,
- })
- configSecret.OwnerReferences = util.EnsureOwnerRef(configSecret.OwnerReferences, controllerOwnerRef)
+
+ // If the kubeconfig secret was created by v1alpha2 controllers it has the Cluster as the owner instead of KCP.
+ // In this case remove the ownerReference to the Cluster.
+ if util.IsOwnedByObject(configSecret, cluster) {
+ configSecret.SetOwnerReferences(util.RemoveOwnerRef(configSecret.OwnerReferences, metav1.OwnerReference{
+ APIVersion: clusterv1.GroupVersion.String(),
+ Kind: "Cluster",
+ Name: cluster.Name,
+ UID: cluster.UID,
+ }))
+ }
+
+ // Remove the current controller if one exists.
+ if controller != nil { + configSecret.SetOwnerReferences(util.RemoveOwnerRef(configSecret.OwnerReferences, *controller)) + } + + // Add the KubeadmControlPlane as the controller for this secret. + configSecret.OwnerReferences = util.EnsureOwnerRef(configSecret.OwnerReferences, + *metav1.NewControllerRef(kcp, controlplanev1.GroupVersion.WithKind("KubeadmControlPlane"))) + if err := patch.Patch(ctx, configSecret); err != nil { return errors.Wrap(err, "failed to patch the kubeconfig secret") } @@ -275,6 +295,7 @@ func (r *KubeadmControlPlaneReconciler) generateMachine(ctx context.Context, kcp Namespace: kcp.Namespace, Labels: internal.ControlPlaneMachineLabelsForCluster(kcp, cluster.Name), Annotations: map[string]string{}, + // Note: by setting the ownerRef on creation we signal to the Machine controller that this is not a stand-alone Machine. OwnerReferences: []metav1.OwnerReference{ *metav1.NewControllerRef(kcp, controlplanev1.GroupVersion.WithKind("KubeadmControlPlane")), }, @@ -286,13 +307,12 @@ func (r *KubeadmControlPlaneReconciler) generateMachine(ctx context.Context, kcp Bootstrap: clusterv1.Bootstrap{ ConfigRef: bootstrapRef, }, - FailureDomain: failureDomain, - NodeDrainTimeout: kcp.Spec.MachineTemplate.NodeDrainTimeout, + FailureDomain: failureDomain, + NodeDrainTimeout: kcp.Spec.MachineTemplate.NodeDrainTimeout, + NodeDeletionTimeout: kcp.Spec.MachineTemplate.NodeDeletionTimeout, + NodeVolumeDetachTimeout: kcp.Spec.MachineTemplate.NodeVolumeDetachTimeout, }, } - if kcp.Spec.MachineTemplate.NodeDeletionTimeout != nil { - machine.Spec.NodeDeletionTimeout = kcp.Spec.MachineTemplate.NodeDeletionTimeout - } // Machine's bootstrap config may be missing ClusterConfiguration if it is not the first machine in the control plane. // We store ClusterConfiguration as annotation here to detect any changes in KCP ClusterConfiguration and rollout the machine if any. diff --git a/controlplane/kubeadm/internal/controllers/helpers_test.go b/controlplane/kubeadm/internal/controllers/helpers_test.go index 5b8a132d8e2c..0aafd08a318a 100644 --- a/controlplane/kubeadm/internal/controllers/helpers_test.go +++ b/controlplane/kubeadm/internal/controllers/helpers_test.go @@ -18,13 +18,14 @@ package controllers import ( "testing" + "time" . "github.com/onsi/gomega" corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" "k8s.io/client-go/tools/record" - utilpointer "k8s.io/utils/pointer" + "k8s.io/utils/pointer" ctrl "sigs.k8s.io/controller-runtime" "sigs.k8s.io/controller-runtime/pkg/client" @@ -33,6 +34,7 @@ import ( "sigs.k8s.io/cluster-api/controllers/external" controlplanev1 "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1" "sigs.k8s.io/cluster-api/controlplane/kubeadm/internal" + "sigs.k8s.io/cluster-api/internal/test/builder" "sigs.k8s.io/cluster-api/util/conditions" "sigs.k8s.io/cluster-api/util/kubeconfig" "sigs.k8s.io/cluster-api/util/secret" @@ -237,11 +239,22 @@ func TestReconcileKubeconfigSecretDoesNotAdoptsUserSecrets(t *testing.T) { }, } - existingKubeconfigSecret := kubeconfig.GenerateSecretWithOwner( - client.ObjectKey{Name: "foo", Namespace: metav1.NamespaceDefault}, - []byte{}, - metav1.OwnerReference{}, // user defined secrets are not owned by the cluster. 
- ) + existingKubeconfigSecret := &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: secret.Name("foo", secret.Kubeconfig), + Namespace: metav1.NamespaceDefault, + Labels: map[string]string{ + clusterv1.ClusterLabelName: "foo", + }, + OwnerReferences: []metav1.OwnerReference{}, + }, + Data: map[string][]byte{ + secret.KubeconfigDataName: {}, + }, + // KCP identifies CAPI-created Secrets using the clusterv1.ClusterSecretType. Setting any other type allows + // the controllers to treat it as a user-provided Secret. + Type: corev1.SecretTypeOpaque, + } fakeClient := newFakeClient(kcp.DeepCopy(), existingKubeconfigSecret.DeepCopy()) r := &KubeadmControlPlaneReconciler{ @@ -503,7 +516,10 @@ func TestKubeadmControlPlaneReconciler_generateMachine(t *testing.T) { Spec: controlplanev1.KubeadmControlPlaneSpec{ Version: "v1.16.6", MachineTemplate: controlplanev1.KubeadmControlPlaneMachineTemplate{ - ObjectMeta: kcpMachineTemplateObjectMeta, + ObjectMeta: kcpMachineTemplateObjectMeta, + NodeVolumeDetachTimeout: &metav1.Duration{Duration: 10 * time.Second}, + NodeDeletionTimeout: &metav1.Duration{Duration: 10 * time.Second}, + NodeDrainTimeout: &metav1.Duration{Duration: 10 * time.Second}, }, }, } @@ -522,11 +538,14 @@ func TestKubeadmControlPlaneReconciler_generateMachine(t *testing.T) { } expectedMachineSpec := clusterv1.MachineSpec{ ClusterName: cluster.Name, - Version: utilpointer.String(kcp.Spec.Version), + Version: pointer.String(kcp.Spec.Version), Bootstrap: clusterv1.Bootstrap{ ConfigRef: bootstrapRef.DeepCopy(), }, - InfrastructureRef: *infraRef.DeepCopy(), + InfrastructureRef: *infraRef.DeepCopy(), + NodeVolumeDetachTimeout: &metav1.Duration{Duration: 10 * time.Second}, + NodeDeletionTimeout: &metav1.Duration{Duration: 10 * time.Second}, + NodeDrainTimeout: &metav1.Duration{Duration: 10 * time.Second}, } r := &KubeadmControlPlaneReconciler{ Client: fakeClient, @@ -549,6 +568,10 @@ func TestKubeadmControlPlaneReconciler_generateMachine(t *testing.T) { for k, v := range kcpMachineTemplateObjectMeta.Labels { g.Expect(machine.Labels[k]).To(Equal(v)) } + g.Expect(machine.Labels[clusterv1.ClusterLabelName]).To(Equal(cluster.Name)) + g.Expect(machine.Labels[clusterv1.MachineControlPlaneLabelName]).To(Equal("")) + g.Expect(machine.Labels[clusterv1.MachineControlPlaneNameLabel]).To(Equal(kcp.Name)) + for k, v := range kcpMachineTemplateObjectMeta.Annotations { g.Expect(machine.Annotations[k]).To(Equal(v)) } @@ -556,6 +579,7 @@ func TestKubeadmControlPlaneReconciler_generateMachine(t *testing.T) { // Verify that machineTemplate.ObjectMeta in KCP has not been modified. 
g.Expect(kcp.Spec.MachineTemplate.ObjectMeta.Labels).NotTo(HaveKey(clusterv1.ClusterLabelName))
 g.Expect(kcp.Spec.MachineTemplate.ObjectMeta.Labels).NotTo(HaveKey(clusterv1.MachineControlPlaneLabelName))
+ g.Expect(kcp.Spec.MachineTemplate.ObjectMeta.Labels).NotTo(HaveKey(clusterv1.MachineControlPlaneNameLabel))
 g.Expect(kcp.Spec.MachineTemplate.ObjectMeta.Annotations).NotTo(HaveKey(controlplanev1.KubeadmClusterConfigurationAnnotation))
 }
@@ -606,3 +630,103 @@ func TestKubeadmControlPlaneReconciler_generateKubeadmConfig(t *testing.T) {
 g.Expect(bootstrapConfig.OwnerReferences).To(ContainElement(expectedOwner))
 g.Expect(bootstrapConfig.Spec).To(Equal(spec))
 }
+
+func TestKubeadmControlPlaneReconciler_adoptKubeconfigSecret(t *testing.T) {
+ g := NewWithT(t)
+ otherOwner := metav1.OwnerReference{
+ Name: "testcontroller",
+ UID: "5",
+ Kind: "OtherController",
+ APIVersion: clusterv1.GroupVersion.String(),
+ Controller: pointer.Bool(true),
+ BlockOwnerDeletion: pointer.Bool(true),
+ }
+ clusterName := "test1"
+ cluster := builder.Cluster(metav1.NamespaceDefault, clusterName).Build()
+
+ // A kubeconfig secret created by CAPI controllers with no owner references.
+ capiKubeadmConfigSecretNoOwner := kubeconfig.GenerateSecretWithOwner(
+ client.ObjectKey{Name: clusterName, Namespace: metav1.NamespaceDefault},
+ []byte{},
+ metav1.OwnerReference{})
+ capiKubeadmConfigSecretNoOwner.OwnerReferences = []metav1.OwnerReference{}
+
+ // A kubeconfig secret created by CAPI controllers with a non-KCP owner reference.
+ capiKubeadmConfigSecretOtherOwner := capiKubeadmConfigSecretNoOwner.DeepCopy()
+ capiKubeadmConfigSecretOtherOwner.OwnerReferences = []metav1.OwnerReference{otherOwner}
+
+ // A user-provided kubeconfig secret with no owner reference.
+ userProvidedKubeadmConfigSecretNoOwner := kubeconfig.GenerateSecretWithOwner(
+ client.ObjectKey{Name: clusterName, Namespace: metav1.NamespaceDefault},
+ []byte{},
+ metav1.OwnerReference{})
+ userProvidedKubeadmConfigSecretNoOwner.Type = corev1.SecretTypeOpaque
+
+ // A user-provided kubeconfig secret with a non-KCP owner reference.
+ userProvidedKubeadmConfigSecretOtherOwner := userProvidedKubeadmConfigSecretNoOwner.DeepCopy()
+ userProvidedKubeadmConfigSecretOtherOwner.OwnerReferences = []metav1.OwnerReference{otherOwner}
+
+ kcp := &controlplanev1.KubeadmControlPlane{
+ TypeMeta: metav1.TypeMeta{
+ Kind: "KubeadmControlPlane",
+ APIVersion: controlplanev1.GroupVersion.String(),
+ },
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "testControlPlane",
+ Namespace: cluster.Namespace,
+ },
+ }
+ tests := []struct {
+ name string
+ configSecret *corev1.Secret
+ expectedOwnerRef metav1.OwnerReference
+ }{
+ {
+ name: "add KCP owner reference on kubeconfig secret generated by CAPI",
+ configSecret: capiKubeadmConfigSecretNoOwner,
+ expectedOwnerRef: metav1.OwnerReference{
+ Name: kcp.Name,
+ UID: kcp.UID,
+ Kind: kcp.Kind,
+ APIVersion: kcp.APIVersion,
+ Controller: pointer.Bool(true),
+ BlockOwnerDeletion: pointer.Bool(true),
+ },
+ },
+ {
+ name: "replace owner reference with KCP on kubeconfig secret generated by CAPI with other owner",
+ configSecret: capiKubeadmConfigSecretOtherOwner,
+ expectedOwnerRef: metav1.OwnerReference{
+ Name: kcp.Name,
+ UID: kcp.UID,
+ Kind: kcp.Kind,
+ APIVersion: kcp.APIVersion,
+ Controller: pointer.Bool(true),
+ BlockOwnerDeletion: pointer.Bool(true),
+ },
+ },
+ {
+ name: "don't add ownerReference on kubeconfig secret provided by user",
+ configSecret: userProvidedKubeadmConfigSecretNoOwner,
+ expectedOwnerRef: metav1.OwnerReference{},
+ },
+ {
+ name: "don't replace ownerReference on kubeconfig secret provided by user",
+ configSecret: userProvidedKubeadmConfigSecretOtherOwner,
+ expectedOwnerRef: otherOwner,
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ fakeClient := newFakeClient(cluster, kcp, tt.configSecret)
+ r := &KubeadmControlPlaneReconciler{
+ APIReader: fakeClient,
+ Client: fakeClient,
+ }
+ g.Expect(r.adoptKubeconfigSecret(ctx, cluster, tt.configSecret, kcp)).To(Succeed())
+ actualSecret := &corev1.Secret{}
+ g.Expect(fakeClient.Get(ctx, client.ObjectKey{Namespace: tt.configSecret.Namespace, Name: tt.configSecret.Name}, actualSecret)).To(Succeed())
+ g.Expect(tt.configSecret.GetOwnerReferences()).To(ConsistOf(tt.expectedOwnerRef))
+ })
+ }
+}
diff --git a/controlplane/kubeadm/internal/workload_cluster.go b/controlplane/kubeadm/internal/workload_cluster.go
index 64e9477bb452..99f3fc1b3524 100644
--- a/controlplane/kubeadm/internal/workload_cluster.go
+++ b/controlplane/kubeadm/internal/workload_cluster.go
@@ -47,6 +47,7 @@ import (
 kubeadmtypes "sigs.k8s.io/cluster-api/bootstrap/kubeadm/types"
 controlplanev1 "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1"
 "sigs.k8s.io/cluster-api/controlplane/kubeadm/internal/proxy"
+ "sigs.k8s.io/cluster-api/internal/util/kubeadm"
 "sigs.k8s.io/cluster-api/util"
 "sigs.k8s.io/cluster-api/util/certs"
 containerutil "sigs.k8s.io/cluster-api/util/container"
@@ -85,14 +86,6 @@ var (
 // NOTE: The following assumes that kubeadm version equals to Kubernetes version.
 minVerUnversionedKubeletConfig = semver.MustParse("1.24.0")
- // minKubernetesVersionImageRegistryMigration is first kubernetes version where
- // the default image registry is registry.k8s.io instead of k8s.gcr.io.
- minKubernetesVersionImageRegistryMigration = semver.MustParse("1.22.0")
-
- // nextKubernetesVersionImageRegistryMigration is the next minor version after
- // the default image registry changed to registry.k8s.io.
- nextKubernetesVersionImageRegistryMigration = semver.MustParse("1.26.0") - // ErrControlPlaneMinNodes signals that a cluster doesn't meet the minimum required nodes // to remove an etcd member. ErrControlPlaneMinNodes = errors.New("cluster has fewer than 2 control plane nodes; removing an etcd member is not supported") @@ -623,12 +616,15 @@ func yamlToUnstructured(rawYAML []byte) (*unstructured.Unstructured, error) { } // ImageRepositoryFromClusterConfig returns the image repository to use. It returns: -// * clusterConfig.ImageRepository if set. -// * "registry.k8s.io" if v1.22 <= version < v1.26 to migrate to the new registry -// * "" otherwise. -// Beginning with kubernetes v1.22, the default registry for kubernetes is registry.k8s.io -// instead of k8s.gcr.io which is why references should get migrated when upgrading to v1.22. -// The migration follows the behavior of `kubeadm upgrade`. +// - clusterConfig.ImageRepository if set. +// - else either k8s.gcr.io or registry.k8s.io depending on the default registry of the kubeadm +// binary of the given kubernetes version. This is only done for Kubernetes versions >= v1.22.0 +// and < v1.26.0 because in this version range the default registry was changed. +// +// Note: Please see the following issue for more context: https://github.com/kubernetes-sigs/cluster-api/issues/7833 +// tl;dr is that the imageRepository must be in sync with the default registry of kubeadm. +// Otherwise kubeadm preflight checks will fail because kubeadm is trying to pull the CoreDNS image +// from the wrong repository (/coredns instead of /coredns/coredns). func ImageRepositoryFromClusterConfig(clusterConfig *bootstrapv1.ClusterConfiguration, kubernetesVersion semver.Version) string { // If ImageRepository is explicitly specified, return early. if clusterConfig != nil && @@ -636,11 +632,11 @@ func ImageRepositoryFromClusterConfig(clusterConfig *bootstrapv1.ClusterConfigur return clusterConfig.ImageRepository } - // If v1.22 <= version < v1.26 return the default Kubernetes image repository to - // migrate to the new location and not cause changes else. - if kubernetesVersion.GTE(minKubernetesVersionImageRegistryMigration) && - kubernetesVersion.LT(nextKubernetesVersionImageRegistryMigration) { - return kubernetesImageRepository + // If v1.22.0 <= version < v1.26.0 return the default registry of the + // corresponding kubeadm binary. + if kubernetesVersion.GTE(kubeadm.MinKubernetesVersionImageRegistryMigration) && + kubernetesVersion.LT(kubeadm.NextKubernetesVersionImageRegistryMigration) { + return kubeadm.GetDefaultRegistry(kubernetesVersion) } // Use defaulting or current values otherwise. diff --git a/controlplane/kubeadm/internal/workload_cluster_coredns.go b/controlplane/kubeadm/internal/workload_cluster_coredns.go index a105a61a89a4..9ebf7eda9950 100644 --- a/controlplane/kubeadm/internal/workload_cluster_coredns.go +++ b/controlplane/kubeadm/internal/workload_cluster_coredns.go @@ -35,6 +35,7 @@ import ( bootstrapv1 "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1" controlplanev1 "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1" + "sigs.k8s.io/cluster-api/internal/util/kubeadm" containerutil "sigs.k8s.io/cluster-api/util/container" "sigs.k8s.io/cluster-api/util/patch" "sigs.k8s.io/cluster-api/util/version" @@ -47,11 +48,8 @@ const ( coreDNSVolumeKey = "config-volume" coreDNSClusterRoleName = "system:coredns" - // kubernetesImageRepository is the default Kubernetes image repository for build artifacts. 
- kubernetesImageRepository = "registry.k8s.io" - oldKubernetesImageRepository = "k8s.gcr.io" - oldCoreDNSImageName = "coredns" - coreDNSImageName = "coredns/coredns" + oldCoreDNSImageName = "coredns" + coreDNSImageName = "coredns/coredns" oldControlPlaneTaint = "node-role.kubernetes.io/master" // Deprecated: https://github.com/kubernetes/kubeadm/issues/2200 controlPlaneTaint = "node-role.kubernetes.io/control-plane" @@ -200,11 +198,11 @@ func (w *Workload) getCoreDNSInfo(ctx context.Context, clusterConfig *bootstrapv toImageRepository := parsedImage.Repository // Overwrite the image repository if a value was explicitly set or an upgrade is required. if imageRegistryRepository := ImageRepositoryFromClusterConfig(clusterConfig, version); imageRegistryRepository != "" { - if imageRegistryRepository == kubernetesImageRepository { - // Only patch to KubernetesImageRepository if oldKubernetesImageRepository is set as prefix. - if strings.HasPrefix(toImageRepository, oldKubernetesImageRepository) { - // Ensure to keep the repository subpaths when patching from oldKubernetesImageRepository to new KubernetesImageRepository. - toImageRepository = strings.TrimSuffix(imageRegistryRepository+strings.TrimPrefix(toImageRepository, oldKubernetesImageRepository), "/") + if imageRegistryRepository == kubeadm.DefaultImageRepository { + // Only patch to DefaultImageRepository if OldDefaultImageRepository is set as prefix. + if strings.HasPrefix(toImageRepository, kubeadm.OldDefaultImageRepository) { + // Ensure to keep the repository subpaths when patching from OldDefaultImageRepository to new DefaultImageRepository. + toImageRepository = strings.TrimSuffix(imageRegistryRepository+strings.TrimPrefix(toImageRepository, kubeadm.OldDefaultImageRepository), "/") } } else { toImageRepository = strings.TrimSuffix(imageRegistryRepository, "/") @@ -235,7 +233,7 @@ func (w *Workload) getCoreDNSInfo(ctx context.Context, clusterConfig *bootstrapv // * "registry.k8s.io/coredns" to "registry.k8s.io/coredns/coredns" or // * "k8s.gcr.io/coredns" to "k8s.gcr.io/coredns/coredns" toImageName := parsedImage.Name - if (toImageRepository == oldKubernetesImageRepository || toImageRepository == kubernetesImageRepository) && + if (toImageRepository == kubeadm.OldDefaultImageRepository || toImageRepository == kubeadm.DefaultImageRepository) && toImageName == oldCoreDNSImageName && targetMajorMinorPatch.GTE(semver.MustParse("1.8.0")) { toImageName = coreDNSImageName } diff --git a/controlplane/kubeadm/internal/workload_cluster_coredns_test.go b/controlplane/kubeadm/internal/workload_cluster_coredns_test.go index 2436bc024b5e..bac6124e733b 100644 --- a/controlplane/kubeadm/internal/workload_cluster_coredns_test.go +++ b/controlplane/kubeadm/internal/workload_cluster_coredns_test.go @@ -1213,8 +1213,10 @@ func TestGetCoreDNSInfo(t *testing.T) { }, }, { - name: "rename to coredns/coredns when upgrading to coredns=1.8.0 and kubernetesVersion=1.21.0", - objs: []client.Object{newCoreDNSInfoDeploymentWithimage(image162), cm}, + name: "rename to coredns/coredns when upgrading to coredns=1.8.0 and kubernetesVersion=1.22.16", + // 1.22.16 uses k8s.gcr.io as default registry. Thus the registry doesn't get changed as + // FromImage is already using k8s.gcr.io. 
+ objs: []client.Object{newCoreDNSInfoDeploymentWithimage("k8s.gcr.io/coredns:1.6.2"), cm}, clusterConfig: &bootstrapv1.ClusterConfiguration{ DNS: bootstrapv1.DNS{ ImageMeta: bootstrapv1.ImageMeta{ @@ -1222,18 +1224,42 @@ func TestGetCoreDNSInfo(t *testing.T) { }, }, }, - kubernetesVersion: semver.MustParse("1.21.0"), + kubernetesVersion: semver.MustParse("1.22.16"), expectedInfo: coreDNSInfo{ CurrentMajorMinorPatch: "1.6.2", FromImageTag: "1.6.2", TargetMajorMinorPatch: "1.8.0", - FromImage: image162, + FromImage: "k8s.gcr.io/coredns:1.6.2", ToImage: "k8s.gcr.io/coredns/coredns:1.8.0", ToImageTag: "1.8.0", }, }, + { + name: "rename to coredns/coredns when upgrading to coredns=1.8.0 and kubernetesVersion=1.22.17", + // 1.22.17 has registry.k8s.io as default registry. Thus the registry gets changed as + // FromImage is using k8s.gcr.io. + objs: []client.Object{newCoreDNSInfoDeploymentWithimage("k8s.gcr.io/coredns:1.6.2"), cm}, + clusterConfig: &bootstrapv1.ClusterConfiguration{ + DNS: bootstrapv1.DNS{ + ImageMeta: bootstrapv1.ImageMeta{ + ImageTag: "1.8.0", + }, + }, + }, + kubernetesVersion: semver.MustParse("1.22.17"), + expectedInfo: coreDNSInfo{ + CurrentMajorMinorPatch: "1.6.2", + FromImageTag: "1.6.2", + TargetMajorMinorPatch: "1.8.0", + FromImage: "k8s.gcr.io/coredns:1.6.2", + ToImage: "registry.k8s.io/coredns/coredns:1.8.0", + ToImageTag: "1.8.0", + }, + }, { name: "rename to coredns/coredns when upgrading to coredns=1.8.0 and kubernetesVersion=1.26.0", + // 1.26.0 uses registry.k8s.io as default registry. Thus the registry doesn't get changed as + // FromImage is already using registry.k8s.io. objs: []client.Object{newCoreDNSInfoDeploymentWithimage("registry.k8s.io/coredns:1.6.2"), cm}, clusterConfig: &bootstrapv1.ClusterConfiguration{ DNS: bootstrapv1.DNS{ @@ -1242,7 +1268,7 @@ func TestGetCoreDNSInfo(t *testing.T) { }, }, }, - kubernetesVersion: semver.MustParse("1.24.0"), + kubernetesVersion: semver.MustParse("1.26.0"), expectedInfo: coreDNSInfo{ CurrentMajorMinorPatch: "1.6.2", FromImageTag: "1.6.2", @@ -1253,7 +1279,9 @@ func TestGetCoreDNSInfo(t *testing.T) { }, }, { - name: "patches ImageRepository to registry.k8s.io if it's set on neither global nor DNS-level and kubernetesVersion >= v1.22 and rename to coredns/coredns", + name: "patches ImageRepository to registry.k8s.io if it's set on neither global nor DNS-level and kubernetesVersion >= v1.22.17 and rename to coredns/coredns", + // 1.22.17 has registry.k8s.io as default registry. Thus the registry gets changed as + // FromImage is using k8s.gcr.io. objs: []client.Object{newCoreDNSInfoDeploymentWithimage(image162), cm}, clusterConfig: &bootstrapv1.ClusterConfiguration{ DNS: bootstrapv1.DNS{ @@ -1262,7 +1290,7 @@ func TestGetCoreDNSInfo(t *testing.T) { }, }, }, - kubernetesVersion: semver.MustParse("1.22.0"), + kubernetesVersion: semver.MustParse("1.22.17"), expectedInfo: coreDNSInfo{ CurrentMajorMinorPatch: "1.6.2", FromImageTag: "1.6.2", diff --git a/controlplane/kubeadm/main.go b/controlplane/kubeadm/main.go index 7d3f36080cdf..3d94d1d010b6 100644 --- a/controlplane/kubeadm/main.go +++ b/controlplane/kubeadm/main.go @@ -129,7 +129,7 @@ func InitFlags(fs *pflag.FlagSet) { fs.StringVar(&watchFilterValue, "watch-filter", "", fmt.Sprintf("Label value that the controller watches to reconcile cluster-api objects. Label key is always %s. 
If unspecified, the controller watches for all cluster-api objects.", clusterv1.WatchLabel))
- fs.IntVar(&webhookPort, "webhook-port", 9443,
+ fs.IntVar(&webhookPort, "webhook-port", 0,
 "Webhook Server port")
 fs.StringVar(&webhookCertDir, "webhook-cert-dir", "/tmp/k8s-webhook-server/serving-certs/",
@@ -221,6 +221,10 @@ func main() {
 }
 func setupChecks(mgr ctrl.Manager) {
+ if webhookPort == 0 {
+ setupLog.V(0).Info("webhook is disabled, skipping webhook healthcheck setup")
+ return
+ }
 if err := mgr.AddReadyzCheck("webhook", mgr.GetWebhookServer().StartedChecker()); err != nil {
 setupLog.Error(err, "unable to create ready check")
 os.Exit(1)
@@ -233,6 +237,10 @@ func setupChecks(mgr ctrl.Manager) {
 }
 func setupReconcilers(ctx context.Context, mgr ctrl.Manager) {
+ if webhookPort != 0 {
+ setupLog.V(0).Info("webhook is enabled, skipping reconcilers setup")
+ return
+ }
 // Set up a ClusterCacheTracker to provide to controllers
 // requiring a connection to a remote cluster
 log := ctrl.Log.WithName("remote").WithName("ClusterCacheTracker")
@@ -273,6 +281,11 @@ func setupReconcilers(ctx context.Context, mgr ctrl.Manager) {
 }
 func setupWebhooks(mgr ctrl.Manager) {
+ if webhookPort == 0 {
+ setupLog.V(0).Info("webhook is disabled, skipping webhook setup")
+ return
+ }
+
 if err := (&controlplanev1.KubeadmControlPlane{}).SetupWebhookWithManager(mgr); err != nil {
 setupLog.Error(err, "unable to create webhook", "webhook", "KubeadmControlPlane")
 os.Exit(1)
diff --git a/docs/book/src/clusterctl/commands/init.md b/docs/book/src/clusterctl/commands/init.md
index 47f8bda830be..ee4b9501f084 100644
--- a/docs/book/src/clusterctl/commands/init.md
+++ b/docs/book/src/clusterctl/commands/init.md
@@ -117,6 +117,8 @@ API calls to the GitHub API. It is possible to configure the go proxy url using
for go itself (defaults to `https://proxy.golang.org`). To immediately fallback to the GitHub client and not use a
go proxy, the environment variable could get set to `GOPROXY=off` or `GOPROXY=direct`.
+If a provider does not follow Go's semantic versioning, `clusterctl` may fail when detecting the correct version.
+In such cases, disabling the go proxy functionality via `GOPROXY=off` should be considered.
 
See [clusterctl configuration](../configuration.md) for more info about provider repository configurations.
@@ -188,7 +190,7 @@ If this happens, there are no guarantees about the proper functioning of `cluste
Cluster API providers require a cert-manager version supporting the `cert-manager.io/v1` API to be installed in the cluster.
While doing init, clusterctl checks if there is a version of cert-manager already installed. If not, clusterctl will
-install a default version (currently cert-manager v1.10.0). See [clusterctl configuration](../configuration.md) for
+install a default version (currently cert-manager v1.10.1). See [clusterctl configuration](../configuration.md) for
available options to customize this operation.
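
For reference, a minimal sketch of the workaround suggested in the init.md change above: setting `GOPROXY=off` for a single `clusterctl init` invocation. The `docker` infrastructure provider is only an illustrative choice; any provider name works the same way.

```bash
# Disable the Go proxy lookup so clusterctl falls back to the GitHub client
# when resolving provider versions (useful when a provider does not follow
# Go's semantic versioning).
GOPROXY=off clusterctl init --infrastructure docker
```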