docs: repave clarifications (#1548) (#1582)
* Repave clarifications

* Address feedback on repave

* Apply suggestions from code review

Co-authored-by: Rita Watson <[email protected]>

---------

Co-authored-by: Karl Cardenas <[email protected]>
Co-authored-by: Rita Watson <[email protected]>
(cherry picked from commit 3657d0f)

Co-authored-by: Romain Decker <[email protected]>
1 parent 18066e1 commit cf8bd6c
Showing 5 changed files with 26 additions and 6 deletions.
17 changes: 15 additions & 2 deletions .github/workflows/backport.yaml
@@ -16,10 +16,23 @@ jobs:
    || (github.event.action == 'closed')
    )
    steps:

+     - name: Retrieve Credentials
+       id: import-secrets
+       uses: hashicorp/[email protected]
+       with:
+         url: https://vault.prism.spectrocloud.com
+         method: approle
+         roleId: ${{ secrets.VAULT_ROLE_ID }}
+         secretId: ${{ secrets.VAULT_SECRET_ID }}
+         secrets: /providers/github/organizations/spectrocloud/token?org_name=spectrocloud token | VAULT_GITHUB_TOKEN

      - name: Backport Action
-       uses: sqren/backport-github-action@v8.9.3
+       uses: sqren/backport-github-action@v9.3.0-a
        with:
-         github_token: ${{ secrets.GITHUB_TOKEN }}
+         # We are using a PAT token through our Vault Operator to address the issue of PR workflows not being triggered.
+         # Refer to issue https://github.com/sqren/backport-github-action/issues/79 for more details.
+         github_token: ${{ steps.import-secrets.outputs.VAULT_GITHUB_TOKEN }}
          auto_backport_label_prefix: backport-
          add_original_reviewers: true

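The `secrets` entry in the new step uses vault-action's `<path> <field> | <NAME>` mapping syntax: the field read from Vault is exposed as a step output (and environment variable) under the chosen name, which is why the workflow can reference `steps.import-secrets.outputs.VAULT_GITHUB_TOKEN`. Below is a minimal sketch of another consumer of that output, assuming the same `import-secrets` step id; the `gh` invocation and repository name are illustrative, not part of this workflow.

      # Illustrative extra step, not in the diff above: consumes the token
      # that the "import-secrets" step exports as a step output.
      - name: List open pull requests with the Vault-sourced token
        env:
          GH_TOKEN: ${{ steps.import-secrets.outputs.VAULT_GITHUB_TOKEN }}  # assumed step id from above
        run: gh pr list --repo spectrocloud/librarium  # repository name is an assumption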
1 change: 0 additions & 1 deletion .github/workflows/version-branch-update.yaml
@@ -50,7 +50,6 @@ jobs:

      - run: npm ci

-
      - name: compile
        run: |
          make build
11 changes: 9 additions & 2 deletions docs/docs-content/clusters/cluster-management/node-pool.md
@@ -7,7 +7,7 @@ sidebar_position: 190
tags: ["clusters", "cluster management"]
---

-A node pool is a group of nodes within a cluster that all have the same configuration. Node pools allow you to create pools of nodes that can be used for different workloads. For example, you can create a node pool for your production workloads and another node pool for your development workloads. You can update node pools for active clusters or create a new node pool for the cluster.
+A node pool is a group of nodes within a cluster that all have the same configuration. You can use node pools for different workloads. For example, you can create a node pool for your production workloads and another for your development workloads. You can update node pools for active clusters or create a new one for the cluster.


:::caution
@@ -20,8 +20,15 @@ Ensure you exercise caution when modifying node pools. We recommend creating a [

In Kubernetes, the term "repave" refers to the process of replacing a node with a new node. [Repaving](../../glossary-all.md#repavement) is a common practice in Kubernetes to ensure that nodes are deployed with the latest version of the operating system and Kubernetes. Repaving is also used to replace nodes that are unhealthy or have failed. You can configure the repave time interval for a node pool.

-The ability to configure the repave time interval for all node pools except the master pool. The default repave time interval is 0 seconds. You can configure the node repave time interval during the cluster creation process or after the cluster is created. To modify the repave time interval after the cluster is created, follow the [Change a Node Pool](#edit-node-pool) instructions below.
+Different types of repaving operations may occur, depending on what causes them:

+* **Control plane repave**: This takes place when certain changes are made to the Kubernetes configuration, such as changing the **apiServer** specification. This type of repave also occurs when there are changes in the hardware specifications of the control plane nodes, such as during a node scale-up operation or when changing from one instance type to another. Control plane nodes are repaved sequentially.

+* **Worker node pool repave**: This happens when changes to a node pool's specifications cause the existing nodes to become incompatible with the pool's specified criteria. For instance, a repave occurs when you change the hardware specifications of a worker pool. Nodes within the affected pool are sequentially replaced with new nodes that meet the updated specifications.

+* **Full cluster repave**: This occurs if changes are made to the Operating System (OS) layer or if modifications to the Kubernetes layer impact all nodes, such as when upgrading to a different Kubernetes version. All nodes across all pools are sequentially repaved, starting with the control plane.

+You can customize the repave time interval for all node pools except the master pool. The default repave time interval is 0 seconds. You can adjust the node repave time interval during or after cluster creation. If you need to modify the repave time interval post-cluster creation, follow the [Change a Node Pool](#change-a-node-pool) instructions below.

## Node Pool Configuration Settings

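To make the new repave-interval text concrete, here is a hypothetical worker pool definition. The field names (`workerPools`, `instanceType`, `nodeRepaveInterval`) and values are illustrative only and are not the documented Palette schema.

  # Hypothetical node pool sketch; field names and values are illustrative,
  # not the documented Palette schema.
  workerPools:
    - name: production-pool
      count: 3
      instanceType: m5.xlarge     # changing this triggers a worker node pool repave
      nodeRepaveInterval: 900     # wait 900 seconds between node replacements
    - name: development-pool
      count: 2
      instanceType: t3.large
      nodeRepaveInterval: 0       # default: nodes are replaced back to back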
2 changes: 1 addition & 1 deletion docs/docs-content/glossary-all.md
@@ -158,7 +158,7 @@ Palette maintains a public pack registry containing various [packs](#pack) that

## Repavement

-Repavement is the process of replacing a Kubernetes node with a new one. This is typically done when a node is unhealthy or needs to be upgraded. The process involves draining the node, or in other words, migrating active workloads to another healthy node, and removing it from the cluster. A new node is created and configured with the same settings as the old node and added back to the cluster. The process is fully automated and does not require any manual intervention.
+Repavement is the process of replacing a Kubernetes node with a new one. This is typically done when a node is unhealthy or needs to be upgraded. The process involves migrating active workloads to another healthy node and removing the node from the [node pool](clusters/cluster-management/node-pool.md#repave-behavior-and-configuration). This is referred to as draining the node. A new node is created and configured with the same settings as the old node and added back to the pool. The process is fully automated and does not require manual intervention.

## Role

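As a companion to the glossary entry, this is what the drain phase looks like on the Node object itself: the node is marked unschedulable (cordoned) and tainted while workloads are migrated away, then deleted and replaced. A trimmed, illustrative example; the node name is hypothetical.

  # Trimmed Node object mid-drain: the unschedulable flag and matching taint
  # keep new Pods off the node while existing workloads are migrated away.
  apiVersion: v1
  kind: Node
  metadata:
    name: worker-pool-node-1   # hypothetical name
  spec:
    unschedulable: true
    taints:
      - key: node.kubernetes.io/unschedulable
        effect: NoSchedule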
1 change: 1 addition & 0 deletions docs/docs-content/troubleshooting/nodes.md
@@ -15,6 +15,7 @@ This page covers common debugging scenarios for nodes and clusters after they ha
## Scenario - Repaved Nodes

Palette performs a rolling upgrade on nodes when it detects a change in the `kubeadm` config. Below are some actions that cause the `kubeadm` configuration to change and result in nodes being upgraded:
+
* OS layer changes
* Kubernetes layer changes
* Kubernetes version upgrade
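Each of the listed actions changes the rendered `kubeadm` configuration, which is what Palette compares to decide whether a rolling upgrade is needed. A hypothetical fragment of the kind of configuration involved; the values are examples only.

  # Hypothetical kubeadm ClusterConfiguration fragment; editing fields like
  # these changes the kubeadm config and results in nodes being repaved.
  apiVersion: kubeadm.k8s.io/v1beta3
  kind: ClusterConfiguration
  kubernetesVersion: v1.27.5     # Kubernetes version upgrade
  apiServer:
    extraArgs:
      audit-log-maxage: "30"     # Kubernetes layer change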
