Fixed layout issue in the consolidation section #277

Open · wants to merge 1 commit into base: `master`
18 changes: 15 additions & 3 deletions content/karpenter/050_karpenter/consolidation.md

{{% notice info %}}
Please note: this version of the EKS and Karpenter workshop is deprecated since the launch of Karpenter v1beta, and has moved to a new home on AWS Workshop Studio: **[Karpenter: Amazon EKS Best Practice and Cloud Cost Optimization](https://catalog.us-east-1.prod.workshops.aws/workshops/f6b4587e-b8a5-4a43-be87-26bd85a70aba)**.

This workshop remains here as a reference for those who have used it before, or who want to run Karpenter on API version v1alpha5.
{{% /notice %}}

The log also shows some interesting lines that refer to `Cordon` and `Drain`.
{{% /expand %}}

#### 2) What should happen when we move to just 3 replicas?

{{%expand "Click here to show the answer" %}}

To scale the deployment down to 3 replicas, run the following command:

```
kubectl scale deployment inflate --replicas 3
```

Karpenter logs will show the sequence of events in the output, similar to the one below:

```
2022-09-05T08:37:38.649Z INFO controller.termination Deleted node {"commit": "b157d45", "node": "ip-192-168-15-83.eu-west-1.compute.internal"}
```
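Termination lines like the one above can be pulled out of a saved Karpenter log with standard tools; a minimal sketch (the log excerpt is copied verbatim from the output above, the file path is arbitrary):

```shell
# Save the log excerpt shown above, then count node-termination events.
cat > /tmp/karpenter-sample.log <<'EOF'
2022-09-05T08:37:38.649Z INFO controller.termination Deleted node {"commit": "b157d45", "node": "ip-192-168-15-83.eu-west-1.compute.internal"}
EOF
grep -c 'Deleted node' /tmp/karpenter-sample.log   # prints: 1
```

On a live cluster you would stream the controller logs instead of reading a saved file.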


{{% /expand %}}

#### 3) Increase the replicas to 10. What will happen if we change the provisioner to support both `on-demand` and `spot`?

{{%expand "Click here to show the answer" %}}

To scale the deployment back up to 10 replicas, run the following command:

```
kubectl scale deployment inflate --replicas 10
```
There should be no surprise here: as in previous cases, a new **2xlarge** node might be selected to place the extra 7 pods. Note that the provisioned instance type can depend on available spot capacity.
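The choice of a 2xlarge can be checked with back-of-the-envelope bin-packing; a minimal sketch (assumptions: each `inflate` replica requests 1 vCPU, and a 2xlarge instance exposes 8 vCPUs):

```shell
# Do the 7 extra pods fit on a single 2xlarge?
PODS=7
VCPUS_PER_POD=1
NODE_VCPUS=8
NEEDED=$((PODS * VCPUS_PER_POD))
if [ "$NEEDED" -le "$NODE_VCPUS" ]; then
  echo "the 7 extra pods fit on a single 2xlarge"
fi
```

In practice Karpenter also accounts for daemonset and system reservations, so the usable capacity is slightly lower than the raw vCPU count.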

To apply the change to the provisioner, we will re-deploy the default provisioner, this time supporting both `on-demand` and `spot`. Run the following command:

```
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
...
EOF
```

Karpenter logs should show a sequence of events similar to the one below.

```
2022-09-05T08:57:06.000Z INFO controller.termination Deleted node {"commit": "b157d45", "node": "ip-192-168-8-39.eu-west-1.compute.internal"}
```
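The key change in the re-deployed provisioner is the `karpenter.sh/capacity-type` requirement; a minimal sketch of the relevant fragment (v1alpha5 field names, shown for illustration rather than copied from the collapsed diff above):

```
spec:
  consolidation:
    enabled: true
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
```

With both values listed, Karpenter is free to pick whichever capacity type satisfies the pending pods at the lowest price.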

{{% /expand %}}

#### 4) Scale the `inflate` service to 3 replicas. What should happen?

{{%expand "Click here to show the answer" %}}

Run the following command to set the number of replicas to 3.
```
kubectl scale deployment inflate --replicas 3
```
For spot nodes, Karpenter only uses the **Deletion** consolidation mechanism: it will not replace a spot node with a cheaper spot node. Spot instance types are selected with the `price-capacity-optimized` strategy, and the cheapest spot instance type is often not launched because of its likelihood of interruption. Replacing the spot instance with a cheaper one would negate the `price-capacity-optimized` strategy and increase the interruption rate.

Effectively no changes will happen at this stage with your cluster.

{{% /expand %}}

#### 5) What other scenarios could prevent **Consolidation** events in your cluster?

{{%expand "Click here to show the answer" %}}

There are a few cases where requesting to deprovision a Karpenter node will not work. These include **Pod Disruption Budgets** and pods that have the **do-not-evict** annotation set.
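The **do-not-evict** annotation is set on the pod's metadata; a minimal sketch of the relevant fragment (annotation key as documented for Karpenter v1alpha5):

```
metadata:
  annotations:
    karpenter.sh/do-not-evict: "true"
```

Any node running a pod with this annotation will be excluded from consolidation until the pod is removed.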
There are other cases that Karpenter will consider when consolidating. Consolidation will not remove a node when its pods have:
* or some other scheduling restriction that couldn’t be fulfilled.

Finally, Karpenter consolidation will not attempt to consolidate a node that is running pods that are not owned by a controller (e.g. a ReplicaSet). In general we cannot assume that these pods would be recreated if they were evicted from the node that they are currently running on.
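Whether a pod is controller-owned can be read from its `ownerReferences`; a minimal sketch (the sample pod JSON and names are hypothetical; on a live cluster you would fetch it with `kubectl get pod <name> -o json`):

```shell
# A pod created by a ReplicaSet carries an ownerReferences entry;
# a bare pod does not, and Karpenter will leave its node alone.
cat > /tmp/pod.json <<'EOF'
{"metadata":{"name":"inflate-abc123","ownerReferences":[{"kind":"ReplicaSet","name":"inflate-5c8f96b897"}]}}
EOF
if grep -q '"ownerReferences"' /tmp/pod.json; then
  echo "controller-owned: Karpenter may consolidate its node"
else
  echo "bare pod: Karpenter will not consolidate its node"
fi
```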

{{% /expand %}}

#### 6) Scale the replicas to 0.

{{%expand "Click here to show the answer" %}}

In preparation for the next section, scale replicas to 0 using the following command.

```
kubectl scale deployment inflate --replicas 0
```

{{% /expand %}}

## What have we learned in this section:
