tutorials review (#814)
reviewed tutorials and edited them to make them clearer, fix md linter issues and formatting
nhennigan authored Nov 20, 2024
1 parent 7314175 commit dd2cd3b
Showing 5 changed files with 72 additions and 60 deletions.
22 changes: 11 additions & 11 deletions docs/src/capi/tutorial/getting-started.md
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
# Cluster provisioning with CAPI and Canonical K8s
# Cluster provisioning with CAPI and {{product}}

This guide covers how to deploy a {{product}} multi-node cluster
using Cluster API (CAPI).
@@ -16,7 +16,7 @@ curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.7.3/
sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
```

### Configure clusterctl
## Configure `clusterctl`

`clusterctl` contains a list of default providers. {{product}} is not yet
part of that list. To make `clusterctl` aware of the new
@@ -33,10 +33,10 @@ providers:
url: "https://github.com/canonical/cluster-api-k8s/releases/latest/control-plane-components.yaml"
```
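The snippet above shows only part of the file; a fuller sketch of `~/.cluster-api/clusterctl.yaml` might look like the following (the provider name `ck8s`, the `bootstrap-components.yaml` URL, and the `type` values are assumptions inferred from the `clusterctl init --bootstrap ck8s --control-plane ck8s` command used later in this guide):

```
providers:
  - name: "ck8s"
    url: "https://github.com/canonical/cluster-api-k8s/releases/latest/bootstrap-components.yaml"
    type: "BootstrapProvider"
  - name: "ck8s"
    url: "https://github.com/canonical/cluster-api-k8s/releases/latest/control-plane-components.yaml"
    type: "ControlPlaneProvider"
```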

### Set up a management cluster
## Set up a management cluster

The management cluster hosts the CAPI providers. You can use Canonical
Kubernetes as a management cluster:
The management cluster hosts the CAPI providers. You can use {{product}} as a
management cluster:

```
sudo snap install k8s --classic --edge
@@ -50,7 +50,7 @@ When setting up the management cluster, place its kubeconfig under
`~/.kube/config` so other tools such as `clusterctl` can discover and interact
with it.

### Prepare the infrastructure provider
## Prepare the infrastructure provider

Before generating a cluster, you need to configure the infrastructure provider.
Each provider has its own prerequisites. Please follow the instructions
@@ -68,7 +68,7 @@ chmod +x clusterawsadm
sudo mv clusterawsadm /usr/local/bin
```
`clusterawsadm` helps you bootstrapping the AWS environment that CAPI will use
`clusterawsadm` helps you bootstrap the AWS environment that CAPI will use.
It will also create the necessary IAM roles for you.
Start by setting up environment variables defining the AWS account to use, if
@@ -153,7 +153,7 @@ You are now all set to deploy the MAAS CAPI infrastructure provider.
````
`````

### Initialise the management cluster
## Initialise the management cluster

To initialise the management cluster with the latest released version of the
providers and the infrastructure of your choice:
@@ -162,7 +162,7 @@ providers:
clusterctl init --bootstrap ck8s --control-plane ck8s -i <infra-provider-of-choice>
```
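For example, if you prepared the AWS infrastructure provider earlier with `clusterawsadm`, the invocation would be:

```
clusterctl init --bootstrap ck8s --control-plane ck8s -i aws
```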

### Generate a cluster spec manifest
## Generate a cluster spec manifest

Once the bootstrap and control-plane controllers are up and running, you can
apply the cluster manifests with the specifications of the cluster you want to
@@ -198,7 +198,7 @@ set the cluster’s properties. Review the available options in the respective
definitions file and edit the cluster manifest (`cluster.yaml` above) to match
your needs.
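If you have not yet produced a manifest, `clusterctl` can generate one for you; a sketch using standard `clusterctl generate cluster` flags (the cluster name and Kubernetes version here are placeholders, not values prescribed by this guide):

```
clusterctl generate cluster my-cluster \
  --kubernetes-version v1.30.0 \
  --control-plane-machine-count=1 \
  --worker-machine-count=2 > cluster.yaml
```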

### Deploy the cluster
## Deploy the cluster

To deploy the cluster, run:

@@ -237,7 +237,7 @@ You can then see the workload nodes using:
KUBECONFIG=./kubeconfig sudo k8s kubectl get node
```

### Delete the cluster
## Delete the cluster

To delete a cluster:

23 changes: 10 additions & 13 deletions docs/src/charm/tutorial/getting-started.md
@@ -7,7 +7,7 @@ instances and also to integrate other operators to enhance or customise your
Kubernetes deployment. This tutorial will take you through installing
Kubernetes and some common first steps.

## What you will learn
## What will be covered

- How to install {{product}}
- Making a cluster
@@ -41,7 +41,9 @@ The currently available versions of the charm can be discovered by running:
```
juju info k8s
```

or

```
juju info k8s-worker
```
@@ -106,13 +108,14 @@ fetched earlier also includes a list of the relations possible, and from this
we can see that the k8s-worker requires "cluster: k8s-cluster".

To connect these charms and effectively add the worker to our cluster, we use
the 'integrate' command, adding the interface we wish to connect
the 'integrate' command, adding the interface we wish to connect.

```
juju integrate k8s k8s-worker:cluster
```

After a short time, the worker node will share information with the control plane and be joined to the cluster.
After a short time, the worker node will share information with the control plane
and be joined to the cluster.

## 4. Scale the cluster (Optional)

@@ -168,7 +171,8 @@ config file which will just require a bit of editing:
juju run k8s/0 get-kubeconfig >> ~/.kube/config
```

The output includes the root of the YAML, `kubeconfig: |`, so we can just use an editor to remove that line:
The output includes the root of the YAML, `kubeconfig: |`, so we can just use an
editor to remove that line:

```
nano ~/.kube/config
@@ -189,6 +193,7 @@ kubectl config show
```

...which should output something like this:

```
apiVersion: v1
clusters:
@@ -217,15 +222,7 @@ running a simple command such as:
kubectl get pods -A
```

This should return some pods, confirming the command can reach the cluster:

```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-4m5xj 1/1 Running 0 35m
kube-system cilium-operator-5ff9ddcfdb-b6qxm 1/1 Running 0 35m
kube-system coredns-7d4dffcffd-tvs6v 1/1 Running 0 35m
kube-system metrics-server-6f66c6cc48-wdxxk 1/1 Running 0 35m
```
This should return some pods, confirming the command can reach the cluster.

## Next steps

28 changes: 16 additions & 12 deletions docs/src/snap/tutorial/add-remove-nodes.md
@@ -1,4 +1,4 @@
# Adding and Removing Nodes
# Adding and removing nodes

Typical production clusters are hosted across multiple data centres and cloud
environments, enabling them to leverage geographical distribution for improved
@@ -56,14 +56,15 @@ sudo snap install --classic --edge k8s
### 2. Bootstrap your control plane node

<!-- markdownlint-restore -->
Bootstrap the control plane node:
Bootstrap the control plane node with default configuration:

```
sudo k8s bootstrap
```

{{product}} allows you to create two types of nodes: control plane and
worker nodes. In this example, we're creating a worker node.
worker nodes. In this example, we just initialised a control plane node; now
let's create a worker node.

Generate the token required for the worker node to join the cluster by executing
the following command on the control-plane node:
@@ -72,6 +73,9 @@ the following command on the control-plane node:
sudo k8s get-join-token worker --worker
```

Here, `worker` is the name assigned to the joining node, and the `--worker`
flag specifies that it joins the cluster as a worker node.

A base64 token will be printed to your terminal. Keep it handy as you will need
it for the next step.
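Since the token is plain base64, you can sanity-check that it copied cleanly before using it; a sketch with a stand-in value (not a real join token):

```
# Stand-in value for illustration only; a real token comes from
# `sudo k8s get-join-token` and is much longer.
TOKEN="aGVsbG8td29ybGQ="
# Decoding confirms the value is intact, valid base64:
printf '%s' "$TOKEN" | base64 -d
# prints: hello-world
```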

@@ -81,36 +85,36 @@ it for the next step.

### 3. Join the cluster on the worker node

To join the worker node to the cluster, run:
To join the worker node to the cluster, run the following on the worker node:

```
sudo k8s join-cluster <join-token>
```

After a few seconds, you should see: `Joined the cluster.`

### 4. View the status of your cluster

To see what we've accomplished in this tutorial:
Let's review what we've accomplished in this tutorial.

If you created a control plane node, check that it joined successfully:
To see the control plane node you created:

```
sudo k8s status
```

If you created a worker node, verify with this command:
Verify that the worker node joined successfully by running this command
on the control plane node:

```
sudo k8s kubectl get nodes
```

You should see that you've successfully added a worker or control plane node to
your cluster.
You should see that you've successfully added a worker to your cluster.

Congratulations!

### 4. Remove Nodes and delete the VMs (Optional)
### 5. Remove nodes and delete the VMs (Optional)

It is important to clean-up your nodes before tearing down the VMs.

@@ -139,7 +143,7 @@ multipass delete worker
multipass purge
```

## Next Steps
## Next steps

- Discover how to enable and configure Ingress resources: [Ingress][Ingress]
- Learn more about {{product}} with kubectl [How to use
39 changes: 21 additions & 18 deletions docs/src/snap/tutorial/getting-started.md
@@ -19,22 +19,24 @@ Install the {{product}} snap with:
sudo snap install k8s --edge --classic
```

### 2. Bootstrap a Kubernetes Cluster
### 2. Bootstrap a Kubernetes cluster

Bootstrap a Kubernetes cluster with default configuration using:
The bootstrap command initialises your cluster and configures your host system
as a Kubernetes node. If you would like to bootstrap a Kubernetes cluster with
default configuration, run:

```
sudo k8s bootstrap
```

This command initialises your cluster and configures your host system
as a Kubernetes node.
For custom configurations, you can explore additional options using:

```
sudo k8s bootstrap --help
```

Bootstrapping the cluster can only be done once.

### 3. Check cluster status

To confirm the installation was successful and your node is ready you
@@ -44,26 +46,27 @@ should run:
sudo k8s status
```

It may take a few moments for the cluster to be ready. Confirm that {{product}}
has transitioned to the `cluster status ready` state by running:

```
sudo k8s status --wait-ready
```

Run the following command to list all the pods in the `kube-system`
namespace:

```
sudo k8s kubectl get pods -n kube-system
```

You will observe at least three pods running:
You will observe at least three pods running. The functions of these three pods
are:

- **CoreDNS**: Provides DNS resolution services.
- **Network operator**: Manages the life-cycle of the networking solution.
- **Network agent**: Facilitates network management.

Confirm that {{product}} has transitioned to the `k8s is ready` state
by running:

```
sudo k8s status --wait-ready
```

### 5. Access Kubernetes

The standard tool for deploying and managing workloads on Kubernetes
@@ -124,7 +127,7 @@ running:
sudo k8s kubectl get pods
```

### 8. Enable Local Storage
### 8. Enable local storage

In scenarios where you need to preserve application data beyond the
life-cycle of the pod, Kubernetes provides persistent volumes.
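As context for the steps that follow, a persistent volume is typically requested through a PersistentVolumeClaim manifest like this minimal sketch (the name and size are illustrative, not the tutorial's own manifest):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```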
@@ -166,7 +169,7 @@ You can inspect the storage-writer-pod with:
sudo k8s kubectl describe pod storage-writer-pod
```

### 9. Disable Local Storage
### 9. Disable local storage

Begin by removing the pod along with the persistent volume claim:

@@ -201,20 +204,20 @@ sudo snap remove k8s --purge

This option ensures complete removal of the snap and its associated data.

## Next Steps
## Next steps

- Learn more about {{product}} with kubectl: [How to use kubectl]
- Explore Kubernetes commands with our [Command Reference Guide]
- Learn how to set up a multi-node environment [Setting up a K8s cluster]
- Learn how to set up a multi-node environment by [Adding and Removing Nodes]
- Configure storage options: [Storage]
- Master Kubernetes networking concepts: [Networking]
- Discover how to enable and configure Ingress resources [Ingress]
- Discover how to enable and configure Ingress resources: [Ingress]

<!-- LINKS -->

[How to use kubectl]: kubectl
[Command Reference Guide]: ../reference/commands
[Setting up a K8s cluster]: add-remove-nodes
[Adding and Removing Nodes]: add-remove-nodes
[Storage]: ../howto/storage/index
[Networking]: ../howto/networking/index.md
[Ingress]: ../howto/networking/default-ingress.md
