This lab will walk you through creating a basic microk8s cluster on Ubuntu 20.04, configuring a dynamic NFS storage provider, and creating and exposing a static web server.
- At least one Kubernetes node running Ubuntu 20.04 LTS, fully updated (sudo apt update && sudo apt upgrade)
- An NFS server and export configured to use as storage for your Kubernetes pods, preferably on a separate machine. There are many guides on the internet for this, including this one: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-20-04 (an example export line is shown below)
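For reference, the export on the NFS server might look like the following line in /etc/exports; the path and subnet here are placeholders, so substitute the values for your environment:
# /etc/exports on the NFS server (path and subnet are illustrative)
/srv/nfs/kubedata    192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
After editing /etc/exports, reload the export table with sudo exportfs -ra.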
On ONE of your cluster members, perform the following:
- Configure name resolution so each cluster member can reach the others by name: edit the hosts file to include the name and IP of each cluster member (example entries are shown below)
sudo nano /etc/hosts
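Example /etc/hosts entries; the addresses and hostnames below are illustrative, so use the real addresses and names of your nodes:
# cluster members (illustrative)
192.168.1.10    k8s-node1
192.168.1.11    k8s-node2
192.168.1.12    k8s-node3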
- Install the NFS client utilities:
sudo apt install nfs-common
- Install microk8s on the first cluster member:
sudo snap install microk8s --classic
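If you want to wait until microk8s reports itself ready before continuing, you can run:
microk8s status --wait-ready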
- Add your user to the microk8s group so you don't have to use sudo for every command, and take ownership of the kubectl configuration directory:
sudo usermod -a -G microk8s <YOUR USER NAME>
sudo chown -f -R <YOUR USER NAME> ~/.kube
- Make your life easier by aliasing the "microk8s kubectl" and "microk8s helm3" commands to the more standard "kubectl" and "helm":
nano ~/.bashrc
Add the following two lines to the bottom of the file:
alias kubectl='microk8s kubectl'
alias helm='microk8s helm3'
- Log out or disconnect your SSH session and log back in to load the aliases and group membership you just configured.
See https://microk8s.io/docs/getting-started for more information
- Repeat the basic node setup steps on the node you wish to join to the cluster
- On the first node you configured (we will call this the "master node"), enter the following:
microk8s add-node
Copy the join command from the output (an example is shown below)
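The output will include a join command roughly of this form; the IP address and token below are placeholders, not real values:
microk8s join 192.168.1.10:25000/<TOKEN>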
- Run the command from the above step on the node you wish to join to the cluster
- Verify that the node shows up from the master node:
kubectl get nodes
Note: You can repeat these steps for as many nodes as you would like to join
Helm is like apt or yum, but for Kubernetes: it makes your life easier, but it's good to understand what is happening under the covers as well.
- From the master node, enable helm:
microk8s enable helm3
This will allow pods (containers or groups of containers) to move between the nodes in your cluster seamlessly while keeping access to the same storage.
On the master node:
- Use helm to easily configure the NFS persistent storage and set it as the default storage class for pods to use (replace /exported/path with the path of your NFS export):
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=<YOUR NFS SERVER IP> \
  --set nfs.path=/exported/path \
  --set storageClass.defaultClass=true
- Verify that the 'nfs-client' storage class shows up:
kubectl get storageclass
- Clone the lab repository:
git clone https://github.com/terratrax/k8s-bootcamp
- Change directory into the yaml folder:
cd k8s-bootcamp/yaml
- Inspect the pvc-helloworld.yaml file to see that it creates a persistent volume claim, which in turn will be mapped to a new folder on your NFS server (a sketch of such a claim follows)
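A claim of this kind typically looks like the sketch below; the name and size are illustrative and may differ from what is in pvc-helloworld.yaml, and the storage class assumes the 'nfs-client' class created earlier:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: helloworld-pvc          # illustrative name; check the file for the real one
spec:
  storageClassName: nfs-client  # the class created by the provisioner chart
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi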
- Create the volume claim:
kubectl apply -f pvc-helloworld.yaml
- Take note of the folder created on your NFS server (the provisioner typically names it after the namespace, claim name, and volume name)
- Inspect the deployment-helloworld.yaml file to see that it creates a container running the nginx web server and mounts the NFS volume to the document root (a sketch of such a deployment follows)
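A deployment of this kind typically looks like the sketch below; the names are illustrative and the actual deployment-helloworld.yaml may differ, but the key parts are the nginx image and the volume mount at the document root backed by the PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld              # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: html
              mountPath: /usr/share/nginx/html   # nginx document root
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: helloworld-pvc            # must match the PVC created above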
- Apply the helloworld deployment:
kubectl apply -f deployment-helloworld.yaml
- Inspect the service-helloworld.yaml file to see that it maps port 80 on the container to port 31080 on the nodes. Kubernetes will expose port 31080 on ANY node and forward the traffic to port 80 on the pod(s) running the application, regardless of which node they are on (a sketch of such a service follows).
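A NodePort service of this kind typically looks like the sketch below; the names are illustrative and the actual service-helloworld.yaml may differ:
apiVersion: v1
kind: Service
metadata:
  name: helloworld            # illustrative name
spec:
  type: NodePort
  selector:
    app: helloworld           # must match the pod labels in the deployment
  ports:
    - port: 80                # service port
      targetPort: 80          # container port
      nodePort: 31080         # exposed on every node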
- Apply the service:
kubectl apply -f service-helloworld.yaml
- View the services running on your cluster:
kubectl get svc
- Note the mapping between port 80 on your container and port 31080 on the node (virtual machine)
- On your NFS server, add static content to the root of the folder the NFS provisioner created. A file called "index.html" containing "Hello world" will suffice (an example is shown below). Alternatively, you can create the content from within the container (see further down).
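For example, on the NFS server (the path below is a placeholder; use your export path and the folder the provisioner actually created):
echo "Hello world" | sudo tee /exported/path/<PROVISIONED FOLDER>/index.html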
- From your local network, browse to http://<ANY NODE IP>:31080
Note: Regardless of what node (VM) the container is running on, it is reachable from all nodes on port 31080
- Drop into a bash shell on the container:
Find the name of the running pod with
kubectl get pods
Drop into a bash shell with
kubectl exec -it <POD NAME> -- bash
- Create a basic static web page:
echo "HELLO WORLD" > /usr/share/nginx/html/index.html
Note: You'll see the file just created on your NFS server!
- Scale the application to more than one pod by increasing the spec.replicas value in the deployment-helloworld.yaml file and re-applying it with kubectl apply -f deployment-helloworld.yaml (or use the scale command shown below).
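Alternatively, you can scale without editing the file. The command below assumes the deployment is named helloworld; confirm the actual name with kubectl get deployments:
kubectl scale deployment helloworld --replicas=3
kubectl get pods -o wide   # watch the new pods spread across your nodes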