This is a series of hands-on exercises that will help you get started with Kubernetes (often shortened to 'k8s') and the NGINX Ingress Controller.
Of course, where there is a challenge, there will be a winner (I did not mention anything about a prize, though)! Please go and create a user account on Capture the K8S Flag and start playing.
Don't worry, you are not alone:
- cheat-sheet: https://github.com/fchmainy/k8s-trainings-101/raw/main/doc/k8s-101-cheatsheet.pdf
- flow diagram: https://github.com/fchmainy/k8s-trainings-101/raw/main/doc/tshoot%20k8s%20pod%20deployment.pdf
For every lab you will find a series of questions that will reward you with points for the correct answer. You can ask for hints, but this will cost you points!
Table of Contents:
Lab0 - Get familiar with Docker Engine and build your first application
Lab1 - Get familiar with your k8s Cluster
Lab2 - Deploy your first application
Lab3 - Make your application accessible from the outside
Lab4 - Publish your application with Ingress
Optional Lab5 - Deploy a new version of the app and manage versioning with Ingress
Optional Lab6 - East-West or Microservice-to-Microservice traffic
Please install on student machines:
- Docker desktop
- Helm
- Git CLI
For Docker Desktop, be sure to enable Kubernetes in the Docker Desktop preferences.
In case a student can't install the prerequisites, there is a UDF Blueprint: https://udf.f5.com/b/8c967d89-dcb3-4788-b41c-1e6a066d3ad5#documentation
In this section, we will learn the 3 most important Docker commands needed to build your container image and push it to your private repository. The repository can be a local registry running on your laptop, a github.com registry (public or private), a gitlab.com registry, or any other registry that can be accessed from the lab environment.
docker login [OPTIONS] [SERVER]
docker build
docker push
basic git commands
1. Prepare your Gitlab. If you don't have a gitlab.com (free) account, please create one (if you have a github.com account already, you can use that to login to gitlab.com). We will use gitlab.com as a Source Code Management tool, but mostly, as a private container registry. Once your account is created:
- create a new project
- create a new Deploy Token username and password (the screenshots below show the steps). Keep the credentials safe, because we will use them in the remaining labs:
- go to container registry (Package & Registry > Container Registry). You should find a button that outputs the CLI Commands that allow docker to login to your registry, along with the two needed commands to build and push your container image into your registry. We will use these commands shortly. For example, your login command will look something like this:
docker login registry.gitlab.com -u yourDeployTokenUsername
- Now, let's download the application code, build the container image, and push the new image to your repository. Make sure you modify the two docker command examples provided below to reference your repo, and to append the correct tag and version at the end. GitLab generated these commands for you in the previous step.
git clone https://github.com/fchmainy/k8s-trainings-101.git
cd k8s-trainings-101/v1/
docker build -t registry.gitlab.com/YourUser/YourRepo/webapp:v1 .
docker push registry.gitlab.com/YourUser/YourRepo/webapp:v1
- Finally, verify that the webapp container image is in your registry. Be sure to check not only the image name but also the image tag. If you have not tagged the image correctly, you will not see "v1" but nothing, or "latest". In that case, double-check how you tagged the image and correct the issue.
The goal of this lab is simply to gain an understanding of the main components of a k8s cluster, such as node types, basic networking, and the meaning of and relationship between Services, Endpoints and Pods. Use the following commands to output information about your k8s cluster.
kubectl cluster-info
kubectl get nodes
kubectl get namespaces
kubectl get service -n *namespace*
kubectl get endpoints -n *namespace*
kubectl get pods -n *namespace*
Just poke around using the commands above to understand how the various constructs and components relate to each other.
⚠️ Don't forget to check CTFD to see if there are any challenges or questions for this section.
The goal of this lab is to create a namespace (check the k8s documentation for more details on namespaces), store your credentials safely, and deploy your application container image into Kubernetes.
kubectl create ns
kubectl create secret
kubectl apply
1. Modify the YAML Template. We have prepared an example YAML template file for the Kubernetes deployment manifest, in order to help you create the service and the deployment. Please remember to modify the example so that it matches your requirements.
2. Deploy your App. Now you can prepare for, and deploy your application into, your Kubernetes cluster:
- create a namespace called frontns
- create a docker-registry Kubernetes secret for registry.gitlab.com using your deploy tokens.
- deploy the v1_webapp_k8s_manifest.yaml in your frontns namespace (verify the manifest file content so it matches your environment).
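The steps above can be sketched as follows. This is a minimal outline, not the full answer: the secret name `regcred` matches the one referenced later in the Helm install, but your token values and manifest path will differ.

```shell
# Create the namespace for the frontend application
kubectl create ns frontns

# Store your GitLab deploy token as an image-pull secret
# (replace the placeholder username/password with your own tokens)
kubectl create secret docker-registry regcred \
  --docker-server=registry.gitlab.com \
  --docker-username=<yourDeployTokenUsername> \
  --docker-password=<yourDeployTokenPassword> \
  -n frontns

# Deploy the application manifest into the namespace
kubectl apply -f v1_webapp_k8s_manifest.yaml -n frontns
```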
⚠️ Don't forget to check CTFD to see if there are any challenges or questions for this section.
The goal of this lab is to:
- Understand how your application works
- Understand how to see the application output, while it is still isolated inside Kubernetes.
- Expose your application to the outside.
- Find the instructor container registry Deploy Token Username and Password, to gain access to the NGINX image, stored in the instructor private registry.
- Deploy the NGINX Ingress Controller and create the Ingress Resource.
kubectl port-forward
1. Understand how your application works:
❯ kubectl get svc -n frontns -l tier=front -l version=v1
NAME     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
webapp   ClusterIP   10.1.120.33   <none>        80/TCP    11m

❯ kubectl describe svc webapp -n frontns
Name:              webapp
Namespace:         frontns
Labels:            tier=front
                   version=v1
Annotations:       <none>
Selector:          app=webapp,version=v1
Type:              ClusterIP
IP:                10.1.120.33
Port:              80/TCP
TargetPort:        80/TCP
Endpoints:         10.1.96.49:80
Session Affinity:  None
Events:            <none>

❯ kubectl get ep -n frontns
NAME     ENDPOINTS       AGE
webapp   10.1.96.49:80   13m

❯ kubectl get pods -n frontns -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
webapp-7dd5ff6788-t8xdt   1/1     Running   0          11m   10.1.96.49   vmss000000   <none>           <none>
2. Debug your App. Even though your application is isolated within your Kubernetes cluster, without any external access configured, you can still check whether the application is working by running a debug networking pod (praqma/network-multitool). This gives you a 'jumphost' container inside Kubernetes:
kubectl create ns debug
kubectl run multitool --image=praqma/network-multitool -n debug
kubectl exec -it multitool -n debug -- sh

bash-5.0# curl webapp.frontns -v
*   Trying 10.1.120.33:80...
* Connected to webapp.frontns (10.1.120.33) port 80 (#0)
3. Expose your App. The next step is to expose your application to the outside world. There are many ways to do this, first and foremost the port-forward, which is mainly used for troubleshooting since it is not permanent. Port forwarding can be applied to a Service, a Deployment, or a Pod; it really depends on what you want to debug. In this example, we will expose the Service (remember to check for corresponding flags on CTFD):
- create a port-forward to your **webapp deployment** redirecting TCP port 5000 to TCP/80.
- curl http://127.0.0.1:5000
- You should now be able to access your webapp (v1) web page:
- You will find, on the presented V1 web page, the instructor gitlab Deploy Token Username and Password. Keep these credentials safe, you will need them in the next lab.
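A sketch of the port-forward step, assuming the deployment lives in the frontns namespace; the exact deployment name is not given here, so look it up first:

```shell
# Find the deployment name in the frontns namespace
kubectl get deploy -n frontns

# Forward local port 5000 to port 80 of the deployment's pods
# (<your-deployment-name> is a placeholder for the name found above)
kubectl port-forward deployment/<your-deployment-name> 5000:80 -n frontns

# In a second terminal, request the page through the tunnel
curl http://127.0.0.1:5000
```

The port-forward runs in the foreground and stops when you press Ctrl-C, which is why it is only suitable for debugging.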
Note: You can also expose your application using the kubectl expose command and access it via a NodePort. This is a permanent change (until you explicitly remove it), so we won't use it here, as we prefer using an Ingress service. Expose is presented here: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/
⚠️ Don't forget to check CTFD to see if there are any challenges or questions for this section.
The goal of this lab is to create an Ingress Resource to access the webapp (v1) application. Before we can do this, we need to install NGINX Ingress Controller into our Kubernetes cluster.
In the real world, the testing, validation, building and release of an application should be automated as part of a CI/CD pipeline. We are going to manually step through part of this process, to understand the advanced routing capabilities of our NGINX Ingress services in delivering applications that are deployed into Kubernetes.
⚠️ Don't forget to check CTFD to see if there are any challenges or questions for this section.
The most common method used to access Kubernetes services from the outside world is to deploy an Ingress resource. To do this, we first need to install an Ingress Controller. There are several options for Ingress Controllers, but we will use NGINX for this lab.
1. Preparation. Using the instructor private registry deployment token username and password:
- create a namespace called ingress
- create a docker-registry secret in the ingress namespace using the instructor deploy username/password tokens.
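These two preparation steps can be sketched as below. The secret name `regcred` matches the `imagePullSecretName` used in the Helm install that follows; the token values are placeholders for the instructor credentials you recovered in the previous lab.

```shell
# Namespace that will host the NGINX Ingress Controller
kubectl create ns ingress

# Image-pull secret for the instructor's private registry
kubectl create secret docker-registry regcred \
  --docker-server=registry.gitlab.com \
  --docker-username=<instructorDeployTokenUsername> \
  --docker-password=<instructorDeployTokenPassword> \
  -n ingress
```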
2. Get the Helm Chart for NGINX. There are multiple ways we can install the NGINX Kubernetes Ingress Controller:
- In this lab we will use the Helm deployment method, as it is the simplest way to install all the components (Service Accounts, CRDs, etc.) required for a complex deployment. Please type the following commands to install the latest NGINX Ingress Helm chart.
helm repo list
Error: no repositories to show

helm repo add nginx-stable https://helm.nginx.com/stable
"nginx-stable" has been added to your repositories

helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nginx-stable" chart repository
Update Complete. ⎈Happy Helming!⎈

helm repo list
NAME            URL
nginx-stable    https://helm.nginx.com/stable
3. Deploy NGINX Ingress Controller using Helm. Now, we are going to deploy the NGINX Plus Ingress and all of the required components in a single, multi-line command:
helm install nginx-ingress nginx-stable/nginx-ingress \
  --namespace ingress \
  --set controller.kind=deployment \
  --set controller.replicaCount=2 \
  --set controller.nginxplus=true \
  --set controller.appprotect.enable=true \
  --set controller.image.repository=registry.gitlab.com/f.chmainy/nginx \
  --set controller.image.tag=v1.10.0 \
  --set controller.service.type=NodePort \
  --set controller.service.httpPort.nodePort=30274 \
  --set controller.service.httpsPort.nodePort=30275 \
  --set controller.serviceAccount.imagePullSecretName=regcred \
  --set controller.ingressClass=ingressclass1
4. Create Ingress Resource. Now that you have installed the Ingress Controller into the ingress namespace, you can move on to deploying the Ingress Resource to access your application. Ingress Resources must be deployed into the application namespaces. For reference, there is a great example on the official NGINX Inc GitHub repository.
- In the web application namespace (frontns), you should now deploy the Ingress Resource matching the Ingress Class specified when you deployed the Ingress Controller:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: k8s101ingress
spec:
  ingressClassName: ingressclass1
  host: www.mycompany.com
  upstreams:
  - name: v1
    service: webapp
    port: 80
  routes:
  - path: /
    action:
      proxy:
        upstream: v1
- You should now be able to access your v1 application using your web browser at http://www.mycompany.com:30274
⚠️ Ingress is a 'shared' model, and it is therefore expected to provide access to many applications. The Ingress Resource 'spec' you deployed instructs NGINX to use the Host header 'www.mycompany.com' to identify requests for your application requests and steers them to your webapp containers. This means that any request to the Ingress must include the correct Host Header.
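If you are accessing the Ingress from a machine without a matching hosts-file entry, you can supply the Host header explicitly. A sketch, where `<node-ip>` is a placeholder for the address of any cluster node:

```shell
# The -H flag sets the Host header so NGINX can route the
# request to the correct application behind the shared Ingress
curl -H "Host: www.mycompany.com" http://<node-ip>:30274/

# Without the header, NGINX has no matching virtual server
# and will typically return a 404
curl http://<node-ip>:30274/
```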
Note: if you are using the UDF blueprint, the www.mycompany.com:30274 fqdn should already be registered in the jumpHost hosts file.
The goal of this lab is to deploy version 2 (v2) of the web application, and then configure Ingress to provide access to it. You will use two different deployment strategies:
- Canary Release
- A/B Testing
There are multiple deployment strategies to choose from when releasing a new application version:
- A/B Testing
- Canary Testing
- Blue/Green
They are all variations on the same theme: test the new version of the application with a subset of users, and continue to send all other users to the original version. The aim is to minimise the risk of testing the new version by exposing only a small number of users to it.
The decision on which strategy to employ comes down to what you want to achieve and who/what is testing the application. For reference, you can find some external documentation on this subject here: https://docs.flagger.app/usage/deployment-strategies
1. Build and deploy the version 2 (v2) of your application. A new version of your application has been developed and is ready to be released.
- First, you must build and push the v2 front container image into your private container image registry.
cd ../v2/front/
docker build -t registry.gitlab.com/f.chmainy/toremove/webapp:v2 .
docker push registry.gitlab.com/f.chmainy/toremove/webapp:v2
- Then, apply the application's Kubernetes manifest to deploy the v2 service and deployment.
kubectl apply -f v2/front/v2_webapp_k8s_manifest.yaml -n frontns
service/webappi-v2-svc configured
deployment.apps/webapp-v2-dep configured
Note: In a large-scale cluster, you probably won't have a clear mapping between service names, deployments, endpoints and pods; this is why labels are very useful:
kubectl get svc --all-namespaces -l application=k8s101,version=v1,tier=front
NAMESPACE   NAME     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
frontns     webapp   ClusterIP   10.103.125.244   <none>        80/TCP    53m

kubectl get svc --all-namespaces -l application=k8s101,version=v2,tier=front
NAMESPACE   NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
frontns     webappi-v2-svc   ClusterIP   10.110.131.55   <none>        80/TCP    56m
2. Configure Ingress with A/B Testing. Here, we want to split a percentage of the traffic to the new version so we can validate and measure the proper functioning of the new version without impacting too many customers if there are any issues with the code.
- Here we send 80% of traffic to v1 and 20% to v2; in real life, the split would be progressively shifted toward v2 until final approval. Deploy an updated A/B Ingress Resource:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: k8s101ingress
spec:
  ingressClassName: ingressclass1
  host: www.mycompany.com
  upstreams:
  - name: v1
    service: webapp
    port: 80
  - name: v2
    service: webapp-v2-svc
    port: 80
  routes:
  - path: /
    splits:
    - weight: 80
      action:
        pass: v1
    - weight: 20
      action:
        pass: v2
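One way to observe the split is to send a batch of requests and see which version answers. A sketch, assuming the page body contains a version string you can grep for and `<node-ip>` is a placeholder for a cluster node address:

```shell
# Send 10 requests; roughly 8 should be answered by v1 and 2 by v2
for i in $(seq 1 10); do
  curl -s -H "Host: www.mycompany.com" http://<node-ip>:30274/ \
    | grep -i version
done
```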
3. Configure Ingress with Canary testing. In this scenario, we are only steering specific key users (dev, test users, for example) to the new version of the application, by detecting the presence of a specific header or cookie. Deploy an updated Canary Ingress Resource:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: k8s101ingress
spec:
  ingressClassName: ingressclass1
  host: www.mycompany.com
  upstreams:
  - name: v1
    service: webapp
    port: 80
  - name: v2
    service: webapp-v2-svc
    port: 80
  routes:
  - path: /
    matches:
    - conditions:
      - cookie: flag6
        value: COOKIE_VALUE6
      action:
        pass: v2
    action:
      pass: v1
- Using Chrome's Developer Tools console, you can inject the required cookie:
document.cookie="flag6=COOKIE_VALUE6; expires=Mon, 2 Aug 2021 20:20:20 UTC; path=/";
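You can also test the canary match from the command line instead of the browser. A sketch, with `<node-ip>` as a placeholder for a cluster node address:

```shell
# Without the cookie, the default action routes the request to v1
curl -s -H "Host: www.mycompany.com" http://<node-ip>:30274/

# With the cookie, the match condition routes the request to v2
curl -s -H "Host: www.mycompany.com" \
  --cookie "flag6=COOKIE_VALUE6" \
  http://<node-ip>:30274/
```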
- Now we can redirect all Ingress traffic to the v2 frontend, remove the v1 webapp Ingress rules, and remove the v1 application:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: k8s101ingress
spec:
  ingressClassName: ingressclass1
  host: www.mycompany.com
  upstreams:
  - name: v2
    service: webapp-v2-svc
    port: 80
  routes:
  - path: /
    action:
      proxy:
        upstream: v2
Note: For reference, there are many examples of advanced routing here: https://github.com/nginxinc/kubernetes-ingress/tree/master/examples-of-custom-resources
The goal of this lab is to deploy the backend service, and then access your application to capture the flag!!!
1. Deploy the Back-End. The backend is a very basic JSON RESTful API service that delivers a UUID based on a cookie provided by the frontend.
- create a new namespace called backendns where the backend pod will reside.
- build the container image from the provided Dockerfile and push it to your private container registry.
- deploy the new application service (service + deployment) to the backendns namespace.
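The three steps above can be sketched as follows. The paths, image name, and manifest filename are placeholders: the lab does not spell them out, so locate the backend Dockerfile and manifest in the repository yourself.

```shell
# Namespace for the backend pod
kubectl create ns backendns

# Build and push the backend image from its Dockerfile directory
# (<backend-dir>, image path, and manifest name are placeholders)
cd <backend-dir>
docker build -t registry.gitlab.com/YourUser/YourRepo/api:v2 .
docker push registry.gitlab.com/YourUser/YourRepo/api:v2

# Deploy the backend service + deployment manifest
kubectl apply -f <backend_k8s_manifest>.yaml -n backendns
```

Remember that backendns will also need an image-pull secret for your registry, as in the earlier labs.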
2. Check the application. Access the v2 application again, this time inserting a cookie in your web browser:
document.cookie="flag6=COOKIE_VALUE8; expires=Mon, 2 Aug 2021 20:20:20 UTC; path=/";
You will find the CTF flag in the response page.
...then you win!!!