Workbench Helm Chart: Kubernetes Cluster Setup

You will need the following resources:

  • Kubernetes Cluster (either single or multi-node) with kubectl and helm (v3) to talk to your cluster
  • At least one StorageClass / Volume Provisioner configured in your cluster
  • A valid wildcard TLS secret for your desired domain (e.g. *.mydomain.ndslabs.org) in your cluster
  • The NGINX Ingress controller installed in your cluster (pointed at your default TLS certificate)

Setup: Kubernetes Cluster

You will need a Kubernetes Cluster (either single or multi-node) with kubectl and helm (v3) to talk to the cluster.

Running locally (developer only)

Several options exist for running a Kubernetes cluster locally:

  • Kubernetes under Docker Desktop for macOS/Windows - you can enable Kubernetes in the Settings and install the helm client
  • minikube - you will need to set up a custom domain pointing to your minikube ip
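For example, minikube can tell you which IP address your custom domain should resolve to (the output below is illustrative):

$ minikube ip
192.168.49.2

You would then point your chosen wildcard domain (e.g. *.mydomain.ndslabs.org) at that address in your DNS.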

Remote VM Single-node

For a short 3-step process for setting this up on a remote VM (where minikube may be unavailable), check out our fork of Data8's kubeadm-bootstrap. This will install Kubernetes via kubeadm and configure it to run as a single-node cluster. It will also deploy the NGINX Ingress controller to the cluster, allowing you to skip the steps for deploying it manually (provided below).

Manually Scaling Up Additional Nodes

Once the script has finished running, it should output instructions for how to join more nodes to this cluster.

To manually add a node to the cluster, copy and paste the kubeadm join ... command from your first node's console onto another kubeadm-enabled VM, and kubeadm will set up that VM as a worker node in your cluster.
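The join command printed by kubeadm generally takes the following form (the address, token, and hash below are placeholders, not values from your cluster):

$ kubeadm join <CONTROL_PLANE_IP>:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH>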

Remote VM Multi-node via Terraform

For a more robust Workbench cluster that spans multiple OpenStack VMs, you can use our kubeadm-terraform plan to spin up a cluster of your desired size and scale.

StorageClass / Volume Provisioner

At least one StorageClass needs to be configured in your cluster as the default. GKE and AWS EKS will provide these by default, but OpenStack does not offer the same out of the box. If you have already chosen and configured a StorageClass that can provision PersistentVolumes (PVs) for you, then you can skip this step.

To check the StorageClasses in your cluster:

$ kubectl get storageclass
NAME                 PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
hostpath (default)   docker.io/hostpath   Delete          Immediate           false                  32d

If you don't see any listed with (default), you'll need to set one as the default by following these steps.
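For example, Kubernetes lets you mark an existing StorageClass as the default by annotating it (replace <storageclass-name> with a name from the listing above):

$ kubectl patch storageclass <storageclass-name> \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'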

If you do not see any StorageClass listed, then you'll need to set one up first (see below).

Setup: NFS Server Provisioner

While there are many volume provisioners available, we tend to use the NFS Server Provisioner to provision volumes for Workbench.

To use the NFS Server Provisioner in your cluster, run the following commands:

$ kubectl apply -f https://raw.githubusercontent.com/nds-org/kubeadm-terraform/master/assets/nfs/storageclass.yaml
$ kubectl apply -f https://raw.githubusercontent.com/nds-org/kubeadm-terraform/master/assets/nfs/rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/nds-org/kubeadm-terraform/master/assets/nfs/deployment.yaml

This will run the NFS Server Provisioner on all nodes that have a label matching external-storage=true. To use this provisioner, you will need to manually label at least one node with a persistent disk attached.

To apply this label to a node, run the following command:

$ kubectl label nodes NODENAME external-storage=true
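To verify that the label was applied, list the nodes matching that label selector:

$ kubectl get nodes -l external-storage=true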

Once the NFS provisioner comes online, it will create a PV for any PVC that is requested.

You can test that it's working by creating a test PVC:

$ kubectl apply -f https://raw.githubusercontent.com/nds-org/kubeadm-terraform/master/assets/nfs/test.pvc.yaml

This will create an empty test PersistentVolumeClaim (PVC) in your cluster, which the provisioner will detect and bind to a newly provisioned PersistentVolume. Once provisioned, kubectl get pvc should show the STATUS of the PVC change to Bound.
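For reference, a minimal claim along these lines might look like the sketch below (the actual contents of test.pvc.yaml may differ; the claim name, storageClassName, and requested size are illustrative assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc                # illustrative name only
spec:
  storageClassName: nfs         # assumption: the StorageClass created above
  accessModes:
    - ReadWriteMany             # NFS-backed volumes typically support shared read/write
  resources:
    requests:
      storage: 1Mi              # a tiny request is enough to exercise the provisioner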

Wildcard TLS Secret

You will need a valid wildcard TLS certificate for your chosen Workbench domain. If you have already created a valid wildcard TLS secret, skip this step.

If you already have valid certificate and private key files for your wildcard domain, then you can create a secret from them using the following command:

$ kubectl create secret tls --namespace=default ndslabs-tls \
    --cert=path/to/cert/file \
    --key=path/to/key/file
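You can confirm the secret was created and has the expected kubernetes.io/tls type:

$ kubectl get secret ndslabs-tls --namespace=default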

Once you have the secret, you will need to tell NGINX to use that as the default TLS certificate (see below).

(optional) Automatic Certificate Renewal via LetsEncrypt

You can also configure cert-manager to automatically renew your wildcard certs using a DNS-01 challenge.

NOTE: If your DNS provider does not allow for programmatic DNS updates (e.g. Google Domains), then you can register with ACMEDNS and use it to resolve DNS-01 challenges for you.
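As a rough sketch, a cert-manager ClusterIssuer for a DNS-01 challenge resolved via ACMEDNS might look something like the following (the issuer name, email, and secret references are assumptions for illustration; consult the cert-manager documentation for the exact fields for your DNS provider):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-wildcard            # illustrative name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@mydomain.ndslabs.org   # assumption: your contact email
    privateKeySecretRef:
      name: letsencrypt-wildcard-key    # secret where cert-manager stores the ACME account key
    solvers:
      - dns01:
          acmeDNS:
            host: https://auth.acme-dns.io   # assumption: your ACMEDNS server
            accountSecretRef:
              name: acme-dns                 # secret holding your ACMEDNS registration
              key: acmedns.json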

NGINX Ingress Controller

  • At least one Ingress controller installed in your cluster (preferably the NGINX Ingress controller)

To deploy the NGINX Ingress controller, use the following commands:

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm upgrade --install ingress ingress-nginx/ingress-nginx -n kube-system \
    --set controller.hostPort.enabled=true \
    --set controller.kind=Deployment \
    --set controller.extraArgs.default-ssl-certificate=default/ndslabs-tls
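To check that the controller pods came up (the label selector below assumes the chart's default labels):

$ kubectl get pods -n kube-system -l app.kubernetes.io/name=ingress-nginx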

Finally, if you already have an NGINX Ingress controller release that was installed without this flag, you will need to point it at the secret so it is used as the default TLS certificate across multiple namespaces:

$ helm upgrade <RELEASE_NAME> ingress-nginx/ingress-nginx \
    --namespace <RELEASE_NAMESPACE> \
    --reuse-values \
    --set controller.extraArgs.default-ssl-certificate=default/ndslabs-tls

All Done!

With all of the above in place, you should be ready to deploy the Workbench Helm chart to your cluster.