Tested on a set of AWS VMs. Also tested on bare-metal machines in an internal network at Carnegie Mellon University.
Top-level script, executed with root privilege. It takes the following arguments:
- the SSH private key for the nodes in the cluster
- the user name for the SSH login
- a list of node hostnames/IPs that form the k8s cluster, with the first node as the master node
For example, ./k8s-setup.sh myPrivateKey.pem ubuntu ip1 ip2 ...
The script runs the following script sequentially on each node specified in the arguments to configure the node into the k8s cluster.
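A minimal sketch of what that top-level loop could look like (the argument handling and remote invocation below are illustrative assumptions, not the actual contents of k8s-setup.sh):

```bash
#!/bin/bash
# Illustrative sketch only: copy the per-node script to each node and run it
# there over SSH, visiting the master node (the first node argument) first.
set -e

key="$1"; user="$2"; shift 2
nodes=("$@")                       # first entry is the master node

for node in "${nodes[@]}"; do
  scp -i "$key" k8s-node-setup.sh k8s-local-setup.sh "$user@$node:/tmp/"
  # the per-node script needs root privilege on the remote machine
  ssh -i "$key" "$user@$node" "sudo bash /tmp/k8s-node-setup.sh"
done
```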
Executed on each node, master or worker, to configure the node into the k8s cluster. The script can also be executed in parallel on the nodes if there is a shared file system through which the nodes can access the nodeJoinFile. That file is generated by this script on the master node when it initializes the cluster and is read by this script running on the worker nodes, thereby enforcing that the master node is configured before the worker nodes.
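The ordering can be enforced with a simple wait on the shared file; a hedged sketch of the idea (the isMaster flag and the nodeJoinFile path are illustrative, not necessarily what the script uses):

```bash
# Illustrative sketch of master/worker ordering via a shared join file.
nodeJoinFile=/shared/nodeJoinFile          # path on the shared file system (assumed)

if [ "$isMaster" = "true" ]; then
  # initialize the cluster, then publish the join command for the workers
  kubeadm init --pod-network-cidr=10.244.0.0/16
  kubeadm token create --print-join-command > "$nodeJoinFile"
else
  # workers block until the master has written the join command
  while [ ! -s "$nodeJoinFile" ]; do sleep 5; done
  bash -c "$(cat "$nodeJoinFile")"
fi
```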
A template for the k8s-local-setup.sh which is invoked by k8s-node-setup.sh (see above) to perform configuration specific to the local environment, such as network and file locations. It contains pseudo-code to remind the developer of those concerns. The developer writes his/her own script and names it k8s-local-setup.sh so that it can be executed by k8s-node-setup.sh.
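As an illustration, a developer-written k8s-local-setup.sh might do little more than export environment-specific values; the variable names and their placement below are assumptions for illustration only, the template defines which names k8s-node-setup.sh actually expects:

```bash
#!/bin/bash
# Illustrative k8s-local-setup.sh: values specific to one local environment.
# Variable names here are examples only; consult the template for the real ones.
export nodeJoinFile=/mnt/shared/k8s/nodeJoinFile   # shared path for the join command
export podNetworkCidr=10.244.0.0/16                # pod network for the CNI plugin
export apiServerAdvertiseAddress=192.168.1.10      # master node's reachable IP
export useIngressController=true                   # ingress-nginx vs. MetalLB
```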
An application deployed to a k8s cluster may be used by other applications in the same cluster; in that case its in-cluster ip:port suffices. But it may also need to be accessible from outside the cluster, in which case an ingress controller or a load balancer must be configured. The scripts here configure either an ingress controller or a load balancer, depending on the value of the variable useIngressController.
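Within the setup scripts this choice can reduce to a single conditional; a sketch in which the deploy_* function names are placeholders:

```bash
# Sketch of how the choice might be wired; the deploy_* functions are placeholders.
if [ "$useIngressController" = "true" ]; then
  deploy_ingress_nginx     # expose applications through ingress-nginx
else
  deploy_metallb           # expose applications through MetalLB LoadBalancer services
fi
```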
Generally speaking, an ingress controller affords more flexibility than a load balancer. But we have found that the out-of-the-box version of the most commonly used ingress controller, the ingress-nginx controller, does not work on older hardware lacking crypto instructions. In that case the MetalLB load balancer can be configured instead.
The ingress-nginx controller is deployed as a DaemonSet with the hostNetwork option. That means the applications can be reached via the public IP of ANY worker node in the cluster. According to the documentation, the master node could also be configured into the DaemonSet; that would make the master node an entry point, too.
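Turning the stock ingress-nginx manifest (which ships a Deployment) into a hostNetwork DaemonSet is typically done by editing the downloaded YAML before applying it; a hedged sketch, where the file name and exact field locations depend on the ingress-nginx release:

```bash
# Sketch: convert the controller to a DaemonSet on the host network before applying.
# File name is a placeholder; exact field locations vary by ingress-nginx release.
sed -i 's/kind: Deployment/kind: DaemonSet/' ingress-nginx-deploy.yaml
# In the controller pod spec, also set (indented to match the surrounding fields):
#   spec:
#     template:
#       spec:
#         hostNetwork: true
kubectl apply -f ingress-nginx-deploy.yaml
```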
The user can run the script demoIngresscontroller.sh to deploy two web applications. Scroll to the end of the script to see how to verify the accessibility of the applications.
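Verification typically amounts to curl requests against a worker node's public IP with the Host header of each demo application; the hostnames and IP below are placeholders (the real values appear at the end of demoIngresscontroller.sh):

```bash
# Placeholder host names and IP; the real values are in demoIngresscontroller.sh.
NODE_IP=203.0.113.10                      # public IP of any worker node
curl -H "Host: demo-app-1.example.com" "http://$NODE_IP/"
curl -H "Host: demo-app-2.example.com" "http://$NODE_IP/"
```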
The MetalLB load balancer is configured to use the master node's IP as the entry point. The code in k8s-node-setup.sh can easily be modified to configure all nodes, master or worker, as entry points. Because only the master node's IP address is available to the applications, they must share that IP address on different ports; see ip address sharing.
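MetalLB learns which address(es) it may assign from its configuration; a sketch that restricts the pool to the master node's IP, using the layer-2 ConfigMap format of older MetalLB releases (newer releases use IPAddressPool and L2Advertisement custom resources instead; the IP is an example):

```bash
# Sketch: let MetalLB hand out only the master node's IP to LoadBalancer services.
# Uses the ConfigMap format of older MetalLB releases; the address is an example.
MASTER_IP=192.168.1.10
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - ${MASTER_IP}/32
EOF
```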
demoLoadBalancer.sh shows how to use the metallb.universe.tf/allow-shared-ip annotation for ip address sharing. Scroll to the end of the script to see how to verify the accessibility of the applications.
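The mechanism: two LoadBalancer Services carrying the same metallb.universe.tf/allow-shared-ip annotation value may be given the same external IP, as long as they listen on different ports. A sketch with illustrative service names and ports (the actual ones are in demoLoadBalancer.sh):

```bash
# Sketch: two LoadBalancer services sharing one external IP on different ports.
# Service names, selectors, and ports are illustrative.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: demo-app-1
  annotations:
    metallb.universe.tf/allow-shared-ip: "shared-key"
spec:
  type: LoadBalancer
  selector:
    app: demo-app-1
  ports:
  - port: 8081
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app-2
  annotations:
    metallb.universe.tf/allow-shared-ip: "shared-key"
spec:
  type: LoadBalancer
  selector:
    app: demo-app-2
  ports:
  - port: 8082
    targetPort: 80
EOF
```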
The scripts here download numerous packages and configMap files, and modify some of them during the course of configuration. The exact versions of the critical files used are downloaded into the downloads directory, and the README.md there lists their URLs and download dates.
Much of the credit belongs to the official Kubernetes documentation and the online community's resources. It is impossible to list all the web pages consulted, but where code is borrowed verbatim, the source is acknowledged in the comments near the borrowed code in the scripts.
For fast development, the security rules on the AWS VMs are set to be open to all.