This document describes the steps necessary to set up an EdgeNet cluster of your own. It assumes Ubuntu 16.04 (or later) as the base operating system for the headnode and the worker nodes.
The EdgeNet Portal, which grants you access to nodes, must be set up separately; see https://github.com/EdgeNet-project/portal/
This step is identical for the headnode and worker nodes.
Note: Please see the latest Kubernetes setup instructions for the most up-to-date information.
# Run the following as root (or prefix each command with sudo).
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
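Optionally, since kubeadm, kubelet, and kubectl should move in lockstep with the cluster version, you can hold the packages at their installed versions so that unattended upgrades do not pull a node out of sync:
# Optional: pin the Kubernetes packages at their current versions.
apt-mark hold kubelet kubeadm kubectl kubernetes-cni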
The commands below bootstrap the cluster with the kubeadm cluster bootstrap tool, apply the configs that instantiate the flannel container networking fabric, and deploy the web-based Kubernetes Dashboard on the headnode.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# (if the headnode is behind NAT or a proxy, append --apiserver-advertise-address=<external IP here>)
# This private IPv4 address range is the one suggested by flannel, which is installed below.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
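At this point the headnode will typically report a NotReady status, because no pod network has been installed yet; you can confirm with:
kubectl get nodes
# STATUS should change to Ready once flannel (installed next) is running.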
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
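To verify that flannel and the Dashboard have come up, check that their pods reach the Running state (with this era of the manifests, both typically land in the kube-system namespace):
kubectl get pods --all-namespaces
# Look for the kube-flannel-ds-* and kubernetes-dashboard-* pods;
# all should eventually show a STATUS of Running.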
Finally, set up reboot recovery for the control plane, as per these instructions.
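The exact steps depend on your setup, but one common baseline (a minimal sketch, assuming systemd manages both services) is to make sure the container runtime and the kubelet start automatically at boot:
# Ensure docker and the kubelet come back up after a reboot.
sudo systemctl enable docker
sudo systemctl enable kubelet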
Run this on all Ubuntu worker nodes to add them to your cluster.
sudo kubeadm join --token <token> <IP of Head Node>:6443 --discovery-token-ca-cert-hash <cert hash from head>
(much of this is printed by the headnode at the end of its kubeadm init run, see above)
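If you no longer have the kubeadm init output at hand, recent versions of kubeadm can regenerate a complete join command on the headnode:
# Prints a fresh token together with the CA cert hash, ready to run on a worker.
sudo kubeadm token create --print-join-command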
NB: The node will be added under the name that matches its hostname on the master, so if that name does not resolve to a routable address, you need to change it: edit the kubelet config file with sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add the --node-name flag to the join command so that the name matches the routable address.
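For example, a join invocation with an explicit node name might look like the following (all placeholder values are yours to fill in):
sudo kubeadm join --token <token> <IP of Head Node>:6443 \
    --discovery-token-ca-cert-hash <cert hash from head> \
    --node-name <routable name or IP>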
At this point, you should (on the head node) be able to run:
$ kubectl get nodes
# The response looks like this:
NAME            STATUS    ROLES     AGE       VERSION
(headnode IP)   Ready     master    23m       v1.9.6
(Node-1 IP)     Ready     <none>    18m       v1.9.6
(Node-2 IP)     Ready     <none>    14m       v1.9.6
(Node-3 IP)     Ready     <none>    10m       v1.9.6
This is what a basic cluster looks like, with nothing yet running on it.
Next, we set up some users. First, we’ll create an admin user.
Create a file called admin.yml with the following contents to create a service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
Follow it with a file called admin-CRB.yml with the following contents. This binds the cluster-admin role to the service account from above.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Execute them as follows:
$ kubectl create -f ./admin.yml
$ kubectl create -f ./admin-CRB.yml
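You can confirm that both objects were created:
$ kubectl -n kube-system get serviceaccount admin-user
$ kubectl get clusterrolebinding admin-user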
Now we need to get the kubeconfig file for the administrator. Using the
make-config.sh script included in the git repository, run the following:
$ ./make-config.sh admin-user -n kube-system
Either pipe the output to a file, or copy and paste it, and save it for authentication later.
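As a quick sanity check, the new credentials should be able to list the nodes; here we assume, purely for illustration, that you saved the output as admin.conf:
$ kubectl --kubeconfig ./admin.conf get nodes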
Create new users as follows, using the files from the git repo
(WARNING: the steps below are untested):
$ kubectl create namespace (username)
$ sudo ./create-user (username)
$ ./make-config.sh default -n (username)
As before, when running the make-config.sh script, either save the output directly or create a file with its contents however you choose.
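To sanity-check a new user's config (assuming, for illustration only, that you saved it as (username).conf), try a request against that user's namespace; what it is allowed to see depends on the rights granted by the create-user script:
$ kubectl --kubeconfig ./(username).conf get pods -n (username)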