Local Persistent Storage Example
This document provides an example of using local persistent storage in conjunction with CDK on LXC with two worker nodes.
Reference this documentation for additional details.
First, find the node names for the two workers. Start by taking a look at the juju status output:
$ juju status
Unit                  Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*            active    idle   0        10.40.217.225                   Certificate Authority connected.
etcd/0*               active    idle   1        10.40.217.215   2379/tcp        Healthy with 1 known peer
kubernetes-master/0*  active    idle   2        10.40.217.77    6443/tcp        Kubernetes master running.
  flannel/0*          active    idle            10.40.217.77                    Flannel subnet 10.1.56.1/24
kubernetes-worker/0*  active    idle   4        10.40.217.151   80/tcp,443/tcp  Kubernetes worker running.
  flannel/1           active    idle            10.40.217.151                   Flannel subnet 10.1.44.1/24
kubernetes-worker/1   active    idle   5        10.40.217.149   80/tcp,443/tcp  Kubernetes worker running.
  flannel/2           active    idle            10.40.217.149                   Flannel subnet 10.1.50.1/24

Machine  State    DNS            Inst id        Series  AZ  Message
0        started  10.40.217.225  juju-58a94f-0  xenial      Running
1        started  10.40.217.215  juju-58a94f-1  xenial      Running
2        started  10.40.217.77   juju-58a94f-2  xenial      Running
3        started  10.40.217.162  juju-58a94f-3  xenial      Running
4        started  10.40.217.151  juju-58a94f-4  xenial      Running
5        started  10.40.217.149  juju-58a94f-5  xenial      Running
Note that the two kubernetes-worker units are running on machines 4 and 5. In the machine list, we can see that the LXC instance IDs of those two machines are juju-58a94f-4 and juju-58a94f-5. These will also be the names of the Kubernetes nodes.
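If you want to confirm this, the node names Kubernetes reports should match those instance IDs; a quick check, assuming you have kubectl access to the cluster, is to list the nodes and their hostname labels (the same label the node-affinity annotations below key on):
$ kubectl get nodes
$ kubectl get nodes --show-labels | grep kubernetes.io/hostname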
Create a storage volume for each kubernetes-worker node:
$ lxc storage volume create default vol1
$ lxc storage volume create default vol2
Attach them to the nodes. Note that we use the instance IDs here. You'll need to substitute your own instance IDs:
$ lxc storage volume attach default vol1 juju-58a94f-4 /mnt/disks/vol1
$ lxc storage volume attach default vol2 juju-58a94f-5 /mnt/disks/vol2
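Before involving Juju, you can also confirm the attachments from the LXD side. This is just a sanity check, and the exact output will depend on your LXD version and storage pool setup:
$ lxc storage volume list default
$ lxc config device show juju-58a94f-4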
You can check to see if your volumes are attached:
$ juju run --application kubernetes-worker ls /mnt/disks
- Stdout: |
    vol1
  UnitId: kubernetes-worker/0
- Stdout: |
    vol2
  UnitId: kubernetes-worker/1
We'll need to enable the PersistentLocalVolumes, MountPropagation, and VolumeScheduling feature gates for the API server, controller manager, scheduler, and kubelet:
$ juju config kubernetes-master api-extra-args="feature-gates=PersistentLocalVolumes=true,MountPropagation=true,VolumeScheduling=true"
$ juju config kubernetes-master controller-manager-extra-args="feature-gates=PersistentLocalVolumes=true,MountPropagation=true,VolumeScheduling=true"
$ juju config kubernetes-master scheduler-extra-args="feature-gates=PersistentLocalVolumes=true,MountPropagation=true,VolumeScheduling=true"
$ juju config kubernetes-worker kubelet-extra-args="feature-gates=PersistentLocalVolumes=true,MountPropagation=true,VolumeScheduling=true"
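Once the units settle, a rough way to confirm the flags were picked up is to look at the running process arguments on a master and a worker. The process names here assume the standard CDK services; adjust as needed for your deployment:
$ juju run --unit kubernetes-master/0 "pgrep -af kube-apiserver"
$ juju run --unit kubernetes-worker/0 "pgrep -af kubelet"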
Next, create a StorageClass object:
$ cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
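The class should now be listed, with the no-provisioner provisioner doing no dynamic work on its own; this is purely a sanity check:
$ kubectl get storageclass local-storage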
Then create a PersistentVolume object for each of our volumes. Note that the annotations section under metadata includes a values field that will need to be replaced with the node names you found above:
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["juju-58a94f-4"]
            }
          ]}
        ]}
      }'
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1
EOF
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-2
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["juju-58a94f-5"]
            }
          ]}
        ]}
      }'
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol2
EOF
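Both volumes should now show up as Available and remain unbound until a claim picks them up; another quick check:
$ kubectl get pv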
Finally, create a PVC for each PV:
$ cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-claim-1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
EOF
$ cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-claim-2
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
EOF
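Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, both claims will sit in Pending until a pod actually consumes them. As a final smoke test, you can run a throwaway pod against one of the claims; the pod name local-test and the busybox image below are only illustrative choices, not part of the original example:
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: local-test
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: local-vol
      mountPath: /data
  volumes:
  - name: local-vol
    persistentVolumeClaim:
      claimName: local-claim-1
EOF
Once the pod is running, kubectl get pvc should report local-claim-1 as Bound, and kubectl get pod local-test -o wide should show the pod scheduled onto the node that holds the matching volume.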