diff --git a/docs/src/charm/howto/ceph-csi.md b/docs/src/charm/howto/ceph-csi.md
new file mode 100755
index 000000000..083748c57
--- /dev/null
+++ b/docs/src/charm/howto/ceph-csi.md
@@ -0,0 +1,113 @@
+# ceph-csi
+
+[Ceph] can be used to hold Kubernetes persistent volumes and is the recommended
+storage solution for Canonical Kubernetes.
+
+The ``ceph-csi`` plugin automatically provisions and attaches Ceph volumes
+to Kubernetes workloads.
+
+Follow this guide to learn how Canonical Kubernetes can be integrated with Ceph
+through Juju.
+
+## Prerequisites
+
+This guide assumes that you have an existing Canonical Kubernetes
+cluster. See the [charm installation] guide for more details.
+
+## Deploying Ceph
+
+We'll deploy a Ceph cluster containing one monitor and three storage units
+(OSDs). For the purposes of this demo, we'll allocate a limited amount of
+resources.
+
+```
+juju deploy -n 1 ceph-mon \
+    --constraints "cores=2 mem=4G root-disk=16G" \
+    --config monitor-count=1
+juju deploy -n 3 ceph-osd \
+    --constraints "cores=2 mem=4G root-disk=16G" \
+    --storage osd-devices=1G,1 --storage osd-journals=1G,1
+juju integrate ceph-osd:mon ceph-mon:osd
+```
+
+If Juju is configured to use the localhost/LXD cloud, add the
+``virt-type=virtual-machine`` constraint to the ceph-osd and k8s units
+(an example is included at the end of this guide).
+
+Once the units are ready, deploy ``ceph-csi`` like so:
+
+```
+juju deploy ceph-csi
+juju integrate ceph-csi k8s:ceph-k8s-info
+juju integrate ceph-csi ceph-mon:client
+```
+
+By default, this enables the ``ceph-xfs`` and ``ceph-ext4`` storage classes,
+which leverage Ceph RBD. CephFS support can optionally be enabled like so:
+
+```
+juju deploy ceph-fs
+juju integrate ceph-fs:ceph-mds ceph-mon:mds
+juju config ceph-csi cephfs-enable=True
+```
+
+## Validating the CSI integration
+
+Use the following to ensure that the storage classes are available and that the
+CSI pods are running:
+
+```
+juju ssh k8s/leader -- sudo k8s kubectl get sc,po --namespace default
+```
+
+Furthermore, we can create a Ceph-backed PVC and have a pod write to it.
+
+```
+juju ssh k8s/leader
+
+cat <<EOF > /tmp/pvc.yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: raw-block-pvc
+spec:
+  accessModes:
+    - ReadWriteOnce
+  volumeMode: Filesystem
+  resources:
+    requests:
+      storage: 64Mi
+  storageClassName: ceph-xfs
+EOF
+
+cat <<EOF > /tmp/writer.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pv-writer-test
+  namespace: default
+spec:
+  restartPolicy: Never
+  volumes:
+    - name: pvc-test
+      persistentVolumeClaim:
+        claimName: raw-block-pvc
+  containers:
+    - name: pv-writer
+      image: busybox
+      command: ["/bin/sh", "-c", "echo 'PVC test data.' > /pvc/test_file"]
+      volumeMounts:
+        - name: pvc-test
+          mountPath: /pvc
+EOF
+
+sudo k8s kubectl apply -f /tmp/pvc.yaml
+sudo k8s kubectl apply -f /tmp/writer.yaml
+
+sudo k8s kubectl wait pod/pv-writer-test \
+    --for=jsonpath='{.status.phase}'="Succeeded" \
+    --timeout 1m
+```
+
+[charm installation]: ./charm
+[Ceph]: https://docs.ceph.com/
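+
+Once the writer pod completes, you can optionally confirm that the claim was
+bound and then remove the test resources. This is a minimal follow-up sketch
+using standard ``kubectl`` commands from the same ``k8s/leader`` session:
+
+```
+sudo k8s kubectl get pvc raw-block-pvc --namespace default
+sudo k8s kubectl delete -f /tmp/writer.yaml
+sudo k8s kubectl delete -f /tmp/pvc.yaml
+```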
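+
+For the localhost/LXD case mentioned in the Deploying Ceph section, the
+``virt-type=virtual-machine`` value is appended to the existing
+``--constraints`` string at deploy time. As a sketch, the ceph-osd deployment
+command would then look like this (the other values mirror the earlier
+example and can be adjusted to your environment):
+
+```
+juju deploy -n 3 ceph-osd \
+    --constraints "cores=2 mem=4G root-disk=16G virt-type=virtual-machine" \
+    --storage osd-devices=1G,1 --storage osd-journals=1G,1
+```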