Add ceph-csi how-to (Juju)
We're adding a guide that describes the steps to deploy Ceph and the
Ceph CSI plugin using Juju.

It mostly follows the steps outlined in the ceph-csi readme:
https://github.com/charmed-kubernetes/ceph-csi-operator/blob/main/README.md
petrutlucian94 committed Nov 20, 2024
1 parent 8d20f34 commit da04874
Showing 1 changed file with 113 additions and 0 deletions: docs/src/charm/howto/ceph-csi.md
# ceph-csi

[Ceph] can be used to hold Kubernetes persistent volumes and is the recommended
storage solution for Canonical Kubernetes.

The ``ceph-csi`` plugin automatically provisions and attaches the Ceph volumes
to Kubernetes workloads.

Follow this guide to learn how Canonical Kubernetes can be integrated with
Ceph through Juju.

## Prerequisites

This guide assumes that you already have a Canonical Kubernetes cluster. See
the [charm installation] guide for more details.
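
Before proceeding, it's worth confirming that Juju can reach the cluster and
that it is healthy. A minimal check, assuming the Kubernetes charm is deployed
as an application named ``k8s``:

```
juju status k8s
juju ssh k8s/leader -- sudo k8s status
```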

## Deploying Ceph

We'll deploy a Ceph cluster containing one monitor and three storage units
(OSDs). For the purposes of this demonstration, we'll allocate a limited
amount of resources.

```
juju deploy -n 1 ceph-mon \
--constraints "cores=2 mem=4G root-disk=16G" \
--config monitor-count=1
juju deploy -n 3 ceph-osd \
--constraints "cores=2 mem=4G root-disk=16G" \
--storage osd-devices=1G,1 --storage osd-journals=1G,1
juju integrate ceph-osd:mon ceph-mon:osd
```

If Juju is configured to use the localhost/LXD cloud, add the
``virt-type=virtual-machine`` constraint to the ``ceph-osd`` and ``k8s``
units.
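
The units can take a few minutes to settle. Run ``juju status`` until all
units report ``active/idle``. On Juju 3.x, ``juju wait-for`` can block until
the applications are ready:

```
juju status ceph-mon ceph-osd
# Optionally, block until the applications report an active status (Juju 3.x):
juju wait-for application ceph-mon --query='status=="active"'
juju wait-for application ceph-osd --query='status=="active"'
```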

Once the units are ready, deploy ``ceph-csi`` like so:

```
juju deploy ceph-csi
juju integrate ceph-csi k8s:ceph-k8s-info
juju integrate ceph-csi ceph-mon:client
```
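
Before moving on, you can confirm that both relations were established by
including them in the status output:

```
juju status --relations ceph-csi
```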

By default, this enables the ``ceph-xfs`` and ``ceph-ext4`` storage classes,
which leverage Ceph RBD. CephFS support can optionally be enabled like so:

```
juju deploy ceph-fs
juju integrate ceph-fs:ceph-mds ceph-mon:mds
juju config ceph-csi cephfs-enable=True
```
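
If CephFS support was enabled, a CephFS-backed storage class should appear
alongside the RBD-based ones. The exact class name (e.g. ``cephfs``) depends
on the charm's defaults, so list all of the storage classes to check:

```
juju ssh k8s/leader -- sudo k8s kubectl get sc
```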

## Validating the CSI integration

Use the following to ensure that the storage classes are available and that
the CSI pods are running:

```
juju ssh k8s/leader -- sudo k8s kubectl get sc,po --namespace default
```

Furthermore, we can create a Ceph PVC and have a pod write to it.

```
juju ssh k8s/leader
cat <<EOF > /tmp/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 64Mi
  storageClassName: ceph-xfs
EOF

cat <<EOF > /tmp/writer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-writer-test
  namespace: default
spec:
  restartPolicy: Never
  volumes:
    - name: pvc-test
      persistentVolumeClaim:
        claimName: raw-block-pvc
  containers:
    - name: pv-writer
      image: busybox
      command: ["/bin/sh", "-c", "echo 'PVC test data.' > /pvc/test_file"]
      volumeMounts:
        - name: pvc-test
          mountPath: /pvc
EOF

sudo k8s kubectl apply -f /tmp/pvc.yaml
sudo k8s kubectl apply -f /tmp/writer.yaml
sudo k8s kubectl wait pod/pv-writer-test \
  --for=jsonpath='{.status.phase}'="Succeeded" \
  --timeout 1m
```
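
Once the pod reports ``Succeeded``, the test resources can be removed. A
cleanup sketch, run from the same ``juju ssh`` session and reusing the
manifests written above:

```
sudo k8s kubectl delete -f /tmp/writer.yaml
sudo k8s kubectl delete -f /tmp/pvc.yaml
```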

<!-- LINKS -->

[charm installation]: ./charm
[Ceph]: https://docs.ceph.com/
