We're adding a guide that describes the steps to deploy Ceph and the Ceph CSI plugin using Juju. It mostly follows the steps outlined in the ceph-csi README: https://github.com/charmed-kubernetes/ceph-csi-operator/blob/main/README.md
1 parent 8d20f34 · commit da04874 · 1 changed file with 113 additions and 0 deletions
# ceph-csi

[Ceph] can be used to hold Kubernetes persistent volumes and is the recommended
storage solution for Canonical Kubernetes.

The ``ceph-csi`` plugin automatically provisions and attaches Ceph volumes
to Kubernetes workloads.

Follow this guide to find out how Canonical Kubernetes can be integrated with
Ceph through Juju.

## Prerequisites

This guide assumes that you have an existing Canonical Kubernetes cluster.
See the [charm installation] guide for more details.
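
As a quick sanity check before proceeding, you can verify that the cluster is
up (assuming the application is named ``k8s``, as in the rest of this guide):

```
juju status k8s
juju ssh k8s/leader -- sudo k8s status
```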

## Deploying Ceph

We'll deploy a Ceph cluster containing one monitor and three storage units
(OSDs). For the purpose of this demo, we'll allocate a limited amount of
resources.

```
juju deploy -n 1 ceph-mon \
    --constraints "cores=2 mem=4G root-disk=16G" \
    --config monitor-count=1
juju deploy -n 3 ceph-osd \
    --constraints "cores=2 mem=4G root-disk=16G" \
    --storage osd-devices=1G,1 --storage osd-journals=1G,1
juju integrate ceph-osd:mon ceph-mon:osd
```
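
Before moving on, it's worth waiting for the units to settle and checking the
cluster's health. A minimal check, assuming the ``ceph`` CLI and admin keyring
are available on the monitor unit (as is typical for ceph-mon deployments):

```
juju status ceph-mon ceph-osd
juju ssh ceph-mon/leader -- sudo ceph -s
```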

If Juju is configured to use the localhost/LXD cloud, please add the following
constraint to the osd and k8s units: ``virt-type=virtual-machine``.
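
For example, on a localhost/LXD cloud the ceph-osd deployment above could be
adjusted like this (a sketch based on the constraints used earlier):

```
juju deploy -n 3 ceph-osd \
    --constraints "cores=2 mem=4G root-disk=16G virt-type=virtual-machine" \
    --storage osd-devices=1G,1 --storage osd-journals=1G,1
```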

Once the units are ready, deploy ``ceph-csi`` like so:

```
juju deploy ceph-csi
juju integrate ceph-csi k8s:ceph-k8s-info
juju integrate ceph-csi ceph-mon:client
```
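
To follow the deployment as the relations are established, you can watch the
model status until ``ceph-csi`` reports active (the watch interval is optional):

```
juju status --watch 2s ceph-csi
```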

By default, this enables the ``ceph-xfs`` and ``ceph-ext4`` storage classes,
which leverage Ceph RBD. CephFS support can optionally be enabled like so:

```
juju deploy ceph-fs
juju integrate ceph-fs:ceph-mds ceph-mon:mds
juju config ceph-csi cephfs-enable=True
```
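
After enabling CephFS support, you can list the storage classes to confirm
that a CephFS-backed class has been created alongside the RBD ones (the exact
class name depends on the charm configuration):

```
juju ssh k8s/leader -- sudo k8s kubectl get storageclass
```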

## Validating the CSI integration

Use the following to ensure that the storage classes are available and that
the CSI pods are running:

```
juju ssh k8s/leader -- sudo k8s kubectl get sc,po --namespace default
```
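
To dig into a specific class, for example the default ``ceph-xfs`` one, you
can inspect its provisioner and parameters:

```
juju ssh k8s/leader -- sudo k8s kubectl describe sc ceph-xfs
```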

Furthermore, we can create a Ceph PVC and have a pod write to it:

```
juju ssh k8s/leader

cat <<EOF > /tmp/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 64Mi
  storageClassName: ceph-xfs
EOF

cat <<EOF > /tmp/writer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-writer-test
  namespace: default
spec:
  restartPolicy: Never
  volumes:
    - name: pvc-test
      persistentVolumeClaim:
        claimName: raw-block-pvc
  containers:
    - name: pv-writer
      image: busybox
      command: ["/bin/sh", "-c", "echo 'PVC test data.' > /pvc/test_file"]
      volumeMounts:
        - name: pvc-test
          mountPath: /pvc
EOF

sudo k8s kubectl apply -f /tmp/pvc.yaml
sudo k8s kubectl apply -f /tmp/writer.yaml
sudo k8s kubectl wait pod/pv-writer-test \
    --for=jsonpath='{.status.phase}'="Succeeded" \
    --timeout 1m
```
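
Once the writer pod reports ``Succeeded``, the test resources can be removed
(names as defined in the manifests above):

```
sudo k8s kubectl delete pod pv-writer-test
sudo k8s kubectl delete pvc raw-block-pvc
```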

<!-- LINKS -->

[charm installation]: ./charm
[Ceph]: https://docs.ceph.com/