Commit

Added HowTo for mounting a CephFS share.
Signed-off-by: Utkarsh Bhatt <[email protected]>
UtkarshBhatthere committed Jun 28, 2024
1 parent 602556a commit fa0c794
Showing 2 changed files with 143 additions and 0 deletions.
6 changes: 6 additions & 0 deletions docs/.custom_wordlist.txt
@@ -14,6 +14,8 @@ microceph
OSDs
MSD
Ceph
CephFs
CephX
Alertmanager
MDS
hostname
@@ -22,6 +24,7 @@ loopback
lsblk
hostnames
OSD
keyring
keyrings
FDE
snapd
@@ -31,6 +34,7 @@ LUKS
cryptsetup
dm
modinfo
newFs
subcommands
backend
backfilling
@@ -63,6 +67,8 @@ noout
Noout
Unsetting
cephfs
fs
filesystem
filesystems
sda
ESM
137 changes: 137 additions & 0 deletions docs/tutorial/mount-cephfs-share.rst
@@ -0,0 +1,137 @@
====================================
Mount MicroCeph-backed CephFs shares
====================================

This tutorial will guide you through mounting a CephFs share backed by a
MicroCeph cluster.

We will do this by creating a filesystem on the MicroCeph-deployed Ceph
cluster and then mounting it using the kernel driver.

MicroCeph Operations:
---------------------

Check the Ceph cluster's status:

.. code-block:: none

   $ sudo ceph -s
     cluster:
       id: 90457806-a798-47f2-aca1-a8a93739941a
       health: HEALTH_OK

     services:
       mon: 1 daemons, quorum workbook (age 6h)
       mgr: workbook(active, since 6h)
       osd: 3 osds: 3 up (since 6h), 3 in (since 23h)

     data:
       pools: 4 pools, 97 pgs
       objects: 46 objects, 23 MiB
       usage: 137 MiB used, 12 GiB / 12 GiB avail
       pgs: 97 active+clean

Create data/metadata pools for CephFs:

.. code-block:: none

   $ sudo ceph osd pool create cephfs_meta
   $ sudo ceph osd pool create cephfs_data

Create the CephFs share:

.. code-block:: none

   $ sudo ceph fs new newFs cephfs_meta cephfs_data
   new fs with metadata pool 4 and data pool 3
   $ sudo ceph fs ls
   name: newFs, metadata pool: cephfs_meta, data pools: [cephfs_data ]

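If you want to verify that an MDS has picked up the new filesystem before
moving on to the client, ``ceph fs status`` summarises the MDS and pool state
(output omitted here):

.. code-block:: none

   $ sudo ceph fs status newFs
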
Client Operations:
------------------

Install the ``ceph-common`` package:

.. code-block:: none

   $ sudo apt install ceph-common

This step provides the ``mount.ceph`` helper, i.e. it makes ``mount`` aware of the ceph filesystem type.
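
If you want a quick, optional check that the mount helper and the ``ceph``
kernel module are present on the client, something like the following should
do (the exact paths printed will vary):

.. code-block:: none

   $ dpkg -L ceph-common | grep mount.ceph
   $ modinfo ceph | grep description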

Fetch the ``ceph.conf`` and ``ceph.keyring`` files:

A keyring file for any CephX user that has access to the CephFs share will work.
For simplicity, we use the admin keyring in this example.
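
If you would rather not copy the admin keyring to a client, a dedicated CephX
user can be authorised for just this filesystem instead. A sketch (the client
name ``fsclient`` is an arbitrary choice for this example):

.. code-block:: none

   # Run on a MicroCeph node, then copy the printed keyring to the client
   # as /etc/ceph/ceph.client.fsclient.keyring
   $ sudo ceph fs authorize newFs client.fsclient / rw

You would then mount with ``name=fsclient`` instead of ``name=admin`` below.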

.. code-block:: none

   $ pwd
   /var/snap/microceph/current/conf
   $ ls
   ceph.client.admin.keyring ceph.conf ceph.keyring metadata.yaml

These files are located at the path shown above on any MicroCeph node.
The kernel driver looks in ``/etc/ceph`` by default, so we will create symbolic
links to that directory.

.. code-block:: none

   $ sudo ln -s /var/snap/microceph/current/conf/ceph.keyring /etc/ceph/ceph.keyring
   $ sudo ln -s /var/snap/microceph/current/conf/ceph.conf /etc/ceph/ceph.conf
   $ ll /etc/ceph/
   ...
   lrwxrwxrwx 1 root root 42 Jun 25 16:28 ceph.conf -> /var/snap/microceph/current/conf/ceph.conf
   lrwxrwxrwx 1 root root 45 Jun 25 16:28 ceph.keyring -> /var/snap/microceph/current/conf/ceph.keyring

Mount the filesystem:

.. code-block:: none

   $ sudo mkdir /mnt/mycephfs
   $ sudo mount -t ceph :/ /mnt/mycephfs/ -o name=admin,fs=newFs

Here, we provide the CephX user (``admin`` in our example) and the filesystem created earlier (``newFs``).

You now have a CephFs share mounted at ``/mnt/mycephfs`` on your client
machine that you can perform IO against.
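
If you want the mount to persist across reboots, an ``/etc/fstab`` entry along
these lines should work (a sketch using the same options as above; ``_netdev``
defers mounting until the network is up):

.. code-block:: none

   :/    /mnt/mycephfs    ceph    name=admin,fs=newFs,_netdev    0    0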

Perform IO and observe the Ceph cluster:
----------------------------------------

Write a file:

.. code-block:: none

   $ cd /mnt/mycephfs
   $ sudo dd if=/dev/zero of=random.img count=1 bs=50M
   52428800 bytes (52 MB, 50 MiB) copied, 0.0491968 s, 1.1 GB/s
   $ ll
   ...
   -rw-r--r-- 1 root root 52428800 Jun 25 16:04 random.img

Ceph cluster state post IO:

.. code-block:: none

   $ sudo ceph -s
     cluster:
       id: 90457806-a798-47f2-aca1-a8a93739941a
       health: HEALTH_OK

     services:
       mon: 1 daemons, quorum workbook (age 8h)
       mgr: workbook(active, since 8h)
       mds: 1/1 daemons up
       osd: 3 osds: 3 up (since 8h), 3 in (since 25h)

     data:
       volumes: 1/1 healthy
       pools: 4 pools, 97 pgs
       objects: 59 objects, 73 MiB
       usage: 287 MiB used, 12 GiB / 12 GiB avail
       pgs: 97 active+clean

The cluster usage grew by 150 MiB, three times the size of the 50 MiB file
written to the mounted share. This is because MicroCeph configures three-way
replication by default.
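
You can confirm the replication factor by querying the ``size`` attribute of
the data pool; on a default MicroCeph deployment this should report ``size: 3``:

.. code-block:: none

   $ sudo ceph osd pool get cephfs_data size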
