Fix some typos in "Mount MicroCeph backed Block Devices" doc #479

Merged · 3 commits · Dec 18, 2024
docs/how-to/mount-block-device.rst — 33 changes: 17 additions & 16 deletions
Mount MicroCeph backed Block Devices
====================================

Ceph RBDs (RADOS Block Devices) are virtual block devices backed by the Ceph storage cluster.
This tutorial will guide you through mounting block devices using MicroCeph.

The above will be achieved by creating an RBD image on the MicroCeph deployed
Ceph cluster, mapping it on a client machine, and writing data to it.
Check Ceph cluster's status:

.. code-block:: none

   cluster:
     id:     90457806-a798-47f2-aca1-a8a93739941a
     health: HEALTH_OK

   services:
     mon: 1 daemons, quorum workbook (age 36m)
     mgr: workbook(active, since 50m)
     osd: 3 osds: 3 up (since 17m), 3 in (since 47m)

   data:
     pools:   2 pools, 33 pgs
     objects: 21 objects, 13 MiB
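
The command producing this output is elided in the diff; presumably it is the
standard status query, run on the MicroCeph node:

.. code-block:: none

   $ sudo ceph status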

Create a pool for RBD images:

.. code-block:: none

   $ sudo ceph osd pool create block_pool
   pool 'block_pool' created

   $ sudo ceph osd lspools
   1 .mgr
   2 block_pool

   $ sudo rbd pool init block_pool

Create an RBD image:
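
The creation command itself sits in an elided hunk. A plausible invocation,
assuming the 8 GiB size implied by the 2097152 4k blocks that ``mkfs`` reports
below, would be:

.. code-block:: none

   $ sudo rbd create bd_foo --size 8G --pool block_pool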

For the sake of simplicity, we are using admin keys in this example. The first
lines below are the tail of a ``ceph.conf`` listing whose opening is elided in
the diff.

.. code-block:: none

   ms bind ipv4 = true
   ms bind ipv6 = false

   $ cat /var/snap/microceph/current/conf/ceph.keyring
   # Generated by MicroCeph, DO NOT EDIT.
   [client.admin]
   key = AQCNTXlmohDfDRAAe3epjquyZGrKATDhL8p3og==
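
Production setups would typically mint a capability-restricted key for RBD
access instead of shipping ``client.admin``; a sketch (the key name
``client.rbd`` is illustrative):

.. code-block:: none

   $ sudo ceph auth get-or-create client.rbd mon 'profile rbd' osd 'profile rbd pool=block_pool'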

Map the RBD image on client:

.. code-block:: none

   $ sudo rbd map \
       --image bd_foo \
       --name client.admin \
       -m 192.168.29.152 \
       -k /var/snap/microceph/current/conf/ceph.keyring \
       -c /var/snap/microceph/current/conf/ceph.conf \
       -p block_pool
   /dev/rbd0
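
To verify the mapping (a check not shown in the original), list the kernel's
mapped images; the output below is representative:

.. code-block:: none

   $ sudo rbd showmapped
   id  pool        namespace  image   snap  device
   0   block_pool             bd_foo  -     /dev/rbd0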

Create a filesystem on the mapped device:

.. code-block:: none

   $ sudo mkfs.ext4 -m0 /dev/rbd0
   mke2fs 1.46.5 (30-Dec-2021)
   Discarding device blocks: done
   Creating filesystem with 2097152 4k blocks and 524288 inodes
   Filesystem UUID: 1deeef7b-ceaf-4882-a07a-07a28b5b2590
   Superblock backups stored on blocks:
           32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

   Allocating group tables: done
   Writing inode tables: done
   Creating journal (16384 blocks): done
   Writing superblocks and filesystem accounting information: done

Ceph cluster state post IO:

.. code-block:: none

   cluster:
     id:     90457806-a798-47f2-aca1-a8a93739941a
     health: HEALTH_OK

   services:
     mon: 1 daemons, quorum workbook (age 37m)
     mgr: workbook(active, since 51m)
     osd: 3 osds: 3 up (since 17m), 3 in (since 48m)

   data:
     pools:   2 pools, 33 pgs
     objects: 24 objects, 23 MiB

Comparing the ceph status output before and after writing the file shows that
the MicroCeph cluster has grown by 30 MiB, which is thrice the size of the file
we wrote (10 MiB). This is because MicroCeph configures 3-way replication by
default.
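
The replication factor can be confirmed with a pool query (not part of the
original walkthrough):

.. code-block:: none

   $ sudo ceph osd pool get block_pool size
   size: 3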