diff --git a/README.md b/README.md
index 0d14830f..efe96859 100644
--- a/README.md
+++ b/README.md
@@ -15,13 +15,13 @@
## 💡 Philosophy
-Deploying and operating a Ceph cluster is complex because Ceph is designed to be a general purpose storage solution. This is a significant overhead for smaller Ceph clusters. [MicroCeph](https://snapcraft.io/microceph) solves this by being _opinionated_ and _focused_ at small scale. With MicroCeph, deploying and operating a Ceph cluster is as easy as a [Snap!](https://snapcraft.io/microceph)
+Deploying and operating a Ceph cluster is complex because Ceph is designed to be a general-purpose storage solution. This is a significant overhead for small Ceph clusters. [MicroCeph](https://snapcraft.io/microceph) solves this by being _opinionated_ and _focused_ on the small scale. With MicroCeph, deploying and operating a Ceph cluster is as easy as a [Snap!](https://snapcraft.io/microceph)
## 🎯 Features
-1. Quick and Consistent deployment with minimal overhead.
-2. Single-command operations (for bootstrapping, adding OSDs, service enablement etc).
-3. Isolated from host and upgrade-friendly.
+1. Quick and consistent deployment with minimal overhead.
+2. Single-command operations (for bootstrapping, adding OSDs, service enablement, etc.).
+3. Isolated from the host and upgrade-friendly.
4. Built-in clustering so you don't have to worry about it!
5. Tailored for small scale (or just your Laptop).
@@ -56,9 +56,11 @@ $ sudo microceph.ceph status
pgs:
```
+![Dashboard](/assets/bootstrap.png)
+
> **_NOTE:_**
-You might've noticed that Ceph cluster is not _functional_ yet, We need OSDs!
-But before that, If you are only interested in deploying on a single node, It would be worthwhile to change the CRUSH rules.
+You might've noticed that the Ceph cluster is not _functional_ yet; we need OSDs!
+But before that, if you are only interested in deploying on a single node, it would be worthwhile to change the CRUSH rules: the default rule spreads replicas across hosts, which a single-node cluster can never satisfy. With the commands below, we re-create the default rule with a failure domain of osd (instead of the default host failure domain).
```bash
# Change Ceph failure domain to OSD
@@ -68,9 +70,47 @@ $ sudo microceph.ceph osd crush rule create-replicated single default osd
### ⚙️ Adding OSDs and RGW
```bash
# Adding OSD Disks
-$ sudo microceph disk add --wipe "/dev/vdb"
-$ sudo microceph disk add --wipe "/dev/vdc"
-$ sudo microceph disk add --wipe "/dev/vdd"
+$ sudo microceph disk list
+ Disks configured in MicroCeph:
+ +-----+----------+------+
+ | OSD | LOCATION | PATH |
+ +-----+----------+------+
+
+ Available unpartitioned disks on this system:
+ +-------+----------+--------+---------------------------------------------+
+ | MODEL | CAPACITY | TYPE | PATH |
+ +-------+----------+--------+---------------------------------------------+
+ | | 10.00GiB | virtio | /dev/disk/by-id/virtio-46c76c00-48fd-4f8d-9 |
+ +-------+----------+--------+---------------------------------------------+
+ | | 10.00GiB | virtio | /dev/disk/by-id/virtio-2171ea8f-e8a9-44c7-8 |
+ +-------+----------+--------+---------------------------------------------+
+ | | 10.00GiB | virtio | /dev/disk/by-id/virtio-cf9c6e20-306f-4296-b |
+ +-------+----------+--------+---------------------------------------------+
+
+$ sudo microceph disk add --wipe /dev/disk/by-id/virtio-46c76c00-48fd-4f8d-9
+$ sudo microceph disk add --wipe /dev/disk/by-id/virtio-2171ea8f-e8a9-44c7-8
+$ sudo microceph disk add --wipe /dev/disk/by-id/virtio-cf9c6e20-306f-4296-b
+$ sudo microceph disk list
+ Disks configured in MicroCeph:
+ +-----+---------------+---------------------------------------------+
+ | OSD | LOCATION | PATH |
+ +-----+---------------+---------------------------------------------+
+ | 0 | host | /dev/disk/by-id/virtio-46c76c00-48fd-4f8d-9 |
+ +-----+---------------+---------------------------------------------+
+ | 1 | host | /dev/disk/by-id/virtio-2171ea8f-e8a9-44c7-8 |
+ +-----+---------------+---------------------------------------------+
+ | 2 | host | /dev/disk/by-id/virtio-cf9c6e20-306f-4296-b |
+ +-----+---------------+---------------------------------------------+
+
+ Available unpartitioned disks on this system:
+ +-------+----------+--------+------------------+
+ | MODEL | CAPACITY | TYPE | PATH |
+ +-------+----------+--------+------------------+
+```
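+
+At this point it is worth a quick sanity check that the new OSDs have joined the cluster. A minimal sketch, using the same `microceph.ceph` alias as the rest of this guide:
+
+```bash
+# The three OSDs added above should appear in the CRUSH tree and report "up"
+$ sudo microceph.ceph osd tree
+```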
+
+![Dashboard](/assets/add_osd.png)
+
+```bash
# Adding RGW Service
$ sudo microceph enable rgw
# Perform IO and Check cluster status
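+# (Illustrative only: one way to generate IO is through the S3 API we just enabled.
+# Create an RGW user, note the access/secret keys it prints, and upload some objects
+# with any S3 client such as s3cmd; the exact radosgw-admin invocation may vary per host.)
+$ sudo radosgw-admin user create --uid=demo --display-name=demo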
@@ -87,15 +127,17 @@ $ sudo microceph.ceph status
data:
pools: 7 pools, 193 pgs
- objects: 239 objects, 590 KiB
- usage: 258 MiB used, 30 GiB / 30 GiB avail
+ objects: 341 objects, 504 MiB
+ usage: 1.6 GiB used, 28 GiB / 30 GiB avail
pgs: 193 active+clean
```
+![Dashboard](/assets/enable_rgw.png)
+
## 👍 How Can I Contribute ?
1. Excited about [MicroCeph](https://snapcraft.io/microceph) ? Join our [Stargazers](https://github.com/canonical/microceph/stargazers)
-2. Write Reviews or Tutorials to help spread the knowledge 📖
-3. Participate in [Pull Requests](https://github.com/canonical/microceph/pulls) and Help fix [Issues](https://github.com/canonical/microceph/issues)
+2. Write reviews or tutorials to help spread the knowledge 📖
+3. Participate in [Pull Requests](https://github.com/canonical/microceph/pulls) and help fix [Issues](https://github.com/canonical/microceph/issues)
You can also find us on Matrix @[Ubuntu Ceph](https://matrix.to/#/#ubuntu-ceph:matrix.org)
diff --git a/assets/add_osd.png b/assets/add_osd.png
new file mode 100644
index 00000000..ca9b88d1
Binary files /dev/null and b/assets/add_osd.png differ
diff --git a/assets/bootstrap.png b/assets/bootstrap.png
new file mode 100644
index 00000000..32c7dcab
Binary files /dev/null and b/assets/bootstrap.png differ
diff --git a/assets/enable_rgw.png b/assets/enable_rgw.png
new file mode 100644
index 00000000..6a3c18da
Binary files /dev/null and b/assets/enable_rgw.png differ