From 090f68b28f4955ff296a0ec54904cb13a5438d0e Mon Sep 17 00:00:00 2001 From: immortalcodes <21112002mj@gmail.com> Date: Fri, 22 Nov 2024 00:17:59 +0530 Subject: [PATCH 1/4] rebrand /ceph/what-is-ceph --- templates/ceph/what-is-ceph.html | 427 +++++++++++++++---------------- 1 file changed, 200 insertions(+), 227 deletions(-) diff --git a/templates/ceph/what-is-ceph.html b/templates/ceph/what-is-ceph.html index 97a65debc59..3c0b89e13ce 100644 --- a/templates/ceph/what-is-ceph.html +++ b/templates/ceph/what-is-ceph.html @@ -1,282 +1,255 @@ {% extends "ceph/base_ceph.html" %} +{% from "_macros/vf_hero.jinja" import vf_hero %} + {% block title %}What is Ceph?{% endblock %} -{% block meta_description %}Ceph is a software defined storage solution designed to address the object, block, and file storage needs of data centres adopting open source as the new norm for high-growth block storage, object stores and data lakes.{% endblock %} +{% block meta_description %} + Ceph is a software defined storage solution designed to address the object, block, and file storage needs of data centres adopting open source as the new norm for high-growth block storage, object stores and data lakes. +{% endblock %} -{% block meta_copydoc %}https://docs.google.com/document/d/1gRuOAg6ZFBp-ikMq-DQBgQ4-BrXGeWOOe2cBxD3ohjE/{% endblock meta_copydoc %} +{% block meta_copydoc %} + https://docs.google.com/document/d/1gRuOAg6ZFBp-ikMq-DQBgQ4-BrXGeWOOe2cBxD3ohjE/ +{% endblock meta_copydoc %} {% block content %} -
-
-
-
-

- What is Ceph? -

-

- Ceph is an open source software-defined storage solution designed to address the block, file and object storage needs of modern enterprises. Its highly scalable architecture sees it being adopted as the new norm for high-growth block storage, object stores, and data lakes. Ceph provides reliable and scalable storage while keeping CAPEX and OPEX costs in line with underlying commodity hardware prices. -

-

Get in touch

-

Watch the webinar - Ceph for Enterprise

-
-
- {{ - image( - url="https://assets.ubuntu.com/v1/e0461e14-ceph-dark.svg", - alt="", - height="102", - width="250", - hi_def=True, - loading="auto" - ) | safe + {% call(slot) vf_hero( + title_text='What is Ceph?', + layout='50/50', + is_split_on_medium=true + ) -%} + {%- if slot == 'description' -%} +

Ceph is an open source software-defined storage solution designed to address the block, file and object storage needs of modern enterprises. Its highly scalable architecture has seen it adopted as the new norm for high-growth block storage, object stores, and data lakes. Ceph provides reliable and scalable storage while keeping CAPEX and OPEX costs in line with underlying commodity hardware prices.

+ {%- endif -%} + {%- if slot == 'cta' -%} + Get in touch + Watch the webinar - Ceph for Enterprise + {%- endif -%} + {%- if slot == 'image' -%} +
{{ image(url="https://assets.ubuntu.com/v1/5ee6d53a-placeholder-hero.png",
                alt="",
                width="1800",
                height="1128",
                hi_def=True,
                loading="auto") | safe
        }}
-
-
-
+ {%- endif -%} + {% endcall -%} -
-
-
-
- {{ - image ( - url="https://assets.ubuntu.com/v1/9866ad8e-Ceph_diagrams-01.svg", - alt="", - height="393", - width="661", - hi_def=True, - loading="lazy" - ) | safe - }} +
+
+
+
+

Production-worthy Ceph storage

-
-
-
+
+
+
+ {{ image(url="https://assets.ubuntu.com/v1/3522db7e-ceph-chart-1.png", + alt="", + width="1800", + height="1014", + hi_def=True, + loading="lazy", + attrs={"class": "p-image-container__image"}) | safe + }} +
+

Ceph makes it possible to decouple data from physical storage hardware using software abstraction layers, which provides unparalleled scaling and fault management capabilities. This makes Ceph ideal for cloud, OpenStack, Kubernetes, and other microservice and container-based workloads, as it can effectively address large data volume storage needs.

The main advantage of Ceph is that it provides interfaces for multiple storage types within a single cluster, eliminating the need for multiple storage solutions or any specialised hardware, thus reducing management overheads. -

-

Use cases for Ceph range from private cloud infrastructure (both hyper-converged and disaggregated) to big data analytics and rich media, or as an alternative to public cloud storage.

-
-
+
-
-
-

What is a Ceph cluster?

-
-
-
+
+
+
+

What is a Ceph cluster?

+
-
- {{ - image ( - url="https://assets.ubuntu.com/v1/b05b5a3c-Ceph_diagrams-02.svg", - alt="", - height="318", - width="661", - hi_def=True, - loading="lazy" - ) | safe - }} +
+
+
+ {{ image(url="https://assets.ubuntu.com/v1/581ff61d-ceph-chart-2.png", + alt="", + width="2748", + height="1145", + hi_def=True, + loading="lazy", + attrs={"class": "p-image-container__image"}) | safe + }} +
+
+
+
+

A Ceph storage cluster consists of the following types of daemons:

+
+
+
+
    +
  • + Cluster monitors (ceph-mon) that maintain the map of the cluster state, keeping track of active and failed cluster nodes, cluster configuration, and information about data placement and manage authentication. +
  • +
  • + Managers (ceph-mgr) that maintain cluster runtime metrics, enable dashboarding capabilities, and provide an interface to external monitoring systems. +
  • +
  • + Object storage devices (ceph-osd) that store data in the Ceph cluster and handle data replication, erasure coding, recovery, and rebalancing. Conceptually, an OSD can be thought of as a slice of CPU/RAM and the underlying SSD or HDD. +
  • +
  • + Rados Gateways (ceph-rgw) that provide object storage APIs (swift and S3) via http/https. +
  • +
  • + Metadata servers (ceph-mds) that store metadata for the Ceph File System, mapping filenames and directories of the file system to RADOS objects and enabling the use of POSIX semantics to access the files. +
  • +
  • + iSCSI Gateways (ceph-iscsi) that provide iSCSI targets for traditional block storage workloads such as VMware or Windows Server. +
  • +
+
+

+ Ceph stores data as objects within logical storage pools. A Ceph cluster can have multiple pools, each tuned to different performance or capacity use cases. In order to efficiently scale and handle rebalancing and recovery, Ceph shards the pools into placement groups (PGs). The CRUSH algorithm defines the placement group for storing an object and thereafter calculates which Ceph OSDs should store the placement group. +
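The object-to-PG-to-OSD mapping described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the real CRUSH algorithm: the function names `pg_for_object` and `osds_for_pg` are invented for this sketch, and real CRUSH additionally weights OSDs and respects failure-domain hierarchies.

```python
import hashlib

def pg_for_object(obj_name: str, pg_num: int) -> int:
    # Hash the object name into the pool's PG range: every object
    # belongs to exactly one placement group.
    h = int.from_bytes(hashlib.md5(obj_name.encode()).digest()[:4], "little")
    return h % pg_num

def osds_for_pg(pg_id: int, osds: list, replicas: int = 3) -> list:
    # Stand-in for CRUSH: rank OSDs by a deterministic per-PG hash, so
    # every client computes the same acting set without a lookup table.
    ranked = sorted(osds, key=lambda osd: hashlib.md5(f"{pg_id}:{osd}".encode()).digest())
    return ranked[:replicas]

pg = pg_for_object("my-object", pg_num=128)
acting_set = osds_for_pg(pg, osds=list(range(12)))
print(f"object 'my-object' -> pg {pg} -> osds {acting_set}")
```

Because placement is computed rather than looked up, any client or OSD can independently derive where an object lives, which is what lets Ceph scale without a central metadata bottleneck.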

+
+
+
-
-
-

- A Ceph storage cluster consists of the following types of daemons: -

-
    -
  • - Cluster monitors (ceph-mon) that maintain the map of the cluster state, keeping track of active and failed cluster nodes, cluster configuration, and information about data placement and manage authentication. -
  • -
  • - Managers (ceph-mgr) that maintain cluster runtime metrics, enable dashboarding capabilities, and provide an interface to external monitoring systems. -
  • -
  • - Object storage devices (ceph-osd) that store data in the Ceph cluster and handle data replication, erasure coding, recovery, and rebalancing. Conceptually, an OSD can be thought of as a slice of CPU/RAM and the underlying SSD or HDD. -
  • -
  • - Rados Gateways (ceph-rgw) that provide object storage APIs (swift and S3) via http/https. -
  • -
  • - Metadata servers (ceph-mds) that store metadata for the Ceph File System, mapping filenames and directories of the file system to RADOS objects and enabling the use of POSIX semantics to access the files. -
  • -
  • - iSCSI Gateways (ceph-iscsi) that provide iSCSI targets for traditional block storage workloads such as VMware or Windows Server. -
  • +
+ +
+
+
+
+

Ceph features

+
+
+
    +
  • Thin provisioning of block storage for disk usage optimisation
  • +
  • Partial or complete read and writes and atomic transactions
  • +
  • Replication and erasure coding for data protection
  • +
  • Snapshot history, cloning and layering support
  • +
  • POSIX file system semantics support
  • +
  • Object level key-value mappings
  • +
  • Swift and AWS S3 Object API Compatibility
-

- Ceph stores data as objects within logical storage pools. A Ceph cluster can have multiple pools, each tuned to different performance or capacity use cases. In order to efficiently scale and handle rebalancing and recovery, Ceph shards the pools into placement groups (PGs). The CRUSH algorithm defines the placement group for storing an object and thereafter calculates which Ceph OSDs should store the placement group. -

-
-
+ -
-
-
-

- Ceph features -

-
-
-
- -
-
+
+ {% include "ceph/shared/_ceph-users.html" %} +
+
+
+
+
+
+
+

Community and governance

+
+
+

+ Ceph was initially created by Sage Weil as part of his doctoral dissertation at the University of California, Santa Cruz and evolved from a file system prototype to a fully functional open source storage platform. +

+

+ Ubuntu was an early supporter of Ceph and its community. That support continues today as Canonical maintains premier member status and serves on the governing board of the Ceph Foundation. +

+

Multiple companies contribute to Ceph, with many more playing a part in the broader community.

+
+
+
-
- {% include "ceph/shared/_ceph-users.html" %} -
+
-
-
-
-

- Community and governance -

-

- Ceph was initially created by Sage Weil as part of his doctoral dissertation at the University of California, Santa Cruz and evolved from a file system prototype to a fully functional open source storage platform. -

-

- Ubuntu was an early supporter of Ceph and its community. That support continues today as Canonical maintains premier member status and serves on the governing board of the Ceph Foundation. -

-

- Multiple companies contribute to Ceph, with many more playing a part in the broader community. -

-
-
-

Influential contributors to Ceph

-
+

Influential contributors to Ceph

+
-
- {{ - image( - url="https://assets.ubuntu.com/v1/563c0d9b-_Canonical.svg", - alt="Canonical", - height="400", - width="1400", - hi_def=True, - attrs={"class": "p-logo-section__logo"}, - loading="lazy", - ) | safe +
+ {{ image(url="https://assets.ubuntu.com/v1/1c72c15a-canonical-logo.png", + alt="Canonical", + width="348", + height="313", + hi_def=True, + attrs={"class": "p-logo-section__logo"}, + loading="lazy",) | safe }}
- {{ - image( - url="https://assets.ubuntu.com/v1/202457a2-CERN_logo2.svg", - alt="CERN", - height="138", - width="140", - hi_def=True, - attrs={"class": "p-logo-section__logo"}, - loading="lazy", - ) | safe + {{ image(url="https://assets.ubuntu.com/v1/57c722c5-cern-logo.png", + alt="CERN", + width="239", + height="313", + hi_def=True, + loading="lazy", + attrs={"class": "p-logo-section__logo"}) | safe }}
- {{ - image( - url="https://assets.ubuntu.com/v1/80ab9b3c-2018-logo-cisco.svg", - alt="Cisco", - height="145", - width="145", - hi_def=True, - attrs={"class": "p-logo-section__logo"}, - loading="lazy", - ) | safe + {{ image(url="https://assets.ubuntu.com/v1/c3382d32-cisco-logo.png", + alt="Cisco", + width="189", + height="313", + hi_def=True, + attrs={"class": "p-logo-section__logo"}, + loading="lazy",) | safe }}
- {{ - image( - url="https://assets.ubuntu.com/v1/8d9986ed-2018-logo-Fujitsu.svg", - alt="Fujutsu", - height="145", - width="145", - hi_def=True, - attrs={"class": "p-logo-section__logo"}, - loading="lazy", - ) | safe + {{ image(url="https://assets.ubuntu.com/v1/14ee306c-fujitsu-logo.png", + alt="Fujitsu", + width="220", + height="313", + hi_def=True, + attrs={"class": "p-logo-section__logo"}, + loading="lazy",) | safe }}
- {{ - image( - url="https://assets.ubuntu.com/v1/94ea495b-2018-logo-Intel.svg", - alt="Intel", - height="145", - width="145", - hi_def=True, - attrs={"class": "p-logo-section__logo"}, - loading="lazy", - ) | safe + {{ image(url="https://assets.ubuntu.com/v1/2141954b-intel-new-logo.png", + alt="Intel", + width="121", + height="313", + hi_def=True, + attrs={"class": "p-logo-section__logo"}, + loading="lazy",) | safe }}
-
        {{
          image(
            url="https://assets.ubuntu.com/v1/be89e41a-red-hat-2019-primary-stacked.svg",
            alt="Red Hat",
            height="128",
            width="120",
            hi_def=True,
            attrs={"class": "p-logo-section__logo"},
            loading="lazy",
          ) | safe
+        {{ image(url="https://assets.ubuntu.com/v1/5f1090ca-redhat-logo.png",
                alt="Red Hat",
                width="382",
                height="313",
                hi_def=True,
                attrs={"class": "p-logo-section__logo"},
                loading="lazy",) | safe
        }}
-
        {{
          image(
            url="https://assets.ubuntu.com/v1/9a1f50f7-partner-logo-sandisk.svg",
            alt="SanDisk",
            height="30",
            width="144",
            hi_def=True,
            attrs={"class": "p-logo-section__logo"},
            loading="lazy",
          ) | safe
+        {{ image(url="https://assets.ubuntu.com/v1/fac75dd0-sanndisk-logo.png",
                alt="SanDisk",
                width="247",
                height="313",
                hi_def=True,
                attrs={"class": "p-logo-section__logo"},
                loading="lazy",) | safe
        }}
-
-
-
- {% include "shared/_resources_ceph.html"%} -
+
+ +
+ {% include "shared/_resources_ceph.html" %} +
{% endblock content %} From 67912eb6786823307515f1c72057f6230e102874 Mon Sep 17 00:00:00 2001 From: immortalcodes <21112002mj@gmail.com> Date: Fri, 22 Nov 2024 00:24:33 +0530 Subject: [PATCH 2/4] added shared resources --- templates/ceph/managed.html | 118 +------------------------------ templates/ceph/what-is-ceph.html | 3 +- 2 files changed, 4 insertions(+), 117 deletions(-) diff --git a/templates/ceph/managed.html b/templates/ceph/managed.html index 85ffa601431..661600a2693 100644 --- a/templates/ceph/managed.html +++ b/templates/ceph/managed.html @@ -147,125 +147,11 @@

5. Transfer

-
-
-
-

Companies using Ceph

-
-
-

- There are multiple users of Ceph across a broad range of industries, from academia to telecommunications and cloud service providers. Ceph is particularly favored for its flexibility, scalability, and robustness. -

-
- -
- -

Notable Ceph Users

-
-
-
- {{ image(url="https://assets.ubuntu.com/v1/57c722c5-cern-logo.png", - alt="CERN", - width="239", - height="313", - hi_def=True, - loading="lazy", - attrs={"class": "p-logo-section__logo"}) | safe - }} -
-
- {{ image(url="https://assets.ubuntu.com/v1/60fd1f45-deutsche-telekom.png", - alt="Deutsche Telekom", - width="313", - height="313", - hi_def=True, - loading="lazy", - attrs={"class": "p-logo-section__logo"}) | safe - }} -
-
- {{ image(url="https://assets.ubuntu.com/v1/1e543d4d-Bloomberg-Logo.png", - alt="Bloomberg", - width="313", - height="313", - hi_def=True, - loading="lazy", - attrs={"class": "p-logo-section__logo"}) | safe - }} -
-
- {{ image(url="https://assets.ubuntu.com/v1/c3382d32-cisco-logo.png", - alt="Cisco", - width="189", - height="313", - hi_def=True, - loading="lazy", - attrs={"class": "p-logo-section__logo"}) | safe - }} -
-
- {{ image(url="https://assets.ubuntu.com/v1/6ec58036-dreamhost-logo.png", - alt="Dreamhost", - width="433", - height="313", - hi_def=True, - loading="lazy", - attrs={"class": "p-logo-section__logo"}) | safe - }} -
-
- {{ image(url="https://assets.ubuntu.com/v1/39eef9bb-digitalocean-logo.png", - alt="DigitalOcean", - width="463", - height="313", - hi_def=True, - loading="lazy", - attrs={"class": "p-logo-section__logo"}) | safe - }} -
-
-
-
+ {% include "ceph/shared/_ceph-users.html" %}
-
-
-
-

Learn more about Ceph

-
- -
+ {% include "shared/_resources_ceph.html" %}
{% endblock content %} diff --git a/templates/ceph/what-is-ceph.html b/templates/ceph/what-is-ceph.html index 3c0b89e13ce..ba96f1b91c2 100644 --- a/templates/ceph/what-is-ceph.html +++ b/templates/ceph/what-is-ceph.html @@ -25,7 +25,7 @@ {%- endif -%} {%- if slot == 'cta' -%} Get in touch - Watch the webinar - Ceph for Enterprise + Watch the webinar - Ceph for Enterprise › {%- endif -%} {%- if slot == 'image' -%}
@@ -149,6 +149,7 @@

Ceph features

{% include "ceph/shared/_ceph-users.html" %}
+

From 57a799ee701464104cc50989cb923de75c932b94 Mon Sep 17 00:00:00 2001 From: immortalcodes <21112002mj@gmail.com> Date: Fri, 22 Nov 2024 10:21:12 +0530 Subject: [PATCH 3/4] minor code changes --- templates/ceph/what-is-ceph.html | 54 +++++++++++++++----------------- 1 file changed, 26 insertions(+), 28 deletions(-) diff --git a/templates/ceph/what-is-ceph.html b/templates/ceph/what-is-ceph.html index ba96f1b91c2..d6a2fc3d9d8 100644 --- a/templates/ceph/what-is-ceph.html +++ b/templates/ceph/what-is-ceph.html @@ -47,7 +47,7 @@

Production-worthy Ceph storage

-
+
{{ image(url="https://assets.ubuntu.com/v1/3522db7e-ceph-chart-1.png", alt="", @@ -77,7 +77,7 @@

What is a Ceph cluster?

-
+
{{ image(url="https://assets.ubuntu.com/v1/581ff61d-ceph-chart-2.png", alt="", @@ -94,32 +94,30 @@

What is a Ceph cluster?

A Ceph storage cluster consists of the following types of daemons:

-
-
    -
  • - Cluster monitors (ceph-mon) that maintain the map of the cluster state, keeping track of active and failed cluster nodes, cluster configuration, and information about data placement and manage authentication. -
  • -
  • - Managers (ceph-mgr) that maintain cluster runtime metrics, enable dashboarding capabilities, and provide an interface to external monitoring systems. -
  • -
  • - Object storage devices (ceph-osd) that store data in the Ceph cluster and handle data replication, erasure coding, recovery, and rebalancing. Conceptually, an OSD can be thought of as a slice of CPU/RAM and the underlying SSD or HDD. -
  • -
  • - Rados Gateways (ceph-rgw) that provide object storage APIs (swift and S3) via http/https. -
  • -
  • - Metadata servers (ceph-mds) that store metadata for the Ceph File System, mapping filenames and directories of the file system to RADOS objects and enabling the use of POSIX semantics to access the files. -
  • -
  • - iSCSI Gateways (ceph-iscsi) that provide iSCSI targets for traditional block storage workloads such as VMware or Windows Server. -
  • -
-
-

- Ceph stores data as objects within logical storage pools. A Ceph cluster can have multiple pools, each tuned to different performance or capacity use cases. In order to efficiently scale and handle rebalancing and recovery, Ceph shards the pools into placement groups (PGs). The CRUSH algorithm defines the placement group for storing an object and thereafter calculates which Ceph OSDs should store the placement group. -

-
+
    +
  •
                Cluster monitors (ceph-mon) that maintain the map of the cluster state (keeping track of active and failed cluster nodes, cluster configuration, and information about data placement) and manage authentication.
  • +
  • + Managers (ceph-mgr) that maintain cluster runtime metrics, enable dashboarding capabilities, and provide an interface to external monitoring systems. +
  • +
  • + Object storage devices (ceph-osd) that store data in the Ceph cluster and handle data replication, erasure coding, recovery, and rebalancing. Conceptually, an OSD can be thought of as a slice of CPU/RAM and the underlying SSD or HDD. +
  • +
  •
                RADOS Gateways (ceph-rgw) that provide object storage APIs (Swift and S3) via HTTP/HTTPS.
  • +
  • + Metadata servers (ceph-mds) that store metadata for the Ceph File System, mapping filenames and directories of the file system to RADOS objects and enabling the use of POSIX semantics to access the files. +
  • +
  • + iSCSI Gateways (ceph-iscsi) that provide iSCSI targets for traditional block storage workloads such as VMware or Windows Server. +
  • +
+
+

+ Ceph stores data as objects within logical storage pools. A Ceph cluster can have multiple pools, each tuned to different performance or capacity use cases. In order to efficiently scale and handle rebalancing and recovery, Ceph shards the pools into placement groups (PGs). The CRUSH algorithm defines the placement group for storing an object and thereafter calculates which Ceph OSDs should store the placement group. +

From 74f2af649fe8bb33c6064b64cfa29ba21c9741b1 Mon Sep 17 00:00:00 2001 From: immortalcodes <21112002mj@gmail.com> Date: Fri, 29 Nov 2024 12:42:41 +0530 Subject: [PATCH 4/4] added alt text --- templates/ceph/shared/_ceph-users.html | 2 +- templates/ceph/what-is-ceph.html | 16 ++++++++-------- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/templates/ceph/shared/_ceph-users.html b/templates/ceph/shared/_ceph-users.html index 76d3f56100c..ec523f1b1a7 100644 --- a/templates/ceph/shared/_ceph-users.html +++ b/templates/ceph/shared/_ceph-users.html @@ -7,7 +7,7 @@

Companies using Ceph

- There are multiple users of Ceph across a broad range of industries, from academia to telecommunications and cloud service providers. Ceph is particularly favoured for its flexibility, scalability, and robustness. + There are multiple users of Ceph across a broad range of industries, from academia to telecommunications and cloud service providers. Ceph is particularly favored for its flexibility, scalability, and robustness.

diff --git a/templates/ceph/what-is-ceph.html b/templates/ceph/what-is-ceph.html index d6a2fc3d9d8..7d0e1bb07fe 100644 --- a/templates/ceph/what-is-ceph.html +++ b/templates/ceph/what-is-ceph.html @@ -50,7 +50,7 @@

Production-worthy Ceph storage

{{ image(url="https://assets.ubuntu.com/v1/3522db7e-ceph-chart-1.png",
-                alt="",
+                alt="Diagram illustrating how Ceph supports storage needs for Applications, VMware, OpenStack and Kubernetes",
                width="1800",
                height="1014",
                hi_def=True,
@@ -80,7 +80,7 @@

What is a Ceph cluster?

{{ image(url="https://assets.ubuntu.com/v1/581ff61d-ceph-chart-2.png",
-                alt="",
+                alt="Diagram showing a sample Ceph cluster consisting of monitors, managers, RADOS Gateways, metadata servers, iSCSI gateways and multiple object storage device nodes",
                width="2748",
                height="1145",
                hi_def=True,
@@ -180,7 +180,7 @@

Community and governance

height="313", hi_def=True, attrs={"class": "p-logo-section__logo"}, - loading="lazy",) | safe + loading="lazy") | safe }}
@@ -200,7 +200,7 @@

Community and governance

height="313", hi_def=True, attrs={"class": "p-logo-section__logo"}, - loading="lazy",) | safe + loading="lazy") | safe }}
@@ -210,7 +210,7 @@

Community and governance

height="313", hi_def=True, attrs={"class": "p-logo-section__logo"}, - loading="lazy",) | safe + loading="lazy") | safe }}
@@ -220,7 +220,7 @@

Community and governance

height="313", hi_def=True, attrs={"class": "p-logo-section__logo"}, - loading="lazy",) | safe + loading="lazy") | safe }}
@@ -230,7 +230,7 @@

Community and governance

height="313", hi_def=True, attrs={"class": "p-logo-section__logo"}, - loading="lazy",) | safe + loading="lazy") | safe }}
@@ -240,7 +240,7 @@

Community and governance

height="313", hi_def=True, attrs={"class": "p-logo-section__logo"}, - loading="lazy",) | safe + loading="lazy") | safe }}