
Ceph user story #359

Open
q3k opened this issue Oct 23, 2024 · 0 comments

q3k (Contributor) commented Oct 23, 2024

As a user, I would like to run Ceph on my Metropolis cluster. I'm fine with either Metropolis providing a high-level API for this (e.g. an operator or a gRPC call), or with a set of manifest files I can base my deployment on.

From a developer's perspective, this probably means we would need:

  1. Some way to provision OSDs, either by deferring to ceph-volume prepare or by replicating that work.
  2. Some way to start OSDs, giving them access to block devices. The OSD pods then either need to be able to run ceph-volume activate, or some automation should give them directory layouts that are immediately usable with ceph-osd.
  3. Some way to define a storage class that is backed by a set of Ceph mons and an FSID.
  4. Some way to define PVCs pointing to that storage class, and an operator that responds to them by actually provisioning and rbd-mapping the backing images (see the sketch after this list).
  5. CephFS support for the above.
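
To make item 4 more concrete, here is a minimal sketch of what the provisioning path could look like. This is not an existing Metropolis API: the `cephClass` type, `provisionRBD` function, pool name, and the choice of shelling out to the `rbd` CLI are all assumptions for illustration; a real operator would more likely use go-ceph bindings and proper cephx auth.

```go
// Hypothetical sketch: given a PVC bound to a Ceph-backed storage class,
// create an RBD image and map it on the node that will run the pod.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// cephClass carries the parameters item 3 asks for: mon addresses and the
// cluster FSID, plus the RBD pool backing the storage class. The FSID would
// typically end up in a generated ceph.conf rather than on the command line.
type cephClass struct {
	Monitors []string // e.g. "10.0.0.1:6789"
	FSID     string
	Pool     string
}

// provisionRBD creates an RBD image sized for the PVC and maps it locally,
// returning the block device path (e.g. /dev/rbd0) to hand to the pod.
func provisionRBD(c cephClass, image string, sizeMB int) (string, error) {
	spec := fmt.Sprintf("%s/%s", c.Pool, image)
	mons := strings.Join(c.Monitors, ",")

	// `rbd create` and `rbd map` are standard Ceph CLI commands; `-m`
	// points them at the monitors from the storage class definition.
	create := exec.Command("rbd", "-m", mons, "create",
		"--size", fmt.Sprintf("%d", sizeMB), spec)
	if out, err := create.CombinedOutput(); err != nil {
		return "", fmt.Errorf("rbd create: %v: %s", err, out)
	}

	mapCmd := exec.Command("rbd", "-m", mons, "map", spec)
	dev, err := mapCmd.Output() // `rbd map` prints the device node it created
	if err != nil {
		return "", fmt.Errorf("rbd map: %v", err)
	}
	return strings.TrimSpace(string(dev)), nil
}

func main() {
	// Placeholder values standing in for whatever the storage class and
	// operator wiring end up looking like.
	class := cephClass{
		Monitors: []string{"10.0.0.1:6789"},
		FSID:     "00000000-0000-0000-0000-000000000000",
		Pool:     "metropolis-rbd",
	}
	dev, err := provisionRBD(class, "pvc-example", 1024)
	if err != nil {
		fmt.Println("provisioning failed:", err)
		return
	}
	fmt.Println("mapped block device:", dev)
}
```

The main design question this exposes is where the mons/FSID/pool parameters live: in a Metropolis-level API object, or purely in storage class parameters that a generic operator consumes.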

Tangential discussion: can we make Ceph fast? See discussion about capacitors, transactions, queues, flushing...
