
How to handle osd disk failure? #209

Open
krenakzsolt opened this issue Aug 13, 2015 · 1 comment

Comments

@krenakzsolt

Hi All!

I was thinking about how the cookbook handles disk failures. What would be the operational procedure in case of an OSD disk dying with this cookbook? Does anyone have experience with this? Thanks in advance!

@mdsteveb

Me too; I'd like to hear about this and other operational scenarios that people might have run across in real life.

For this case, my guess is that you would let Ceph remove the OSD from the cluster, remove its entry from the node's osd list, replace the disk, then add a new entry for the new disk in the node's osd list (see the sketch below). Please correct me if I missed something!
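A hedged sketch of the Ceph side of that procedure, assuming a failed osd.3 (the ID, the upstart-style stop command, and the chef-client re-run afterwards are placeholders/assumptions, not something this cookbook prescribes):

```sh
# Mark the failed OSD out; Ceph begins re-replicating its placement groups.
ceph osd out 3

# Stop the daemon if it is still running (init-system dependent;
# this is the upstart form common on Ubuntu at the time).
sudo stop ceph-osd id=3

# Remove the OSD from the CRUSH map, delete its auth key,
# and remove it from the OSD map.
ceph osd crush remove osd.3
ceph auth del osd.3
ceph osd rm 3
```

After that, removing the entry from the node's osd device list, swapping the disk, adding a new entry, and re-running chef-client should let the cookbook prepare and activate the replacement OSD.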

But it would be nice to know whether there is an easy way to do this without going through rebalancing twice, for example.
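One standard way to avoid the double rebalance (a sketch, assuming the disk can be swapped promptly and the replacement OSD comes back under the same ID, which this cookbook may or may not support) is the noout flag:

```sh
# Keep the dead OSD "in" so the cluster does not start backfilling
# away from it while the disk is being replaced.
ceph osd set noout

# ... swap the disk and bring the replacement OSD up ...

# Restore normal marking-out behaviour; data backfills once,
# onto the new OSD.
ceph osd unset noout
```

With noout set the cluster runs degraded (but does not rebalance) until the new OSD joins, so it only does the one backfill onto the replacement disk.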
