pgremapper performance limit? #52
Comments
Were any PGs unclean at the time? pgremapper will issue quite a few extra commands in that case, and would be limited in performance by command round-tripping. Increasing […] If you run with […]
It was thinking. Everything I said concerns not what the […]
Hmm, OK. The OSD and PG counts are consistent with some systems we've tested on in the past, and I don't think the rack/host count should affect cancel-backfill. Reviewing the code, I see that we do run the PG calculations in parallel, controlled by the concurrency setting, but depending on how much of the computation time is spent in […]
Oh, interesting, that appears to be spending a bunch of time in garbage collection. Maybe there are things we can do to be more memory-allocation-efficient here.
We would be happy to test a patch if it is possible to do something here. According to my observations, the […]
Apologies - this is still on our list, but it hasn't been able to bubble to the top yet.
Hi, I found a cluster where cancel-backfill is not as fast as usual (it took minutes): 1423 osds | 37920 pgs - not super big, but it has 38 rack buckets with 67 hosts.
What I see is that pgremapper uses only ~3 of the 24 CPU cores, which seems suspiciously low for a Go program. Maybe there is some limit in the code or the compiler? Or is this a scalability milestone for the application?