Very high memory consumption #47

Open
claudio-benfatto opened this issue May 26, 2017 · 3 comments

@claudio-benfatto

Hello there,

We are using carbonzipper 0.7.2 in combination with carbonapi (0.8.0) and go-carbon (0.9.1).

Our cluster consists of 3 nodes proxied by carbon-c-relay.

We recently upgraded from an old version of the stack, and since then we have been seeing a very high memory consumption pattern:

pmap shows a total of 44201100K (roughly 42 GiB), and top shows around 25% memory usage on a very beefy server.

This is our current configuration:

carbonzipper

maxProcs: 16

timeouts:
    global: "10s"
    afterStarted: "2s"

concurrencyLimit: 0
maxIdleConnsPerHost: 100
expireDelaySec: 10

carbonapi

concurency: 20
cache:
   type: "mem"
   size_mb: 1024
   defaultTimeoutSec: 60

cpus: 2
tz: ""

sendGlobsAsIs: true

maxBatchSize: 1000

Any suggestions on how to best approach the problem?

Many thanks!

@dgryski
Member

dgryski commented May 26, 2017

Reducing memory consumption is one of the things we're working on at the moment. Reducing maxBatchSize will help, as that turns some very large requests that fetch a huge amount of data into a set of smaller requests that can be garbage collected more easily, instead of producing a single huge allocation spike.
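
For illustration, the change would only touch the relevant line of the carbonapi config shown above; 100 is just a placeholder to show the direction of the change, the right value depends on your query patterns:

maxBatchSize: 100   # was 1000; smaller batches keep each upstream request, and its allocations, smaller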

@claudio-benfatto
Author

Thanks for the prompt answer @dgryski, we'll tune the settings and see if that helps.

Just one question: is there any particular reason why the previous versions were not showing this issue? Or at least not to this magnitude...

Has the caching strategy changed for the carbonzipper process?

Thanks again!

@dgryski
Member

dgryski commented May 26, 2017

The sendGlobsAsIs and maxBatchSize options are new and still subject to tuning. Sending globs as-is helps query latency in the majority of cases, but the downside is that it can occasionally massively increase memory usage :/
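
If lowering maxBatchSize alone is not enough, a possible fallback is to turn off the glob pass-through entirely in the carbonapi config, trading the latency win described above for lower memory pressure; whether that trade-off is worth it depends on your query patterns:

sendGlobsAsIs: false   # expand globs in carbonapi and fetch the matching metrics in maxBatchSize-sized batches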
