
Use of caching in carbonzipper together with sendGlobsAsIs in carbonapi leads to "timeout waiting for more responses" #53

Open
antonsoroko opened this issue Sep 15, 2017 · 1 comment


@antonsoroko

Hi!

We have faced the following issue:
if we enable sendGlobsAsIs in carbonapi and set expireDelaySec > 0 in carbonzipper, then a request with a relatively large number of metrics (about 65) fails in carbonzipper:

2017-09-11T16:16:56.296Z        INFO    access  request served  {"handler": "find", "format": "protobuf", "target": "dc1.collectd.deployment.some_server.*.cpu.*.percent.*", "carbonzipper_uuid": "3af6bea0-b0d6-4a06-92ae-b8bef43d4058", "carbonapi_uuid": "40f0310b-95e6-4ffc-8a56-42d5ead76677", "http_code": 200, "runtime_seconds": 0.003241923}

2017-09-11T16:17:01.315Z        WARN    render  timeout waiting for more responses      {"memory_usage_bytes": 0, "handler": "render", "carbonzipper_uuid": "ae19164a-3f97-48a5-8cc7-3807818944f1", "carbonapi_uuid": "40f0310b-95e6-4ffc-8a56-42d5ead76677", "handler": "multiGet", "uri": "/render/?format=protobuf&from=1504973816&target=dc1.collectd.deployment.some_server.%2A.cpu.%2A.percent.%2A&until=1505146616", "timeouted_servers": ["http://192.168.113.70:8083", "http://192.168.113.85:8083", ...], "answers_from_servers": []}

2017-09-11T16:17:01.315Z        ERROR   access  request failed  {"handler": "render", "carbonzipper_uuid": "ae19164a-3f97-48a5-8cc7-3807818944f1", "carbonapi_uuid": "40f0310b-95e6-4ffc-8a56-42d5ead76677", "format": "protobuf", "target": "dc1.collectd.deployment.some_server.*.cpu.*.percent.*", "memory_usage_bytes": 0, "reason": "No responses fetched from upstream", "http_code": 500, "runtime_seconds": 5.018973257}

2017-09-11T16:17:01.315Z        WARN    slow    Slow Request    {"time": 5.019006852, "url": "/render/?format=protobuf&from=1504973816&target=dc1.collectd.deployment.some_server.%2A.cpu.%2A.percent.%2A&until=1505146616"}

...
2017-09-11T16:17:01.335Z        ERROR   render  error reading body      {"memory_usage_bytes": 0, "handler": "render", "carbonzipper_uuid": "ae19164a-3f97-48a5-8cc7-3807818944f1", "carbonapi_uuid": "40f0310b-95e6-4ffc-8a56-42d5ead76677", "handler": "multiGet", "handler": "singleGet", "query": "http://192.168.113.85:8083//render/?format=protobuf&from=1504973816&target=dc1.collectd.deployment.some_server.%2A.cpu.%2A.percent.%2A&until=1505146616", "error": "context canceled"}
...

(the afterStarted timeout is "5s" in this example)

If the number of metrics is greater than maxBatchSize (in carbonapi), everything is fine, and this makes sense.
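
For reference, here is a minimal sketch of the relevant settings in the failing setup. The key names (sendGlobsAsIs, maxBatchSize, expireDelaySec, afterStarted) are the ones mentioned above; the file layout and the example values other than afterStarted are assumptions, so adjust to your actual config format:

```yaml
# carbonapi config (layout assumed)
sendGlobsAsIs: true      # pass the whole glob to carbonzipper instead of expanding it first
maxBatchSize: 100        # assumed value; globs expanding to more metrics than this are split and work fine

# carbonzipper config (layout assumed)
expireDelaySec: 60       # assumed value; any value > 0 enables the find cache and triggers the timeouts above
timeouts:
  afterStarted: "5s"     # matches the ~5s runtime_seconds in the failing render request
```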

If we set expireDelaySec: 0 (with sendGlobsAsIs enabled), then carbonzipper works fine:

2017-09-11T16:17:25.266Z        INFO    access  request served  {"handler": "find", "format": "protobuf", "target": "dc1.collectd.deployment.some_server.*.cpu.*.percent.*", "carbonzipper_uuid": "cdf92d40-0e09-4630-b83d-c21234f6087f", "carbonapi_uuid": "0882c166-6878-4cd6-a670-eeadd962fc64", "http_code": 200, "runtime_seconds": 0.002556903}

2017-09-11T16:17:25.819Z        INFO    access  request served  {"handler": "render", "carbonzipper_uuid": "698b8799-8f99-4061-a4d4-1e4f86c0a70f", "carbonapi_uuid": "0882c166-6878-4cd6-a670-eeadd962fc64", "format": "protobuf", "target": "dc1.collectd.deployment.some_server.*.cpu.*.percent.*", "memory_usage_bytes": 9959448, "http_code": 200, "runtime_seconds": 0.552167464}

And as you can see, runtime_seconds is relatively small.
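
For comparison, the only change in the working setup relative to the sketch above is the carbonzipper cache TTL:

```yaml
# carbonzipper config (layout assumed)
expireDelaySec: 0   # find cache disabled; the same render request now completes in ~0.55s
```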

Could you please help with this issue?

Thanks in advance.

@dreddynarahari

Is there any update on this issue?
