
Implement tracing for pyroscope components #2185

Closed

Conversation


@madaraszg-tulip madaraszg-tulip commented Nov 28, 2024

PR Description

Implement tracing for the pyroscope.scrape, pyroscope.receive_http and pyroscope.write components.

Spans are created for HTTP server and HTTP client calls in these components.
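
For illustration, the client-side pattern looks roughly like this. This is a minimal sketch, not the PR's actual code: the `doRequest` helper, the `pyroscope.write` tracer name, and the `write` span name are all placeholder assumptions.

```go
package example

import (
	"context"
	"net/http"

	"go.opentelemetry.io/otel/trace"
)

// doRequest is a hypothetical helper sketching the client-side pattern:
// start a client span around an outgoing HTTP call and carry the span
// context on the request context.
func doRequest(ctx context.Context, tp trace.TracerProvider, client *http.Client, req *http.Request) (*http.Response, error) {
	ctx, span := tp.Tracer("pyroscope.write").Start(ctx, "write",
		trace.WithSpanKind(trace.SpanKindClient))
	defer span.End()
	return client.Do(req.WithContext(ctx))
}
```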

PR Checklist

  • CHANGELOG.md updated
  • Documentation added
  • Tests updated
  • Config converters updated

@madaraszg-tulip madaraszg-tulip requested review from a team as code owners November 28, 2024 14:11
CLAassistant commented Nov 28, 2024

CLA assistant check
All committers have signed the CLA.

@@ -228,6 +241,9 @@ func (t *scrapeLoop) scrape() {
 	)
 	defer cancel()
 
+	scrapeCtx, span := t.tracer.Tracer("").Start(scrapeCtx, "scrape")
@madaraszg-tulip
Contributor Author

Naming spans is the most difficult thing. This is the only span I create explicitly in this PR, so there's not much naming convention to do, but it would be good to have one.
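
One possible convention (purely illustrative, not something this PR establishes) would be to name the tracer after the component and the span after the operation:

```go
// Illustrative only: tracer named after the component, span named after the operation.
scrapeCtx, span := t.tracer.Tracer("pyroscope.scrape").Start(scrapeCtx, "scrape")
defer span.End()
```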

@madaraszg-tulip
Contributor Author

New spans:

[two screenshots showing the new spans in a trace view]

@madaraszg-tulip
Contributor Author

I'm seeing segfaults during reload related to tracing; I'll need to investigate.

@ptodev
Contributor

Hi, thank you for the PR! I do wonder to what extent traces will be useful here. Tracing is mostly handy when there is a chain of network calls - one service calling another, calling another, calling another, etc. But with the profiling calls there are only two services involved in every call. The only use case I can think of is spanmetrics generation from the traces?

@madaraszg-tulip
Contributor Author

> Hi, thank you for the PR! I do wonder to what extent traces will be useful here. Tracing is mostly handy when there is a chain of network calls - one service calling another, calling another, calling another, etc. But with the profiling calls there are only two services involved in every call. The only use case I can think of is spanmetrics generation from the traces?

I implemented this to debug the root cause of the issue that eventually resulted in #2201. It was very useful to have traces to see what was actually happening. By the way, there are more services involved: the client, Alloy, Traefik, pyroscope-distributor, pyroscope-ingester.

@marcsanmi
Contributor

Hi @madaraszg-tulip,

Thank you for your contribution and for taking the time to enhance observability in Alloy.

My two cents: while the instrumentation could be valuable, I believe we should align with Alloy's observability choices for now. If they decide to adopt otelhttp in the future, we'd be happy to revisit this kind of instrumentation. However, I'll leave the final decision to someone from Alloy.

Thanks again for your contribution!
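
For reference, the otelhttp instrumentation mentioned above typically looks like this. This is a minimal sketch of the upstream library's usage, not code from this PR; the `receive_http` operation name and the `/ingest` route are placeholders.

```go
package main

import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func main() {
	// Client side: wrap the transport so each outgoing request gets a client span.
	client := &http.Client{Transport: otelhttp.NewTransport(http.DefaultTransport)}
	_ = client

	// Server side: wrap a handler so each incoming request gets a server span.
	handler := otelhttp.NewHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}), "receive_http")
	http.Handle("/ingest", handler) // placeholder route
}
```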

@madaraszg-tulip
Contributor Author

Closing for now; might revisit in the future.
