
[bug] spans are not properly nested when using multiple auto instrumentors #1103

Open
Parker-Stafford opened this issue Nov 6, 2024 · 3 comments

@Parker-Stafford
Contributor

Parker-Stafford commented Nov 6, 2024

Describe the bug
When using multiple auto-instrumentors, spans from distinct auto-instrumentors show up in different traces.

To Reproduce

  • use the langchain and openai auto-instrumentors in the same application

Expected behavior

  • the openai spans should be nested in the same trace as the langchain spans

Additional context
See the comments below for details, specifically on the difficulties surrounding langchain.
See comments here: #1062 (comment)

@Parker-Stafford Parker-Stafford added bug Something isn't working triage Issues that require triage labels Nov 6, 2024
@Parker-Stafford Parker-Stafford self-assigned this Nov 6, 2024
@github-project-automation github-project-automation bot moved this to 📘 Todo in phoenix Nov 6, 2024
@anuraaga

anuraaga commented Nov 7, 2024

Hello - I'm working with @codefromthecrypt on using langchain telemetry. I filed this discussion in langchain about allowing context propagation within callback handlers, which I think is required for a proper fix here; feel free to chime in if appropriate.

langchain-ai/langchain#27954

In the meantime, it would be possible to autoinstrument entry points such as BaseLLM to activate context via run_id, but ideally the library would provide a cleaner mechanism for it.
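
For illustration only, a minimal TypeScript sketch of what autoinstrumenting an entry point to activate context keyed by run_id could look like. It assumes the OpenTelemetry JS API; `patchGenerate`, `contextsByRunId`, and the patched signature are hypothetical, not langchain's actual API:

```typescript
import { context, trace, Context } from "@opentelemetry/api";

const tracer = trace.getTracer("entry-point-sketch");

// run_id -> active context, so callback handlers fired later for the
// same run could look up their parent context.
const contextsByRunId = new Map<string, Context>();

// Hypothetical patch for an entry point such as BaseLLM.generate.
function patchGenerate<T>(
  original: (runId: string, ...args: unknown[]) => Promise<T>
) {
  return async function (this: unknown, runId: string, ...args: unknown[]) {
    const span = tracer.startSpan("llm.generate");
    const ctx = trace.setSpan(context.active(), span);
    contextsByRunId.set(runId, ctx);
    try {
      // Everything invoked (or awaited) inside the entry point now sees
      // `ctx` as the active context, so downstream auto-instrumentors
      // parent their spans correctly.
      return await context.with(ctx, () => original.call(this, runId, ...args));
    } finally {
      contextsByRunId.delete(runId);
      span.end();
    }
  };
}
```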

@Parker-Stafford
Contributor Author

> (quoting @anuraaga's comment above in full)

Hey @anuraaga, yeah, I started taking a look at this yesterday. It was trivial to get the nesting correct in openai, so look for that soon re: issue #1061. But langchain, as you mentioned, is not. We currently don't instrument any of the entry points or methods, because there are so many and they change all the time; instead we hook into the callback handlers like you mentioned. This makes context propagation non-trivial. Thanks for starting the discussion with them; I'll follow along in the thread and see if we can get a fix in when they support it.
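
For context, a rough sketch (hypothetical names, assuming the OpenTelemetry JS API) of the callback-handler approach described above, and of why nesting breaks: spans are parented via langchain's run_id/parent_run_id rather than via the active OTel context, so spans from other auto-instrumentors that rely on context.active() can land in a different trace.

```typescript
import { context, trace, Span } from "@opentelemetry/api";

const tracer = trace.getTracer("callback-handler-sketch");

// Hypothetical bookkeeping: langchain run_id -> span.
const spansByRunId = new Map<string, Span>();

function onRunStart(runId: string, parentRunId: string | undefined, name: string) {
  // Nest via parent_run_id when we have it; otherwise fall back to
  // whatever context happens to be active.
  const parentSpan = parentRunId ? spansByRunId.get(parentRunId) : undefined;
  const parentCtx = parentSpan
    ? trace.setSpan(context.active(), parentSpan)
    : context.active();
  const span = tracer.startSpan(name, undefined, parentCtx);
  spansByRunId.set(runId, span);
  // Note: nothing here makes `span` the *active* span, so an openai
  // auto-instrumentor creating spans from context.active() won't see it.
}

function onRunEnd(runId: string) {
  spansByRunId.get(runId)?.end();
  spansByRunId.delete(runId);
}
```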

@Parker-Stafford Parker-Stafford removed the triage Issues that require triage label Nov 7, 2024
@Parker-Stafford
Contributor Author

Note: this is partially resolved in #1121; I'll keep this open to track any changes on the langchain front. For future context: using a proxy, I was able to properly get the context and wrap the original function calls with context.with. However, since generate requests to llms are queued in langchain and not executed until later, the context was lost by the time openai (for example) was actually called.
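
For illustration, a minimal sketch of that proxy experiment, assuming the OpenTelemetry JS API (the wrapper name and its shape are hypothetical). It shows why queuing defeats the approach: context.with only restores the context for the duration of the wrapped call, so work that langchain queues and executes later runs outside it.

```typescript
import { context } from "@opentelemetry/api";

// Hypothetical helper: wrap an intercepted function so it runs under
// the context that was active when the wrapper was created.
function wrapWithCapturedContext<A extends unknown[], R>(
  fn: (...args: A) => R
) {
  const ctx = context.active(); // capture the context at wrap time
  return new Proxy(fn, {
    apply(target, thisArg, args: A) {
      // context.with restores `ctx` only for the duration of this call.
      // If langchain merely enqueues the generate request here and the
      // actual openai call happens later from another task, that call
      // executes after we return, under the previous (lost) context.
      return context.with(ctx, () => Reflect.apply(target, thisArg, args));
    },
  });
}
```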

@Parker-Stafford Parker-Stafford moved this from 🔍 Needs Review to 📘 Todo in phoenix Nov 14, 2024