
feat: llama-index instrumentor using callback handler #121

Merged — 17 commits into main, Jan 22, 2024

Conversation

RogerHYang
Contributor

@RogerHYang commented Jan 19, 2024

resolves Arize-ai/phoenix#1957

Based on the callback handler from Phoenix.

@dosubot dosubot bot added the size:XXL This PR changes 1000+ lines, ignoring generated files. label Jan 19, 2024

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.resources import Resource

resource = Resource(attributes={})
tracer_provider = trace_sdk.TracerProvider(resource=resource)
span_exporter = OTLPSpanExporter(endpoint="http://127.0.0.1:6006/v1/traces")
Contributor

Is there a simple way for the user to see the contents of the exported trace or span in this example?

Contributor Author

@RogerHYang Jan 19, 2024

There's the in-memory exporter, but that's kind of advanced usage. On the other hand, the examples are meant to demonstrate Phoenix as a collector.

Contributor

There's no Phoenix instance running as a collector in these examples, though, and they should probably be independent of Phoenix.

Small nit: it would just be nice if there were a simple collector we could run to display the spans as they come in.

Contributor Author

Good point. It's probably worth a group discussion on how to do that.

…ex/src/openinference/instrumentation/llama_index/_callback.py

Co-authored-by: Xander Song <[email protected]>
Contributor

@axiomofjoy left a comment

Great job, @RogerHYang. This was a big one.

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Jan 19, 2024
@axiomofjoy
Contributor

Do we have type checking turned on in the unit tests? That would be nice.
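One way to wire that up would be a strict mypy pass over the package alongside the test run. A hypothetical `pyproject.toml` fragment (section names are standard mypy config; the paths are assumptions about this repo's layout):

```toml
# Hypothetical config: run `mypy` in strict mode over the instrumentor sources.
[tool.mypy]
strict = true
files = ["src/openinference/instrumentation/llama_index"]
```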

Contributor

@anticorrelator left a comment

lgtm!

@RogerHYang RogerHYang merged commit b0734c1 into main Jan 22, 2024
3 checks passed
@RogerHYang RogerHYang deleted the llama-index-instrumentor branch January 22, 2024 17:04
Development

Successfully merging this pull request may close these issues.

[otel] switch to OTEL tracer for llama_index
3 participants