Describe the bug
As a first-timer, I tried the OpenAI instrumentation and sent a trace to a local collector (using Ollama as the backend). Then I compared the output with the LLM semantic conventions defined by OTel. I noticed some incompatibilities, and some attributes are not yet defined.
compatible:
none
incompatible:
kind=internal (should be client)
name=ChatCompletion (should be 'chat codegemma:2b-code')
attributes['llm.input_messages.0.message.content']='<|fim_prefix|>def hello_world():<|fim_suffix|><|fim_middle|>' (should be the event attribute gen_ai.prompt)
attributes['llm.output_messages.0.message.content']='print("hello world")' (should be the event attribute gen_ai.completion)
attributes['llm.token_count.prompt']=24 (should be 'gen_ai.usage.input_tokens')
attributes['llm.token_count.completion']=11 (should be 'gen_ai.usage.output_tokens')
attributes['llm.model_name']='codegemma:2b-code' (should be the span attribute 'gen_ai.request.model')
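For contrast, here is a minimal hand-rolled sketch of what the same span could look like under the gen_ai conventions. The event names (gen_ai.content.prompt, gen_ai.content.completion) are my reading of the experimental spec, and the values are copied from the collector output above, so treat this as illustrative rather than what openinference emits:

from opentelemetry import trace
from opentelemetry.trace import SpanKind

tracer = trace.get_tracer('example')


def record_chat_span():
    # Span kind CLIENT and name 'chat <model>' per the gen_ai conventions.
    with tracer.start_as_current_span('chat codegemma:2b-code', kind=SpanKind.CLIENT) as span:
        span.set_attribute('gen_ai.request.model', 'codegemma:2b-code')
        span.set_attribute('gen_ai.usage.input_tokens', 24)
        span.set_attribute('gen_ai.usage.output_tokens', 11)
        # Prompt and completion are recorded as events, not span attributes.
        span.add_event('gen_ai.content.prompt',
                       {'gen_ai.prompt': '<|fim_prefix|>def hello_world():<|fim_suffix|><|fim_middle|>'})
        span.add_event('gen_ai.content.completion',
                       {'gen_ai.completion': 'print("hello world")'})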
To Reproduce
You can use a program like this:
import os

from openai import OpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor


def initialize_tracer():
    # Set the service name such that it is different from other experiments
    resource = Resource(attributes={'service.name': 'openinference-python-ollama'})
    trace.set_tracer_provider(TracerProvider(resource=resource))
    # Default the standard ENV variable to localhost
    otlp_endpoint = os.getenv('OTEL_EXPORTER_OTLP_TRACES_ENDPOINT', 'http://localhost:4318/v1/traces')
    otlp_exporter = OTLPSpanExporter(endpoint=otlp_endpoint)
    # Don't batch spans, as this is a demo
    trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(otlp_exporter))


def instrument_openai():
    OpenAIInstrumentor().instrument()


def chat_with_ollama():
    ollama_host = os.getenv('OLLAMA_HOST', 'localhost')
    # Use the OpenAI endpoint, not the Ollama API.
    base_url = 'http://' + ollama_host + ':11434/v1'
    client = OpenAI(base_url=base_url, api_key='unused')
    messages = [
        {
            'role': 'user',
            'content': '<|fim_prefix|>def hello_world():<|fim_suffix|><|fim_middle|>',
        },
    ]
    chat_completion = client.chat.completions.create(model='codegemma:2b-code', messages=messages)
    print(chat_completion.choices[0].message.content)


def main():
    initialize_tracer()
    instrument_openai()
    chat_with_ollama()


if __name__ == '__main__':
    main()
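If you don't have a collector running, one way to see the same attributes is to swap in a ConsoleSpanExporter, which prints finished spans to stdout. This is a minimal variant of initialize_tracer above, not part of the original report:

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor


def initialize_console_tracer():
    # Print spans to stdout instead of exporting over OTLP.
    resource = Resource(attributes={'service.name': 'openinference-python-ollama'})
    trace.set_tracer_provider(TracerProvider(resource=resource))
    trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))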
Expected behavior
I would expect the semantics to extend, not clash with, the OTel LLM ones. Acknowledged, this is a moving target, as the OTel ones change frequently.
Screenshots
Example collector log
Additional context
The OTel semantics are defined by "Semantic Conventions: LLM", and the folks making changes there are frequently on the Slack channel, in case you have any questions. I personally haven't yet made any changes to the LLM semantics.
https://github.com/open-telemetry/community?tab=readme-ov-file#specification-sigs
Hey @codefromthecrypt, thanks for this and good callout. This project actually pre-dates the experimental gen_ai conventions, and as such you are right, they do not align. We are planning to get involved with the working groups making progress there and to reconcile the differences with our conventions. Thanks for the very helpful links - appreciate it.