feat(api): openai compliant annotations with additional metadata #870
Conversation
gphorvath commented Aug 2, 2024 (edited)
- Adds vector_content IDs to the metadata returned by the runs endpoints
- Adds an endpoint under the leapfrogai namespace for querying vector_content, which contains the chunk text
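For context, a minimal sketch of how a client might consume these two additions together: read the vector_content IDs off the run metadata, then resolve each to its chunk text. The route path, field names, and base URL are assumptions for illustration, not the PR's actual contract.

```python
# Hypothetical client-side sketch; endpoint path and response fields are
# assumptions, not the actual API surface added by this PR.
import requests

BASE_URL = "http://localhost:8080"  # assumed LeapfrogAI API address
HEADERS = {"Authorization": "Bearer <api-key>"}

def get_chunk_text(vector_content_id: str) -> str:
    # Assumed route shape for the new leapfrogai-namespace endpoint.
    resp = requests.get(
        f"{BASE_URL}/leapfrogai/v1/vector_content/{vector_content_id}",
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response field
```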
The annotations were not quite working with a simplistic test file that just said "Testing", so I augmented it with some real text data and added a correlated question to the test file. Perhaps there's better data we can use for this purpose.
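As one way to build such a fixture (the file contents and question below are illustrative, not the PR's actual test data): the file needs enough substantive text for the retrieval pipeline to cite, and the question should only be answerable from that text.

```python
# Illustrative fixture: a test file with citable content plus a question
# whose answer lives only in that content. Values are made up.
TEST_FILE_TEXT = (
    "LeapfrogAI is a self-hosted AI platform. "
    "Its default embedding chunk size is 512 tokens."
)
TEST_QUESTION = "What is the default embedding chunk size?"

with open("test_annotations.txt", "w") as f:
    f.write(TEST_FILE_TEXT)
```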
I created a conformance test method that evaluates the same feature with separate calls to OpenAI and LeapfrogAI. This method evaluates the Annotations returned from an Assistant query, but inherently also exercises the features that build the Assistant in the process.
We should set these up as separate tests to isolate errors. Presently, the OpenAI call succeeds and returns the proper type structure. The LeapfrogAI call fails with a number of issues related to the structure of the message object in the parameter. This fails on the call to
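This is not the code from this PR, but a rough sketch of a conformance check of this shape, using the openai Python client's Assistants API pointed at both backends. Base URLs, API keys, and model names are placeholders, and file upload plus vector store setup is elided for brevity.

```python
import pytest
from openai import OpenAI

# Placeholder clients: identical code pointed at OpenAI and at a local
# LeapfrogAI deployment. URLs, keys, and model names are assumptions.
TARGETS = {
    "openai": lambda: OpenAI(),  # reads OPENAI_API_KEY from the environment
    "leapfrogai": lambda: OpenAI(
        base_url="http://localhost:8080/openai/v1", api_key="unused"
    ),
}

@pytest.mark.parametrize("target", TARGETS)
def test_assistant_annotation_conformance(target):
    client = TARGETS[target]()
    # Building the assistant, thread, and run is inherently exercised too.
    # (File upload and vector store attachment are elided for brevity.)
    assistant = client.beta.assistants.create(
        model="gpt-4o-mini",  # assumed model name
        instructions="Answer from the attached file and cite your source.",
        tools=[{"type": "file_search"}],
    )
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content="What is the default embedding chunk size?",
    )
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=assistant.id
    )
    assert run.status == "completed"
    reply = client.beta.threads.messages.list(thread_id=thread.id).data[0]
    annotations = reply.content[0].text.annotations
    # Conformance: every annotation carries an OpenAI-defined type.
    assert all(a.type in ("file_citation", "file_path") for a in annotations)
```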
I have moved conformance testing for individual components to another PR and removed that code from here. What's left should be a more minimal test focusing on the conformance of file annotations. Some issues identified via testing are noted below, with a sketch of the minimal check after them:
This is an issue with runs, not file annotations.
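For the minimal file-annotation test that remains, one approach is to round-trip each returned annotation through the OpenAI client's own Pydantic models, so any schema drift fails loudly. A sketch, assuming openai-python v1.x; the helper name is mine, not the PR's.

```python
from openai.types.beta.threads import FileCitationAnnotation, FilePathAnnotation

def assert_openai_compliant(annotations) -> None:
    # Hypothetical helper: validate each annotation against OpenAI's own
    # models so any schema mismatch raises a pydantic ValidationError.
    for annotation in annotations:
        data = annotation.model_dump()
        if data.get("type") == "file_citation":
            FileCitationAnnotation.model_validate(data)
        elif data.get("type") == "file_path":
            FilePathAnnotation.model_validate(data)
        else:
            raise AssertionError(f"unexpected annotation type: {data.get('type')}")
```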
TODO:
…-featapi-return-more-annotation-details
Migrate Request types for run.create methods to new types module
…-featapi-return-more-annotation-details