
Custom Anthropic tokens; error handling; various bugfixes and refactorings #3450

Merged · 12 commits · Nov 28, 2024

Conversation

@berekuk (Collaborator) commented Nov 21, 2024

Custom token field:

[Screenshot: the custom Anthropic token field in settings]

(Doesn't look great; I think it's time to collapse settings into an accordion, but that's out of scope for this PR.)
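
Presumably the token just ends up in the Anthropic SDK client constructor. A minimal sketch of that shape — the helper name and the env-var fallback are assumptions, not the PR's actual code:

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Hypothetical helper: prefer a user-supplied token from settings,
// fall back to the server-wide key from the environment.
function makeAnthropicClient(customApiKey?: string): Anthropic {
  return new Anthropic({
    apiKey: customApiKey ?? process.env.ANTHROPIC_API_KEY,
  });
}
```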


Tooltips for failed steps and error messages:

[Screenshot: tooltip showing an error message on a failed step]

Renamed "Actions" to "Steps", for consistency (I think "Actions" could be confusing for end users, and we already use "Steps" in the sidebar):

[Screenshot: "Actions" renamed to "Steps"]

Weird small feature: an /admin/dev page, available in dev mode, that lets you disable Prisma logs on the running server. It doesn't work great, but it's better than nothing when you want to read AI logs and Prisma logs get in the way:

[Screenshot: the /admin/dev page]
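
For flavor, here's one way such a runtime toggle can work with Prisma's event-based logging. This is a sketch of the general mechanism, not the PR's actual implementation; the flag and setter names are made up:

```typescript
import { PrismaClient } from "@prisma/client";

// Hypothetical module-level flag that the /admin/dev page flips at runtime.
let prismaLogsEnabled = true;
export function setPrismaLogs(enabled: boolean) {
  prismaLogsEnabled = enabled;
}

const prisma = new PrismaClient({
  // Emit query logs as events instead of printing them directly.
  log: [{ emit: "event", level: "query" }],
});

prisma.$on("query", (e) => {
  if (prismaLogsEnabled) {
    console.log(`${e.query} (${e.duration}ms)`);
  }
});
```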

LLMClient changes:

  • extracted OpenAI-specific and Anthropic-specific code into separate classes
  • implemented an LLMError class that makes it possible to distinguish "timeout" errors from "out of credits" errors (rough sketch below)
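
A rough sketch of the LLMError idea; only the class name comes from the PR, while the kind values and the helper are illustrative:

```typescript
// Error kinds the PR description mentions; exact names are assumptions.
type LLMErrorKind = "timeout" | "outOfCredits";

export class LLMError extends Error {
  constructor(
    message: string,
    public kind: LLMErrorKind
  ) {
    super(message);
  }
}

// Callers can then retry timeouts but surface credit problems immediately.
export function isRetryable(e: unknown): boolean {
  return e instanceof LLMError && e.kind === "timeout";
}
```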

Error handling:

  • queryLLM now always returns a completion, or throws when something goes wrong
  • more errors are now treated as critical; afaict this shouldn't prevent any runs that could succeed on retries (see the sketch below)
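
Illustratively, the contract looks roughly like this (hypothetical signature, reusing the LLMError sketch above; callProvider stands in for a provider-specific client and is not a real function in the codebase):

```typescript
declare function callProvider(prompt: string): Promise<string>; // hypothetical

async function queryLLM(prompt: string): Promise<string> {
  try {
    // Either a completion comes back...
    return await callProvider(prompt);
  } catch (e) {
    // ...or the call throws. Timeouts stay retryable; everything else is critical.
    if (e instanceof LLMError && e.kind === "timeout") throw e;
    throw new Error("Critical LLM failure", { cause: e });
  }
}
```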

Streaming, ClientWorkflow and events refactorings:

  • I've removed the stepStarted event; we update the DB on stepAdded events now (a minor difference, but it fixes the issue with the in-progress status)
  • also removed clientWorkflow.currentStep, which can be inferred from the last step; this fixes a bug where we showed the step id instead of the step name in the sidebar (sketch below)
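
A sketch of that inference, with assumed field names:

```typescript
// Assumed shape of ClientWorkflow after the refactoring.
type WorkflowStep = { id: string; name: string };
type ClientWorkflow = { steps: WorkflowStep[] };

// currentStep is no longer stored; it's simply the most recently added step.
function currentStep(workflow: ClientWorkflow): WorkflowStep | undefined {
  return workflow.steps.at(-1);
}

// The sidebar can then render currentStep(workflow)?.name instead of an id.
```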

@berekuk requested a review from OAGr as a code owner on November 21, 2024 at 20:10
changeset-bot commented Nov 21, 2024

⚠️ No Changeset found

Latest commit: 9323a7a

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.


vercel commented Nov 21, 2024

The latest updates on your projects:

Name                 Status      Updated (UTC)
quri-hub             ✅ Ready    Nov 25, 2024, 9:49pm
squiggle-website     ✅ Ready    Nov 25, 2024, 9:49pm

Skipped deployments:

Name                 Status      Updated (UTC)
quri-ui              ⬜️ Ignored  Nov 25, 2024, 9:49pm
squiggle-components  ⬜️ Ignored  Nov 25, 2024, 9:49pm

@OAGr (Contributor) commented Nov 25, 2024

This is failing tests

Review thread on a new file in the diff:

@@ -0,0 +1,97 @@
import Anthropic from "@anthropic-ai/sdk";
OAGr (Contributor): Reasonable move, to split this up.

OAGr (Contributor): That said, this does make it tricky to determine which code in this part was added or changed.

berekuk (Collaborator, Author): IIRC only the LLMError class is new; everything else in LLMClient/ is old.

And even that class is used only for timeout errors (the provider-specific clients don't try to normalize vendor library errors into LLMError instances, which works because everything that isn't an LLMError is treated as critical, and that's what we want for now).
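
In other words, the classification reduces to roughly this (sketch, reusing the LLMError class described above):

```typescript
// Anything that isn't an LLMError is treated as critical and not retried.
function isCritical(e: unknown): boolean {
  return !(e instanceof LLMError);
}
```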

@berekuk (Collaborator, Author) commented Nov 25, 2024

> This is failing tests

Oops, yeah, that's because of f84baa1, which I noticed and fixed but didn't backport here (Vercel started defaulting to Node v22, probably recently).

@berekuk deleted the custom-ai-token branch on November 28, 2024 at 18:47