Output containing the messages that led to prompt fallbacks #195
Comments
Nice idea! LangSmith is a nice platform to debug each interaction with LLMs, but it's limited to LangChain's chain calls, and dialog's fallback actually happens before those calls, when the retriever finds no relevant documents. Currently, saving fallbacks would probably have to happen optionally in the prompt generation step (e.g. the generate_prompt method of
What would the implementation look like from your point of view, @llemonS?
Well, considering that the knowledge base (a .csv file) is used as input, and looking at the project structure, maybe we could create a path dialog/data/output to store the messages that led to prompt fallbacks. That would enable a sort of feedback cycle, letting the user fold those messages back into the input .csv file later on.
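A minimal sketch of what that could look like, assuming a hypothetical dialog/data/output/fallbacks.csv file and a helper called wherever the fallback is triggered (the path and function name are placeholders, not part of dialog today):

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical output location; dialog does not create this path today.
FALLBACK_LOG = Path("dialog/data/output/fallbacks.csv")

def log_fallback(question: str) -> None:
    """Append a message that triggered the fallback, so it can be reviewed
    and eventually added to the knowledge base .csv."""
    FALLBACK_LOG.parent.mkdir(parents=True, exist_ok=True)
    is_new = not FALLBACK_LOG.exists()
    with FALLBACK_LOG.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "question"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), question])
```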
Recently, I've managed to use the fallback in a more LCEL way, making it a runnable component as well, so its trace goes to LangSmith and we get the information @llemonS talked about "for free", along with all the other LangSmith benefits. It may help not only with this issue but also with making the chain fully LCEL adherent. In the first screenshot, the RAG chain example received a question outside its designed context (see the second screenshot), and a chain_router (a Python function with routing rules, in this case checking the number of documents returned by the retriever) guided the chain to a FallbackChain, which imposes the fixed AI fallback message you see in the first screenshot. This can be achieved with a more complex LCEL chain, which I've adapted from the dialog plugin I work with:
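A minimal sketch of that routing pattern, assuming a generic retriever, prompt, and fixed fallback message; all names below are placeholders rather than dialog's actual plugin code:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableBranch, RunnableLambda
from langchain_openai import ChatOpenAI

# Fixed fallback message imposed when routing decides the question is out of context.
FALLBACK_MESSAGE = "Sorry, I can only answer questions about the knowledge base."

def retriever(question: str) -> list:
    # Placeholder: swap in the real retriever, e.g. vectorstore.as_retriever().invoke(question)
    return []

def retrieve(inputs: dict) -> dict:
    # Attach the retrieved documents so the router can inspect them.
    return {**inputs, "context": retriever(inputs["question"])}

def has_documents(inputs: dict) -> bool:
    # chain_router rule: fall back when no relevant documents were found.
    return len(inputs["context"]) > 0

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
rag_chain = prompt | ChatOpenAI() | StrOutputParser()

# FallbackChain equivalent: a runnable returning the fixed message, so the
# fallback shows up in LangSmith traces like any other step of the chain.
fallback_chain = RunnableLambda(lambda _: FALLBACK_MESSAGE)

chain = RunnableLambda(retrieve) | RunnableBranch(
    (has_documents, rag_chain),
    fallback_chain,
)

# chain.invoke({"question": "How do I reset my password?"})
```

Because the fallback is just another runnable in the branch, the questions that hit it are captured in the LangSmith trace alongside the regular rag_chain runs.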
In order to grow the knowledge base for a specific subject over time, it would be interesting if we could collect the messages that led to prompt fallbacks for later analysis. Maybe initially creating an output folder with a .csv file containing those scenarios.