
Topic Modeling with Llama 2 - Example Prompt and Few-Shot Learning? #1609

Answered by MaartenGr
linxule asked this question in Q&A

Alright, it seems there are snippets of code missing here and there, but I think I get the general gist of what you are trying to achieve. This might simply be a result of the prompt itself. With the one-shot approach here, the LLM may be "overfitting" on that single example: when it does not know a good response, or suffers from the "lost in the middle" problem, it may default to what it has seen first, which is the example.

Instead, I can recommend the following approach with Zephyr, which will be in the documentation soon. The prompt here might be of use to you, but if you want to use it with Llama 2, make sure to use Llama 2's chat template instead.
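
As a rough illustration of that last point, a Llama 2 chat-formatted prompt for topic labelling could look roughly like the sketch below. The instruction wording is an assumption on my part, not taken from the documentation; only the `[DOCUMENTS]` and `[KEYWORDS]` placeholders are BERTopic's own.

```python
# Illustrative Llama 2 chat-style prompt; the system/user text is an assumption,
# while [DOCUMENTS] and [KEYWORDS] are the placeholders BERTopic fills in per topic.
llama2_prompt = """[INST] <<SYS>>
You are a helpful assistant that labels topics based on keywords and documents.
<</SYS>>

I have a topic that contains the following documents:
[DOCUMENTS]

The topic is described by the following keywords: '[KEYWORDS]'.

Based on the information above, give a short label for this topic. [/INST]"""
```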

Zephyr (Mistral 7B)

We can …
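
The answer is truncated above. As a minimal sketch of how the Zephyr approach could be wired into BERTopic's `TextGeneration` representation model, something like the following should work; the model checkpoint, prompt wording, and generation settings are assumptions for illustration, not the exact code from the documentation.

```python
from transformers import pipeline
from bertopic import BERTopic
from bertopic.representation import TextGeneration

# Zephyr-style chat prompt; the wording is illustrative, while [DOCUMENTS] and
# [KEYWORDS] are BERTopic's placeholders for each topic's representative
# documents and keywords.
prompt = """<|system|>You are a helpful assistant that labels topics.</s>
<|user|>
I have a topic that contains the following documents:
[DOCUMENTS]

The topic is described by the following keywords: '[KEYWORDS]'.

Give a short label for this topic.</s>
<|assistant|>"""

# Assumed checkpoint name; any Zephyr/Mistral-style chat model served through
# the transformers text-generation pipeline should work similarly.
generator = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")

# The LLM is only used to label the topics found by the regular BERTopic pipeline.
representation_model = TextGeneration(
    generator,
    prompt=prompt,
    pipeline_kwargs={"max_new_tokens": 50},
)

topic_model = BERTopic(representation_model=representation_model)
topics, probs = topic_model.fit_transform(docs)  # docs: your list of documents
```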

Answer selected by linxule