
added system prompt for openai #2145

Open · wants to merge 3 commits into master
Conversation


@Leo-LiHao Leo-LiHao commented Sep 12, 2024

What does this PR do?

Add system prompt for OpenAI Representation model, see Issue #2146
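As a rough illustration of the change described here, a system prompt for a chat-completions model is typically supplied as an extra `"system"` turn ahead of the user prompt. The helper name, parameter name, and default prompt text below are illustrative assumptions, not the exact code in this PR:

```python
# Sketch (names and default text are assumptions, not the PR's code):
# combine a configurable system prompt with the user prompt into the
# messages list that chat-completions APIs expect.
DEFAULT_SYSTEM_PROMPT = "You are a helpful assistant that writes short topic labels."

def build_messages(prompt: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> list:
    """Wrap a system prompt and a user prompt into chat messages."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": prompt},
    ]

# The messages would then be sent to the API, e.g.:
# client.chat.completions.create(model="gpt-4o-mini", messages=build_messages(prompt))
```

Keeping the system prompt as a separate, overridable argument lets users tune model behavior without editing the user-facing prompt template.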

Before submitting

  • This PR fixes a typo or improves the docs (if yes, ignore all other checks!).
  • Did you read the contributor guideline?
  • Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes (if applicable)?
  • Did you write any new necessary tests?

@MaartenGr (Owner) left a comment:

Thanks for updating these two models! I left a small comment.

Other than that, I believe we will need to look at the following:

With HuggingFace, we might need to change the formatting such that we use messages instead:

messages = [
    {
        "role": "system",
        "content": MY_SYSTEM_PROMPT,
    },
    {"role": "user", "content": prompt},
]

For LangChain, I'm okay with skipping this for now since it uses a more complex structure and the API here might need to be updated (see https://python.langchain.com/v0.1/docs/modules/model_io/chat/quick_start/#messages-in---message-out).

For llama-cpp-python, it seems straightforward and we should use .create_chat_completion instead of directly calling the model.
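The llama-cpp-python suggestion above can be sketched as a small helper; the function name, default system prompt, and response parsing are assumptions for illustration, not the merged code:

```python
# Sketch (assumed helper, not the merged code): call llama-cpp-python's
# .create_chat_completion instead of invoking the model directly, so the
# system prompt is applied through the model's chat template.
def label_topic(llm, prompt, system_prompt="You write short topic labels."):
    """Ask the model for a topic label via create_chat_completion."""
    response = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
        temperature=0.0,  # keep labels as deterministic as possible
    )
    # llama-cpp-python returns an OpenAI-style response dictionary.
    return response["choices"][0]["message"]["content"].strip()
```

Because the helper only relies on the OpenAI-style response shape, any object exposing `create_chat_completion` can stand in for a real `Llama` instance in tests.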

Topic name:"""
Provide the topic name directly without any explanation."""
@MaartenGr (Owner) commented:

Why did you make this change? In my experience, when the model is given a prefix, there is no need to mention that it should provide the topic name without any explanation.

@Leo-LiHao (Author) commented:

Based on my experiments, the prefix prompt works well for completion models but not for (small) chat models. You can try different prompts locally to see the difference, and feel free to change or iterate on the prompt.

@Leo-LiHao (Author)

@MaartenGr I updated the code for llama-cpp.
