refactor/move-prompts-to-jinja-templates #2164
base: main
Conversation
{follow_up_questions_prompt}
{injected_prompt}

"You are an intelligent assistant helping analyze the Annual Financial Report of Contoso Ltd., The documents contain text, graphs, tables and images. "
This looks incorrect? I think you can remove lines 12 and onwards?
You're absolutely right. I originally pasted this here to compare the prompts but forgot to remove it afterward.
@@ -0,0 +1,18 @@
[
  {
Interesting, I hadn't considered making the few shots a Jinja as well. Do you imagine passing template variables into it?
This makes more sense indeed. I converted it to .jinja initially to stay consistent with the file extensions in the prompts folder. I've now updated these to .json.
Use 'you' to refer to the individual asking the questions even if they ask with 'I'.
Answer the following question using only the data provided in the sources below.
Each source has a name followed by colon and the actual information, always include the source name for each fact you use in the response.
If you cannot answer using the sources below, say you don't know. Use below example to answer
If you install pre-commit, it should fix the new lines. See CONTRIBUTING.md for installation instructions.
I'm not entirely sure what you mean here. Could you clarify?
I notice that several files don't have newlines at the ends of the file, which usually means that the pre-commit hasn't run, as the pre-commit hooks fix that issue (and others). There are instructions for installing pre-commit hooks here:
https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/CONTRIBUTING.md#setting-up-the-development-environment
""" | ||
def __init__(self): | ||
self._initialize_templates() | ||
self.NO_RESPONSE = "0" |
NO_RESPONSE can stay as a class variable, no? There's no particular need for it to be an instance variable.
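A minimal sketch of the distinction being discussed (the class name here is hypothetical): a class variable is defined once on the class body and shared by all instances, so a constant sentinel like this needs no per-instance assignment in __init__:

```python
class PromptManager:
    # Shared sentinel; lives on the class, not on each instance
    NO_RESPONSE = "0"

    def __init__(self):
        # Only genuinely per-instance state belongs here
        self.history = []

# The sentinel is reachable without an instance, and shared across instances
assert PromptManager.NO_RESPONSE == "0"
assert PromptManager().NO_RESPONSE is PromptManager.NO_RESPONSE
```

Instances still see the attribute through normal lookup, so moving it to the class body changes nothing for callers that read self.NO_RESPONSE.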
You're right, I reverted the change.
self.system_chat_template = self.env.get_template('system_message.jinja').render()
self.few_shots = self._process_few_shots()

def _process_few_shots(self) -> Any:
Given the amount of processing here, I'd be inclined to leave the few shots as .json, but inside the prompts folder, and just load with json package.
Same here, I’ve converted the few_shots examples to JSON.
Thank you! My main suggestion is that we keep the few shots as JSON files, while still keeping the nice organization into a separate folder.
I’ve incorporated the above comments in the latest commit. Thanks for taking the time to review!
@jeannotdamoiseaux Thanks for the changes! CCing recent contributors:
My suggestion would be to move all prompts to a separate folder using Jinja files first, to create a clearer overview and better structure. Once that’s in place, we could consider using Prompty later if it aligns well with the project's needs. For now, though, I don’t see it as something that would significantly push my project forward.
Thanks for the PR! Based on the high Package Health Score of Jinja2, I favor sticking with it over Prompty, which has a lower score (details here). That said, I’m open to hearing more about specific cases where Prompty might offer clear advantages.
I understand that Jinja2 and Prompty are being compared as approaches to organizing prompts. Jinja2 has a proven track record as a template engine for Flask, Django, etc., and I think it is a very good option. This decision is very difficult: Jinja2 is sufficient to achieve the goal of organizing the current prompts, and if I were forced to make this choice for my own production project, I would definitely choose Jinja2. However, considering that this project is "Azure Samples", I think it is better to be adventurous, so I will vote for Prompty.
I suggest Prompty because it specializes in AI prompt creation and management. It offers structured workflows and better visibility, making debugging and evaluation (prompt evaluation and A/B testing) easier. While migrating to it might take some effort, its focus on flexibility and portability makes it a great fit for our AI needs. Jinja2 was designed for general templating, but Prompty is better designed for AI-specific tasks. Thanks.
Purpose
This pull request addresses the request made by @pamelafox in #2123 to refactor the code for improved prompt management.
All prompts are now stored in a separate folder using .jinja files. This makes it easier to manage and update prompts across the application.
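As a rough illustration of the pattern (the folder path and variable names below are assumptions, not the exact PR code), loading a prompt from the new folder looks like:

```python
from jinja2 import Environment, FileSystemLoader

# Hypothetical folder layout: one .jinja file per prompt
env = Environment(loader=FileSystemLoader("app/backend/approaches/prompts"))

def render_system_prompt(injected_prompt: str = "",
                         follow_up_questions_prompt: str = "") -> str:
    # Editing a prompt now means editing a template file, not Python source
    template = env.get_template("system_message.jinja")
    return template.render(
        injected_prompt=injected_prompt,
        follow_up_questions_prompt=follow_up_questions_prompt,
    )
```

Because templates are resolved by name through the loader, prompts can be reviewed and updated without touching the approach classes that render them.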
Does this introduce a breaking change?
Does this require changes to learn.microsoft.com docs?
This repository is referenced by this tutorial
which includes deployment, settings and usage instructions. If text or screenshot need to change in the tutorial,
check the box below and notify the tutorial author. A Microsoft employee can do this for you if you're an external contributor.
Type of change
Code quality checklist
See CONTRIBUTING.md for more details.
- I ran the tests locally (python -m pytest).
- I ran python -m pytest --cov to verify 100% coverage of added lines
- I ran python -m mypy to check for type errors
- I ran ruff and black manually on my code.

Additional Notes
This refactoring aims to improve the overall code structure without changing functionality. It should make the codebase more maintainable and easier to work with for future development.