Improvements to the experiment tutor/chat interface.
liffiton committed Jun 24, 2024
1 parent f31f969 commit e4376ef
Showing 4 changed files with 80 additions and 40 deletions.
43 changes: 43 additions & 0 deletions src/codehelp/prompts.py
@@ -128,3 +128,46 @@ def make_topics_prompt(language: str, code: str, error: str, issue: str, respons
]

return messages


chat_template_sys = jinja_env.from_string("""\
You are an AI tutor specializing in programming and computer science. Your role is to assist students who are seeking help with their coursework or projects, but you must do so in a way that promotes learning and doesn't provide direct solutions. Here are your guidelines:
1. Always maintain a supportive and encouraging tone.
2. Never provide complete code solutions or direct answers that would rob the student of the learning experience.
3. Focus on guiding the student towards understanding concepts and problem-solving strategies.
4. Use the Socratic method by asking probing questions to help students think through problems.
5. Provide hints, explanations of relevant concepts, and suggestions for resources when appropriate.
6. Encourage good coding practices.
When a student asks a question, follow this process:
1. Analyze the question to identify the core concept or problem the student is struggling with.
2. Consider what foundational knowledge the student might be missing.
3. Think about how you can guide the student towards the solution without giving it away.
4. In your conversation, include:
a. Clarifying questions (as needed)
b. Explanations of relevant concepts
c. Hints or suggestions to guide their thinking
d. Encouragement to attempt the problem themselves
5. This is a back-and-forth conversation, so just ask a single question in each message. Wait for the answer to a given question before asking another.
6. Use markdown formatting, including ` for inline code.
Do not provide direct solutions or complete code snippets. Instead, focus on guiding the student's learning process.
The topic of this chat from the student is: <topic>{{ topic }}</topic>
If the topic is broad and it could take more than one chat session to cover all aspects of it, first ask the student to clarify what, specifically, they are attempting to learn about it.
{% if context %}
Additional context that may be relevant to this chat:
<context>
{{ context }}
</context>
{% endif %}
""")

tutor_monologue = """<internal_monologue>I am a Socratic tutor. I am trying to help the user learn a topic by leading them to understanding, not by telling them things directly. I should check to see how well the user understands each aspect of what I am teaching. But if I just ask them if they understand, they may say yes even if they don't, so I should NEVER ask if they understand something. Instead of asking "does that make sense?", I need to check their understanding by asking them a question that makes them demonstrate understanding. It should be a question for which they can only answer correctly if they understand the concept, and it should not be a question I've already given an answer for myself. If and only if they can apply the knowledge correctly, then I should move on to the next piece of information.</internal_monologue>"""

def make_chat_sys_prompt(topic: str, context: str) -> str:
return chat_template_sys.render(topic=topic, context=context)
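
The conditional `{% if context %}` block in the new template means the `<context>` section is rendered only when context is non-empty. A minimal sketch of that behavior, using plain Python string assembly rather than the project's actual Jinja2 template (the strings here are illustrative stand-ins, abbreviated from the real prompt):

```python
def make_chat_sys_prompt(topic: str, context: str) -> str:
    # Abbreviated stand-in for the full tutor system prompt above.
    prompt = (
        "You are an AI tutor specializing in programming and computer science.\n"
        "Guide the student toward understanding; never give complete solutions.\n"
        f"The topic of this chat from the student is: <topic>{topic}</topic>\n"
    )
    if context:  # mirrors the {% if context %} branch in the template
        prompt += (
            "Additional context that may be relevant to this chat:\n"
            f"<context>\n{context}\n</context>\n"
        )
    return prompt

print(make_chat_sys_prompt("recursion", "CS 101, Python, week 6"))
```

With an empty `context` string, the `<context>` section is omitted entirely, so the model never sees an empty context wrapper.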
30 changes: 21 additions & 9 deletions src/codehelp/templates/tutor_chat_component.html
@@ -8,39 +8,51 @@
<style type="text/css">
.chat_grid {
display: grid;
-    grid-template-columns: 1fr 5fr;
+    grid-template-columns: 1fr 6fr;
row-gap: 0.5em;
}
.chat_role {
padding-right: 0.5em;
margin-right: 1em;
margin-bottom: 0.2em;
padding: 0.5em;
padding-left: 0;
font-style: italic;
font-weight: bold;
text-align: right;
}
.chat_role_assistant {
color: #358;
border-right: solid 3px #47a;
}
.chat_role_user {
color: #834;
border-right: solid 3px #a35;
}
.chat_message {
overflow-x: auto;
max-width: 50em;
padding: 0.5em;
padding-right: 0;
}
.chat_message_user {
border-left: solid 5px #a35;
background: #a351;
}
.chat_message_assistant {
border-left: solid 5px #47a;
}
</style>

<div class="m-3 chat_grid">
{% for message in chat_messages %}
<div class="chat_role chat_role_{{message['role']}}">
-      {{ 'You' if message['role'] == 'user' else 'Helper' }}
+      {{ 'You' if message['role'] == 'user' else 'Tutor' }}
</div>
<div class="content">
<div class="chat_message chat_message_{{message['role']}}">
{{message['content'] | markdown}}
</div>
{% endfor %}
{% if msg_input %}
<div class="chat_role chat_role_user">
You
</div>
<div>
<div class="chat_message chat_message_user" style="padding-right: 0.5em">
<div class="field has-addons">
<div class="control is-expanded">
<textarea class="textarea is-link" rows=2 name="message" placeholder="Send a message. Press enter to send, shift-enter for a new line." autofocus x-bind:disabled="loading" @keydown.enter="if (!$event.shiftKey) {$event.target.form.submit(); loading=true; $event.preventDefault();}"></textarea>
@@ -16,7 +16,14 @@
<div class="container content">
<h1 class="title">{{topic}}</h1>

-  <p><i>[showing context for debugging -- normally hidden]</i> {{context}}</p>
+  {% if context %}
+    <p>
+      <details>
+        <summary style="color: #888;">Context provided to tutor:</summary>
+        {{context | markdown}}
+      </details>
+    </p>
+  {% endif %}

{# debounce on the submit handler so that the form's actual submit fires *before* the form elements are disabled #}
<form action="{{url_for('tutor.new_message')}}" method="post" x-data="{loading: false}" x-on:submit.debounce.10ms="loading = true">
@@ -16,6 +16,8 @@
from openai.types.chat import ChatCompletionMessageParam
from werkzeug.wrappers.response import Response

from . import prompts


class ChatNotFoundError(Exception):
pass
Expand Down Expand Up @@ -184,7 +186,7 @@ def run_chat_round(llm_dict: LLMDict, chat_id: int, message: str|None = None) ->
except (ChatNotFoundError, AccessDeniedError):
return

-    # Add the given message(s) to the chat
+    # Add the new message to the chat
if message is not None:
chat.append({
'role': 'user',
@@ -194,36 +196,12 @@ def run_chat_round(llm_dict: LLMDict, chat_id: int, message: str|None = None) ->
save_chat(chat_id, chat)

# Get a response (completion) from the API using an expanded version of the chat messages
-    # Insert an opening "from" the user and an internal monologue to guide the assistant before generating it's actual response
-    opening_msg = """\
-You are a Socratic tutor for helping me learn about a computer science topic. The topic is given in the previous message.
-If the topic is broad and it could take more than one chat session to cover all aspects of it, please ask me to clarify what, specifically, I'm attempting to learn about it.
-I will not understand a lot of detail at once, so I need you to carefully add a small amount at a time. I don't want you to just tell me how something works directly, but rather start by asking me about what I do know and prompting me from there to help me develop my understanding. Before moving on, always ask me to answer a question or solve a problem with these characteristics:
-- Answering correctly requires understanding the current topic well.
-- The answer is not found in what you have told me.
-- I can reasonably be expected to answer correctly given what I seem to know so far.
-"""
-    context_msg = f"I have this additional context about teaching the user this topic:\n\n{context}"
-    monologue = """[Internal monologue] I am a Socratic tutor. I am trying to help the user learn a topic by leading them to understanding, not by telling them things directly. I should check to see how well the user understands each aspect of what I am teaching. But if I just ask them if they understand, they will say yes even if they don't, so I should NEVER ask if they understand something. Instead of asking "does that make sense?", I need to check their understanding by asking them a question that makes them demonstrate understanding. It should be a question for which they can only answer correctly if they understand the concept, and it should not be a question I've already given an answer for myself. If and only if they can apply the knowledge correctly, then I should move on to the next piece of information.
-I can use Markdown formatting in my responses."""
-
-    expanded_chat : list[ChatCompletionMessageParam] = []
-
-    expanded_chat.extend([
-        {'role': 'user', 'content': topic},
-        {'role': 'user', 'content': opening_msg},
-    ])
-
-    if context:
-        expanded_chat.append({'role': 'assistant', 'content': context_msg})
-
-    expanded_chat.extend([
+    # Insert a system prompt beforehand and an internal monologue after to guide the assistant
+    expanded_chat : list[ChatCompletionMessageParam] = [
+        {'role': 'system', 'content': prompts.make_chat_sys_prompt(topic, context)},
         *chat,  # chat is a list; expand it here with *
-        {'role': 'assistant', 'content': monologue},
-    ])
+        {'role': 'assistant', 'content': prompts.tutor_monologue},
+    ]

response_obj, response_txt = get_response(llm_dict, expanded_chat)
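
The new assembly pattern — system prompt first, the saved chat history in the middle, and a trailing assistant message that steers the next completion — can be sketched in isolation. `make_chat_sys_prompt` and `tutor_monologue` below are hypothetical stand-ins for the real helpers in `prompts.py`:

```python
def make_chat_sys_prompt(topic: str, context: str) -> str:
    # Stand-in for the Jinja2-rendered system prompt.
    return f"You are a tutor. Topic: {topic}. Context: {context or 'none'}."

# Stand-in for the internal-monologue string in prompts.py.
tutor_monologue = "<internal_monologue>I am a Socratic tutor...</internal_monologue>"

# The chat history as stored in the database: only real user/assistant turns.
chat = [
    {'role': 'user', 'content': "What is a base case in recursion?"},
]

# The expanded list actually sent to the completion API.
expanded_chat = [
    {'role': 'system', 'content': make_chat_sys_prompt("recursion", "")},
    *chat,  # unpack the saved history in order
    {'role': 'assistant', 'content': tutor_monologue},
]

print([m['role'] for m in expanded_chat])  # roles in order
```

Because the monologue is appended as an assistant message rather than stored in the chat, the model treats it as its own prior reasoning when generating the next turn, while the saved history stays clean.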
