Prompt Engineering for Developers

Notes on the short course from DeepLearning.AI

General

  • A base LLM predicts the next word based on its text training data
  • An instruction-tuned LLM uses reinforcement learning from human feedback (RLHF) to follow instructions
  • Be aware of model hallucinations

Guidelines

Prompting principles

Write clear and specific instructions

  • Use delimiters (backticks, quotes, tags, etc.) to clearly indicate distinct parts of the input
  • Ask for structured output (HTML, JSON, YAML, etc.)
  • Ask model to check whether conditions are satisfied
  • Provide examples of the output you expect (few-shot prompting)
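
The delimiter and structured-output principles can be sketched as a prompt builder; the review text and JSON keys below are invented for illustration:

```python
# Hypothetical example: wrap the input in explicit delimiters and ask for
# structured (JSON) output so the response is easy to parse.
review = "The product arrived quickly and works as advertised."

prompt = (
    "Summarize the review enclosed in <review> tags.\n"
    'Respond as JSON with the keys "summary" and "sentiment".\n\n'
    f"<review>{review}</review>"
)
```

The tags make it unambiguous which part of the prompt is data rather than instructions, which also helps against prompt injection.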

Give model time to think

  • Specify steps required to complete a task
  • Ask for output in a specified format (not necessarily a data format; a fixed answer template works too)
  • Instruct the model to work out its own solution first and then compare it with the given solution
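
"Specify the steps" can look like the sketch below; the task, step list, and output keys are illustrative, not from the course:

```python
# Hypothetical multi-step prompt: the model is told exactly which steps
# to perform and in which order, plus the final output format.
text = "A long article about prompt engineering..."

prompt = (
    "Perform the following steps on the text below:\n"
    "1. Summarize the text in one sentence.\n"
    "2. Translate the summary into French.\n"
    "3. List each name mentioned in the summary.\n"
    "4. Output a JSON object with the keys: summary, translation, names.\n\n"
    f"Text: {text}"
)
```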

Iteration

  • Note to self: How do I refer to a previous prompt using the OpenAI Python package?

Summarizing

  • If summaries include topics that are not in focus, try asking the model to "extract" the relevant information instead of "summarize"
  • Use loops to generate summaries from multiple inputs
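
The loop idea can be sketched as below; the sample reviews and the word limit are made up, and `get_completion` stands for a hypothetical helper wrapping the API call:

```python
# Build one summarization prompt per input; each would then be sent to
# the model in turn.
reviews = [
    "Great battery life.",
    "Screen cracked after a week.",
    "Decent value for the price.",
]

prompts = [
    f"Summarize the review below in at most 10 words.\n\nReview: {r}"
    for r in reviews
]

# for p in prompts:
#     summary = get_completion(p)  # hypothetical helper wrapping the API call
```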

Inferring

  • Just ask for the sentiment directly (and optionally limit the output to "positive" or "negative")
  • Ask for emotions directly (and ask for structured output as well)
  • You can also have the model perform several tasks sequentially by specifying them in a single prompt
  • Ask the model directly to identify the topics discussed in a text
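
The bullets above combine into a single inference prompt like this sketch; the review text and JSON keys are invented for illustration:

```python
# Hypothetical prompt asking for sentiment and emotions in one call,
# with a constrained, structured answer.
review = "I love the lamp, but shipping took too long."

prompt = (
    "For the review below, perform the following tasks:\n"
    "1. Give the sentiment (answer with a single word, positive or negative).\n"
    "2. List up to three emotions expressed by the writer.\n"
    'Format your answer as JSON with the keys "sentiment" and "emotions".\n\n'
    f"Review: {review}"
)
```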

Transforming

  • Translate text from one language into another (or more)
  • You can also infer the language of a prompt and then translate it into your desired language
  • You can also translate into formal and informal variations of a language
  • Check text for spelling and grammatical errors by asking the model to proofread and rewrite it
  • Convert from one data structure to another, e.g. from a Python dict to HTML
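
The dict-to-HTML conversion is done by asking the model; a sketch of such a prompt, with made-up data:

```python
import json

# Hypothetical data; the model is asked to render it as an HTML table.
data = {"employees": [{"name": "Ada", "email": "ada@example.com"}]}

prompt = (
    "Convert the following Python dictionary from JSON to an HTML table "
    "with column headers:\n\n"
    f"{json.dumps(data)}"
)
```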

Expanding

  • Auto-generate responses to text (emails, tickets, etc.) by instructing the model to assume a persona (service assistant) and outline the steps it should take.
  • Use the temperature parameter to control the randomness of the model's output
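
A minimal completion helper along these lines, using the pre-1.0 `openai` package as in the course; the default model and temperature values are just illustrative, and a configured API key is required to actually call it:

```python
def build_messages(prompt):
    """Wrap a user prompt in the chat-message format the API expects."""
    return [{"role": "user", "content": prompt}]

def get_completion(prompt, model="gpt-3.5-turbo", temperature=0.7):
    # temperature=0 gives near-deterministic output; higher values
    # give more varied, creative output.
    import openai  # pre-1.0 openai package, as used in the course
    response = openai.ChatCompletion.create(
        model=model,
        messages=build_messages(prompt),
        temperature=temperature,
    )
    return response.choices[0].message["content"]
```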

Chat completion

  • You can use different roles, passed via the messages parameter of ChatCompletion.create, to control the conversation
    • system: the persona the model assumes
    • user: the human user
    • assistant: the model's replies
  • In order to build a chatbot:
    • specify detailed content for the system role
    • capture input via the user role and append it to the context
    • get the response via the assistant role and append it to the context
    • output the response and start the next iteration
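
The chatbot steps above can be sketched as follows; `fake_completion` stands in for the real API call so the context-handling logic is visible on its own, and the system persona is made up:

```python
def fake_completion(messages):
    """Stub for the chat API; a real implementation would call the model."""
    return "OK"

# The context starts with the system message defining the persona.
context = [{"role": "system", "content": "You are a friendly service assistant."}]

def chat_turn(user_input, context, complete=fake_completion):
    # Append the user's input, get a reply, append the reply; the growing
    # context is what lets the model "remember" earlier turns.
    context.append({"role": "user", "content": user_input})
    reply = complete(context)
    context.append({"role": "assistant", "content": reply})
    return reply

chat_turn("Hi, I'd like to order a pizza.", context)
# context now holds system + user + assistant messages, ready for the next turn
```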