The Prompt Leakage Probing project provides a framework for testing Large Language Model (LLM) agents for their susceptibility to system prompt leaks, it was implemented using FastAgency and AutoGen. It currently implements two attack strategies:
- Simple Attack: Uses
PromptGeneratorAgent
andPromptLeakageClassifierAgent
to attempt prompt extraction. - Base64 Attack: Enables
PromptGeneratorAgent
to encode sensitive parts of the prompt in Base64 to bypass sensitive prompt detection.
Ensure you have the following installed:
- Python >=3.10
Additionally, ensure that your OPENAI_API_KEY
is exported to your environment.
Clone the repository and install the dependencies:
pip install ."[dev]"
Start the application using the provided script:
./scripts/run_fastapi_locally.sh
This will start the FastAgency FastAPI provider and Mesop provider instances. You can then access the application through your browser.
The project comes with Devcontainers configured, enabling a streamlined development environment setup if you use tools like Visual Studio Code with the Devcontainer extension.
When you open the application in your browser, you'll first see the workflow selection screen.
After selecting "Attempt to leak the prompt from selected LLM model", you will start a workflow for probing the LLM for prompt leakage. During this process, you will:
- Select the prompt leakage scenario you want to test.
- Choose the model you want to test.
- Specify the number of attempts to leak the prompt in the chat.
The PromptGeneratorAgent
will then generate adversarial prompts aimed at making the tested agent leak its prompt. After each response from the tested agent, the PromptLeakageClassifierAgent
will analyze the response and report the level of prompt leakage.
In this step, the PromptGeneratorAgent
generates adversarial prompts designed to elicit prompt leakage from the tested agent. The tested model is then probed with the generated prompt using a function call.
The tested agent's response is the returned value from the function call initiated by the PromptGeneratorAgent
. This response represents how the agent reacts to the probing prompt and serves as a input for subsequent analysis.
The response is then passed to the PromptLeakageClassifierAgent
, which evaluates it for signs of prompt leakage. This evaluation informs whether sensitive information from the original prompt has been exposed. Additionally, the response may guide further prompt refinement, enabling an iterative approach to testing the agent's resilience against prompt leakage attempts.
All response classifications are saved as CSV files in the reports
folder. These files contain the prompt, response, reasoning, and leakage level. They are used to display the reports flow, which we will now demonstrate.
In the workflow selection screen, select "Report on the prompt leak attempt". This workflow provides a detailed report for each prompt leakage scenario and model combination that has been tested.
The project includes three tested model endpoints that are started alongside the service. These endpoints are used to demonstrate the prompt leakage workflow and can be accessed through predefined routes. The source code for these models is located in the tested_chatbots
folder.
Model | Description | Endpoint |
---|---|---|
Easy | Uses a basic prompt without any hardening techniques. No canary words are included, and no LLM guardrail is applied. | /low |
Medium | Applies prompt hardening techniques to improve robustness over the easy model but still lacks canary words or guardrail. | /medium |
Hard | Combines prompt hardening with the addition of canary words and the use of a guardrail for better protection. | /high |
The endpoints for these models are defined in the tested_chatbots/chatbots_router
file. They are part of the FastAPI provider and are available under the following paths:
/low
: Easy model endpoint./medium
: Medium model endpoint./high
: Hard model endpoint.
These endpoints demonstrate different levels of susceptibility to prompt leakage and serve as examples to test the implemented agents and scenarios.