Commit
replace dry-run config/functionality with leash, remove double-checker
granawkins committed Feb 16, 2024
1 parent 51435a7 commit a7de885
Showing 6 changed files with 35 additions and 63 deletions.
README.md (3 changes: 2 additions & 1 deletion)
@@ -49,7 +49,8 @@ Please proceed with caution. This obviously has the potential to cause harm if so instructed.
```
## Optional Arguments
-* `--dry-run`: Print and manually approve each script before executing.
+* `--leash`: (default False) Print and manually approve each script before executing.
+* `--retries`: (default 2) If rawdog's script throws an error, review the error and try again.
## Model selection
Rawdog uses `litellm` for completions with 'gpt-4-turbo-preview' as the default. You can adjust the model or
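For reference, a leashed run with the renamed flags might be invoked like this (a hypothetical command line; `--retries` takes a value since its default is non-boolean, and the prompt is borrowed from the Quickstart examples):

```bash
# Review and approve each generated script before it runs,
# and allow up to 3 retries if a script throws an error
rawdog --leash --retries 3 "Plot the size of all the files and directories in cwd"
```

Since `leash` defaults to False, omitting the flag keeps the auto-executing behavior.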
examples/update_rawdog_config.py (2 changes: 1 addition & 1 deletion)
@@ -21,7 +21,7 @@
},
{
"role": "user",
"content": "LAST SCRIPT OUTPUT:\n[![Discord Follow](https://dcbadge.vercel.app/api/server/XbPdxAMJte?style=flat)](https://discord.gg/zbvd9qx9Pb)\n\n# Rawdog\n\nAn CLI assistant that responds by generating and auto-executing a Python script. \n\nhttps://github.com/AbanteAI/rawdog/assets/50287275/1417a927-58c1-424f-90a8-e8e63875dcda\n\nYou'll be surprised how useful this can be:\n- \"How many folders in my home directory are git repos?\" ... \"Plot them by disk size.\"\n- \"Give me the pd.describe() for all the csv's in this directory\"\n- \"What ports are currently active?\" ... \"What are the Google ones?\" ... \"Cancel those please.\"\n\nRawdog (Recursive Augmentation With Deterministic Output Generations) is a novel alternative to RAG\n(Retrieval Augmented Generation). Rawdog can self-select context by running scripts to print things,\nadding the output to the conversation, and then calling itself again. \n\nThis works for tasks like:\n- \"Setup the repo per the instructions in the README\"\n- \"Look at all these csv's and tell me if they can be merged or not, and why.\"\n- \"Try that again.\"\n\nPlease proceed with caution. This obviously has the potential to cause harm if so instructed.\n\n### Quickstart\n1. Install rawdog with pip:\n ```\n pip install rawdog-ai\n ```\n\n2. Export your api key. See [Model selection](#model-selection) for how to use other providers\n\n ```\n export OPENAI_API_KEY=your-api-key\n ```\n\n3. Choose a mode of interaction.\n\n Direct: Execute a single prompt and close\n ```\n rawdog Plot the size of all the files and directories in cwd\n ```\n \n Conversation: Initiate back-and-forth until you close. Rawdog can see its scripts and output.\n ```\n rawdog\n >>> What can I do for you? (Ctrl-C to exit)\n >>> > |\n ```\n\n## Optional Arguments\n* `--dry-run`: Print and manually approve each script before executing.\n\n## Model selection\nRawdog uses `litellm` for completions with 'gpt-4' as the default. You can adjust the model or\npoint it to other providers by modifying `~/.rawdog/config.yaml`. Some examples:\n\nTo use gpt-3.5 turbo a minimal config is:\n```yaml\nllm_model: gpt-3.5-turbo\n```\n\nTo run mixtral locally with ollama a minimal config is (assuming you have [ollama](https://ollama.ai/)\ninstalled and a sufficient gpu):\n```yaml\nllm_custom_provider: ollama\nllm_model: mixtral\n```\n\nTo run claude-2.1 set your API key:\n```bash\nexport ANTHROPIC_API_KEY=your-api-key\n```\nand then set your config:\n```yaml\nllm_model: claude-2.1\n```\n\nIf you have a model running at a local endpoint (or want to change the baseurl for some other reason)\nyou can set the `llm_base_url`. For instance if you have an openai compatible endpoint running at\nhttp://localhost:8000 you can set your config to:\n```\nllm_base_url: http://localhost:8000\nllm_model: openai/model # So litellm knows it's an openai compatible endpoint\n```\n\nLitellm supports a huge number of providers including Azure, VertexAi and Huggingface. See\n[their docs](https://docs.litellm.ai/docs/) for details on what environment variables, model names\nand llm_custom_providers you need to use for other providers.\n\nCONTINUE\n"
"content": "LAST SCRIPT OUTPUT:\n[![Discord Follow](https://dcbadge.vercel.app/api/server/XbPdxAMJte?style=flat)](https://discord.gg/zbvd9qx9Pb)\n\n# Rawdog\n\nAn CLI assistant that responds by generating and auto-executing a Python script. \n\nhttps://github.com/AbanteAI/rawdog/assets/50287275/1417a927-58c1-424f-90a8-e8e63875dcda\n\nYou'll be surprised how useful this can be:\n- \"How many folders in my home directory are git repos?\" ... \"Plot them by disk size.\"\n- \"Give me the pd.describe() for all the csv's in this directory\"\n- \"What ports are currently active?\" ... \"What are the Google ones?\" ... \"Cancel those please.\"\n\nRawdog (Recursive Augmentation With Deterministic Output Generations) is a novel alternative to RAG\n(Retrieval Augmented Generation). Rawdog can self-select context by running scripts to print things,\nadding the output to the conversation, and then calling itself again. \n\nThis works for tasks like:\n- \"Setup the repo per the instructions in the README\"\n- \"Look at all these csv's and tell me if they can be merged or not, and why.\"\n- \"Try that again.\"\n\nPlease proceed with caution. This obviously has the potential to cause harm if so instructed.\n\n### Quickstart\n1. Install rawdog with pip:\n ```\n pip install rawdog-ai\n ```\n\n2. Export your api key. See [Model selection](#model-selection) for how to use other providers\n\n ```\n export OPENAI_API_KEY=your-api-key\n ```\n\n3. Choose a mode of interaction.\n\n Direct: Execute a single prompt and close\n ```\n rawdog Plot the size of all the files and directories in cwd\n ```\n \n Conversation: Initiate back-and-forth until you close. Rawdog can see its scripts and output.\n ```\n rawdog\n >>> What can I do for you? (Ctrl-C to exit)\n >>> > |\n ```\n\n## Optional Arguments\n* `--leash`: Print and manually approve each script before executing.\n\n## Model selection\nRawdog uses `litellm` for completions with 'gpt-4' as the default. You can adjust the model or\npoint it to other providers by modifying `~/.rawdog/config.yaml`. Some examples:\n\nTo use gpt-3.5 turbo a minimal config is:\n```yaml\nllm_model: gpt-3.5-turbo\n```\n\nTo run mixtral locally with ollama a minimal config is (assuming you have [ollama](https://ollama.ai/)\ninstalled and a sufficient gpu):\n```yaml\nllm_custom_provider: ollama\nllm_model: mixtral\n```\n\nTo run claude-2.1 set your API key:\n```bash\nexport ANTHROPIC_API_KEY=your-api-key\n```\nand then set your config:\n```yaml\nllm_model: claude-2.1\n```\n\nIf you have a model running at a local endpoint (or want to change the baseurl for some other reason)\nyou can set the `llm_base_url`. For instance if you have an openai compatible endpoint running at\nhttp://localhost:8000 you can set your config to:\n```\nllm_base_url: http://localhost:8000\nllm_model: openai/model # So litellm knows it's an openai compatible endpoint\n```\n\nLitellm supports a huge number of providers including Azure, VertexAi and Huggingface. See\n[their docs](https://docs.litellm.ai/docs/) for details on what environment variables, model names\nand llm_custom_providers you need to use for other providers.\n\nCONTINUE\n"
},
{
"role": "assistant",
src/rawdog/__main__.py (34 changes: 11 additions & 23 deletions)
@@ -9,20 +9,20 @@


def rawdog(prompt: str, config, llm_client):
-    verbose = config.get("dry_run")
+    leash = config.get("leash")
    retries = int(config.get("retries"))
    _continue = True
    _first = True
    while _continue is True:
        error, script, output = "", "", ""
        try:
            if _first:
-                message, script = llm_client.get_script(prompt, stream=verbose)
+                message, script = llm_client.get_script(prompt, stream=leash)
                _first = False
            else:
-                message, script = llm_client.get_script(stream=verbose)
+                message, script = llm_client.get_script(stream=leash)
            if script:
-                if verbose:
+                if leash:
                    print(f"\n{80 * '-'}")
                    if (
                        input("Execute script in markdown block? (Y/n): ")
@@ -32,20 +32,6 @@ def rawdog(prompt: str, config, llm_client):
                    ):
                        llm_client.add_message("user", "User chose not to run script")
                        break
-                elif config.get("leash"):
-                    double_check = llm_client.double_check_script(prompt, script)
-                    if double_check:
-                        print(script)
-                        print(
-                            "The leash model thought the script was unsafe for the"
-                            " following reason:"
-                        )
-                        print(double_check)
-                        if input("Execute anyway? (y/N): ").strip().lower() != "y":
-                            llm_client.add_message(
-                                "user", "User chose not to run script"
-                            )
-                            break
                output, error = execute_script(script, llm_client)
            elif message:
                print(message)
@@ -59,24 +45,26 @@ def rawdog(prompt: str, config, llm_client):
            retries -= 1
            llm_client.add_message("user", f"Error: {error}")
            print(f"Error: {error}")
-        if script and not verbose:
+        if script and not leash:
            print(f"{80 * '-'}\n{script}\n{80 * '-'}")
        if output:
            llm_client.add_message("user", f"LAST SCRIPT OUTPUT:\n{output}")
-        if verbose or not _continue:
+        if leash or not _continue:
            print(output)
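Condensed, the loop above now uses `leash` as the single gate for both streaming and manual approval, where `dry_run` used to play that role. A simplified sketch of one iteration (not the actual source; retries, error handling, and the exact approval condition are omitted, and `llm_client`/`execute_script` stand in for the names used in this file):

```python
# Sketch: one pass of the rawdog loop after this commit (simplified).
def run_once(llm_client, leash: bool, prompt=None):
    # With leash on, the script streams to the terminal as it is generated.
    message, script = llm_client.get_script(prompt, stream=leash)
    if script:
        if leash:
            # Leash on: require approval before running (condition simplified).
            if input("Execute script in markdown block? (Y/n): ").strip().lower() == "n":
                llm_client.add_message("user", "User chose not to run script")
                return
        output, error = execute_script(script, llm_client)
        if not leash:
            # Leash off: nothing was streamed, so print the script afterwards.
            print(f"{80 * '-'}\n{script}\n{80 * '-'}")
    elif message:
        print(message)
```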


def banner(config):
-    if config.get("dry_run") or config.get("leash"):
-        print(f""" / \__
+    if config.get("leash"):
+        print(f"""\
+ / \__
_ ( @\___ ┳┓┏┓┏ ┓┳┓┏┓┏┓
\ / O ┣┫┣┫┃┃┃┃┃┃┃┃┓
\ / (_____/ ┛┗┛┗┗┻┛┻┛┗┛┗┛
\/\/\/\/ U Rawdog v{__version__}
OO""")
    else:
-        print(f""" / \__
+        print(f"""\
+ / \__
( @\___ ┳┓┏┓┏ ┓┳┓┏┓┏┓
/ O ┣┫┣┫┃┃┃┃┃┃┃┃┓
/ (_____/ ┛┗┛┗┗┻┛┻┛┗┛┗┛
src/rawdog/config.py (30 changes: 20 additions & 10 deletions)
@@ -1,5 +1,6 @@
import yaml

+from rawdog import __version__
from rawdog.utils import rawdog_dir

config_path = rawdog_dir / "config.yaml"
@@ -11,17 +12,15 @@
"llm_model": "gpt-4-turbo-preview",
"llm_custom_provider": None,
"llm_temperature": 1.0,
"dry_run": False,
"retries": 2,
"leash_model": "gpt-3.5-turbo",
"leash": False,
}
# NOTE: dry-run was replaced with leash on v0.1.4. There is code below to handle
# the transition, which should be removed eventually.

setting_descriptions = {
"dry_run": "Print the script before executing and prompt for confirmation.",
"retries": "If the script fails, retry this many times before giving up.",
"leash_model": "The model to use for the leash feature.",
"leash": "If set, the script will be double checked before running.",
"leash": "Print the script before executing and prompt for confirmation.",
}
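With these defaults, a freshly written `~/.rawdog/config.yaml` would contain roughly the following (a sketch limited to the keys visible in this hunk; settings defined above the hunk would also appear, and `yaml.safe_dump` writes keys alphabetically):

```yaml
# Keys from this hunk only, in safe_dump's alphabetical order
leash: false
llm_custom_provider: null
llm_model: gpt-4-turbo-preview
llm_temperature: 1.0
retries: 2
```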


@@ -34,12 +33,19 @@ def read_config_file():
    if config_path.exists():
        with open(config_path, "r") as f:
            _config = yaml.safe_load(f)
-        missing_fields = [
-            k for k in default_config if k not in _config or _config[k] is None
-        ]
+        missing_fields = {
+            k: v for k, v in default_config.items()
+            if k not in _config or (v is not None and _config[k] is None)
+        }
        if missing_fields:
-            for k in missing_fields:
-                _config[k] = default_config[k]
+            print(f"Updating config file {config_path} for version {__version__}:")
+            if "leash" in missing_fields and _config.get("dry_run"):
+                missing_fields["leash"] = True
+                del _config["dry_run"]
+                print(" - dry_run: deprecated on v0.1.4, setting leash=True instead")
+            for k, v in missing_fields.items():
+                print(f" + {k}: {v}")
+                _config[k] = v
            with open(config_path, "w") as f:
                yaml.safe_dump(_config, f)
    else:
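To make the transition shim concrete: a config written by a version that predates `leash` (one that has `dry_run` but no `leash` key) is rewritten on first read. A sketch of the effect, assuming such a file:

```yaml
# Before: config from an older rawdog (dry_run set, no leash key)
dry_run: true
llm_model: gpt-4-turbo-preview
retries: 2

# After read_config_file() under v0.1.4: leash inherits the dry_run intent,
# dry_run is deleted, and any other missing defaults are filled in
leash: true
llm_model: gpt-4-turbo-preview
retries: 2
```

Note the shim only fires when `leash` is among the missing fields, so a file that already contains `leash: false` keeps that value and simply retains its stale `dry_run` entry.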
@@ -60,6 +66,7 @@ def add_config_flags_to_argparser(parser):
            parser.add_argument(f"--{normalized}", action="store_true", help=help_text)
        else:
            parser.add_argument(f"--{normalized}", default=None, help=help_text)
+    parser.add_argument("--dry-run", action="store_true", help="Deprecated, use --leash instead")


def get_config(args=None):
@@ -71,4 +78,7 @@ def get_config(args=None):
        if k in default_config and v is not None and v is not False
    }
    config = {**config, **config_args}
+    if config.get("dry_run"):
+        del config["dry_run"]
+        print("Warning: --dry-run is deprecated, use --leash instead")
    return config
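At the command line, the deprecated flag now produces only a warning and is dropped from the config; note that it does not switch `leash` on. A hypothetical session (banner and script output omitted; the warning string is the one printed above):

```
$ rawdog --dry-run What ports are currently active?
Warning: --dry-run is deprecated, use --leash instead
```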
src/rawdog/llm_client.py (24 changes: 1 addition & 23 deletions)
@@ -7,7 +7,7 @@

from rawdog.logging import log_conversation
from rawdog.parsing import parse_script
-from rawdog.prompts import leash_prompt, script_examples, script_prompt
+from rawdog.prompts import script_examples, script_prompt
from rawdog.utils import EnvInfo, rawdog_log_path


@@ -44,28 +44,6 @@ def __init__(self, config: dict):
    def add_message(self, role: str, content: str):
        self.conversation.append({"role": role, "content": content})

-    def double_check_script(self, original_prompt: str, script: str) -> Optional[str]:
-        conversation = [
-            {"role": "system", "content": leash_prompt},
-            {"role": "user", "content": original_prompt},
-            {"role": "assistant", "content": script},
-        ]
-
-        response = completion(
-            base_url=self.config.get("llm_base_url"),
-            model=self.config.get("leash_model"),
-            messages=conversation,
-            temperature=0.01,
-            custom_llm_provider=self.config.get("llm_custom_provider"),
-        )
-
-        content = response.choices[0].message.content
-
-        if "SAFE" in content:
-            return None
-        else:
-            return content

    def get_python_package(self, import_name: str):
        base_url = self.config.get("llm_base_url")
        model = self.config.get("llm_model")
src/rawdog/prompts.py (5 changes: 0 additions & 5 deletions)
@@ -108,8 +108,3 @@ def get_name(f):
```
-------------------------------------------------------------------------------
"""

-leash_prompt = """\
-Your job is to double check a script for safety. You will be given a prompt and the python script generated from the prompt. \
-If the script won't have any side effects except the minimum necessary to accomplish the goal of the prompt then you should respond with "SAFE" \
-and nothing else. If you think the script is unsafe in any way return a short explanation of why."""
