Replace dry-run with leash (#68)
* Add leash mode

If leash mode is enabled, the script is double-checked with the
leash_model before execution, and a message is printed if it's deemed
unsafe.

A cute leash is added to the ASCII-art dog.

* black

* isort

* replace dry-run config/functionality with leash, remove double-checker

* format fix

---------

Co-authored-by: granawkins <[email protected]>
jakethekoenig and granawkins authored Feb 16, 2024
1 parent 662dbb2 commit 911f05e
Showing 4 changed files with 52 additions and 22 deletions.
3 changes: 2 additions & 1 deletion README.md
@@ -49,7 +49,8 @@ Please proceed with caution. This obviously has the potential to cause harm if so instructed.
```
## Optional Arguments
* `--dry-run`: Print and manually approve each script before executing.
* `--leash`: (default False) Print and manually approve each script before executing.
* `--retries`: (default 2) If rawdog's script throws an error, review the error and try again.
## Model selection
Rawdog uses `litellm` for completions with 'gpt-4-turbo-preview' as the default. You can adjust the model or
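To make the renamed flag concrete, here is a minimal, hypothetical sketch of the approval gate `--leash` enables; `confirm_and_run` and the use of `exec` are illustrative stand-ins, not rawdog's actual executor or API:

```python
# Hypothetical sketch of the approval gate behind --leash; exec() is an
# illustrative stand-in for rawdog's script execution, not its actual API.
def confirm_and_run(script: str, leash: bool) -> None:
    if leash:
        print(f"\n{80 * '-'}\n{script}\n{80 * '-'}")
        # An empty reply or "y" counts as approval (assumed, matching the prompt text).
        if input("Execute script in markdown block? (Y/n): ").strip().lower() not in ("", "y"):
            print("Skipping script.")
            return
    exec(script)

confirm_and_run("print('hello')", leash=True)
```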
2 changes: 1 addition & 1 deletion examples/update_rawdog_config.py
@@ -21,7 +21,7 @@
},
{
"role": "user",
"content": "LAST SCRIPT OUTPUT:\n[![Discord Follow](https://dcbadge.vercel.app/api/server/XbPdxAMJte?style=flat)](https://discord.gg/zbvd9qx9Pb)\n\n# Rawdog\n\nAn CLI assistant that responds by generating and auto-executing a Python script. \n\nhttps://github.com/AbanteAI/rawdog/assets/50287275/1417a927-58c1-424f-90a8-e8e63875dcda\n\nYou'll be surprised how useful this can be:\n- \"How many folders in my home directory are git repos?\" ... \"Plot them by disk size.\"\n- \"Give me the pd.describe() for all the csv's in this directory\"\n- \"What ports are currently active?\" ... \"What are the Google ones?\" ... \"Cancel those please.\"\n\nRawdog (Recursive Augmentation With Deterministic Output Generations) is a novel alternative to RAG\n(Retrieval Augmented Generation). Rawdog can self-select context by running scripts to print things,\nadding the output to the conversation, and then calling itself again. \n\nThis works for tasks like:\n- \"Setup the repo per the instructions in the README\"\n- \"Look at all these csv's and tell me if they can be merged or not, and why.\"\n- \"Try that again.\"\n\nPlease proceed with caution. This obviously has the potential to cause harm if so instructed.\n\n### Quickstart\n1. Install rawdog with pip:\n ```\n pip install rawdog-ai\n ```\n\n2. Export your api key. See [Model selection](#model-selection) for how to use other providers\n\n ```\n export OPENAI_API_KEY=your-api-key\n ```\n\n3. Choose a mode of interaction.\n\n Direct: Execute a single prompt and close\n ```\n rawdog Plot the size of all the files and directories in cwd\n ```\n \n Conversation: Initiate back-and-forth until you close. Rawdog can see its scripts and output.\n ```\n rawdog\n >>> What can I do for you? (Ctrl-C to exit)\n >>> > |\n ```\n\n## Optional Arguments\n* `--dry-run`: Print and manually approve each script before executing.\n\n## Model selection\nRawdog uses `litellm` for completions with 'gpt-4' as the default. You can adjust the model or\npoint it to other providers by modifying `~/.rawdog/config.yaml`. Some examples:\n\nTo use gpt-3.5 turbo a minimal config is:\n```yaml\nllm_model: gpt-3.5-turbo\n```\n\nTo run mixtral locally with ollama a minimal config is (assuming you have [ollama](https://ollama.ai/)\ninstalled and a sufficient gpu):\n```yaml\nllm_custom_provider: ollama\nllm_model: mixtral\n```\n\nTo run claude-2.1 set your API key:\n```bash\nexport ANTHROPIC_API_KEY=your-api-key\n```\nand then set your config:\n```yaml\nllm_model: claude-2.1\n```\n\nIf you have a model running at a local endpoint (or want to change the baseurl for some other reason)\nyou can set the `llm_base_url`. For instance if you have an openai compatible endpoint running at\nhttp://localhost:8000 you can set your config to:\n```\nllm_base_url: http://localhost:8000\nllm_model: openai/model # So litellm knows it's an openai compatible endpoint\n```\n\nLitellm supports a huge number of providers including Azure, VertexAi and Huggingface. See\n[their docs](https://docs.litellm.ai/docs/) for details on what environment variables, model names\nand llm_custom_providers you need to use for other providers.\n\nCONTINUE\n"
"content": "LAST SCRIPT OUTPUT:\n[![Discord Follow](https://dcbadge.vercel.app/api/server/XbPdxAMJte?style=flat)](https://discord.gg/zbvd9qx9Pb)\n\n# Rawdog\n\nAn CLI assistant that responds by generating and auto-executing a Python script. \n\nhttps://github.com/AbanteAI/rawdog/assets/50287275/1417a927-58c1-424f-90a8-e8e63875dcda\n\nYou'll be surprised how useful this can be:\n- \"How many folders in my home directory are git repos?\" ... \"Plot them by disk size.\"\n- \"Give me the pd.describe() for all the csv's in this directory\"\n- \"What ports are currently active?\" ... \"What are the Google ones?\" ... \"Cancel those please.\"\n\nRawdog (Recursive Augmentation With Deterministic Output Generations) is a novel alternative to RAG\n(Retrieval Augmented Generation). Rawdog can self-select context by running scripts to print things,\nadding the output to the conversation, and then calling itself again. \n\nThis works for tasks like:\n- \"Setup the repo per the instructions in the README\"\n- \"Look at all these csv's and tell me if they can be merged or not, and why.\"\n- \"Try that again.\"\n\nPlease proceed with caution. This obviously has the potential to cause harm if so instructed.\n\n### Quickstart\n1. Install rawdog with pip:\n ```\n pip install rawdog-ai\n ```\n\n2. Export your api key. See [Model selection](#model-selection) for how to use other providers\n\n ```\n export OPENAI_API_KEY=your-api-key\n ```\n\n3. Choose a mode of interaction.\n\n Direct: Execute a single prompt and close\n ```\n rawdog Plot the size of all the files and directories in cwd\n ```\n \n Conversation: Initiate back-and-forth until you close. Rawdog can see its scripts and output.\n ```\n rawdog\n >>> What can I do for you? (Ctrl-C to exit)\n >>> > |\n ```\n\n## Optional Arguments\n* `--leash`: Print and manually approve each script before executing.\n\n## Model selection\nRawdog uses `litellm` for completions with 'gpt-4' as the default. You can adjust the model or\npoint it to other providers by modifying `~/.rawdog/config.yaml`. Some examples:\n\nTo use gpt-3.5 turbo a minimal config is:\n```yaml\nllm_model: gpt-3.5-turbo\n```\n\nTo run mixtral locally with ollama a minimal config is (assuming you have [ollama](https://ollama.ai/)\ninstalled and a sufficient gpu):\n```yaml\nllm_custom_provider: ollama\nllm_model: mixtral\n```\n\nTo run claude-2.1 set your API key:\n```bash\nexport ANTHROPIC_API_KEY=your-api-key\n```\nand then set your config:\n```yaml\nllm_model: claude-2.1\n```\n\nIf you have a model running at a local endpoint (or want to change the baseurl for some other reason)\nyou can set the `llm_base_url`. For instance if you have an openai compatible endpoint running at\nhttp://localhost:8000 you can set your config to:\n```\nllm_base_url: http://localhost:8000\nllm_model: openai/model # So litellm knows it's an openai compatible endpoint\n```\n\nLitellm supports a huge number of providers including Azure, VertexAi and Huggingface. See\n[their docs](https://docs.litellm.ai/docs/) for details on what environment variables, model names\nand llm_custom_providers you need to use for other providers.\n\nCONTINUE\n"
},
{
"role": "assistant",
36 changes: 23 additions & 13 deletions src/rawdog/__main__.py
@@ -9,20 +9,20 @@


def rawdog(prompt: str, config, llm_client):
verbose = config.get("dry_run")
leash = config.get("leash")
retries = int(config.get("retries"))
_continue = True
_first = True
while _continue is True:
error, script, output = "", "", ""
try:
if _first:
message, script = llm_client.get_script(prompt, stream=verbose)
message, script = llm_client.get_script(prompt, stream=leash)
_first = False
else:
message, script = llm_client.get_script(stream=verbose)
message, script = llm_client.get_script(stream=leash)
if script:
if verbose:
if leash:
print(f"\n{80 * '-'}")
if (
input("Execute script in markdown block? (Y/n): ")
@@ -45,20 +45,30 @@ def rawdog(prompt: str, config, llm_client):
retries -= 1
llm_client.add_message("user", f"Error: {error}")
print(f"Error: {error}")
if script and not verbose:
if script and not leash:
print(f"{80 * '-'}\n{script}\n{80 * '-'}")
if output:
llm_client.add_message("user", f"LAST SCRIPT OUTPUT:\n{output}")
if verbose or not _continue:
if leash or not _continue:
print(output)


def banner():
print(f""" / \__
( @\___ ┳┓┏┓┏ ┓┳┓┏┓┏┓
/ O ┣┫┣┫┃┃┃┃┃┃┃┃┓
/ (_____/ ┛┗┛┗┗┻┛┻┛┗┛┗┛
/_____/ U Rawdog v{__version__}""")
def banner(config):
if config.get("leash"):
print(f"""\
/ \__
_ ( @\___ ┳┓┏┓┏ ┓┳┓┏┓┏┓
\ / O ┣┫┣┫┃┃┃┃┃┃┃┃┓
\ / (_____/ ┛┗┛┗┗┻┛┻┛┗┛┗┛
\/\/\/\/ U Rawdog v{__version__}
OO""")
else:
print(f"""\
/ \__
( @\___ ┳┓┏┓┏ ┓┳┓┏┓┏┓
/ O ┣┫┣┫┃┃┃┃┃┃┃┃┓
/ (_____/ ┛┗┛┗┗┻┛┻┛┗┛┗┛
/_____/ U Rawdog v{__version__}""")


def main():
@@ -84,7 +94,7 @@ def main():
if len(args.prompt) > 0:
rawdog(" ".join(args.prompt), config, llm_client)
else:
banner()
banner(config)
while True:
try:
print("")
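Condensed, the loop above asks the model for a script, gates execution on approval when leash is set, and on a failure feeds the error back and retries. A self-contained, hypothetical sketch (the stubs replace the LLM client and executor):

```python
# Self-contained, hypothetical condensation of the rawdog() loop above.
def get_script() -> str:  # stub standing in for llm_client.get_script()
    return "print('hello from rawdog')"

def rawdog_loop(leash: bool = False, retries: int = 2) -> None:
    while True:
        script = get_script()
        if leash and input(f"{script}\nExecute? (Y/n): ").strip().lower() not in ("", "y"):
            print("Skipping script.")
            return
        try:
            exec(script)  # stand-in for rawdog's real executor
            return
        except Exception as error:
            if retries == 0:
                print(f"Error: {error}")
                return
            retries -= 1
            # rawdog also feeds the error back into the LLM conversation here
            print(f"Error: {error} -- retrying ({retries} left)")

rawdog_loop(leash=True)
```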
33 changes: 26 additions & 7 deletions src/rawdog/config.py
@@ -1,5 +1,6 @@
import yaml

from rawdog import __version__
from rawdog.utils import rawdog_dir

config_path = rawdog_dir / "config.yaml"
@@ -11,13 +12,15 @@
"llm_model": "gpt-4-turbo-preview",
"llm_custom_provider": None,
"llm_temperature": 1.0,
"dry_run": False,
"retries": 2,
"leash": False,
}
# NOTE: dry-run was replaced with leash on v0.1.4. There is code below to handle
# the transition, which should be removed eventually.

setting_descriptions = {
"dry_run": "Print the script before executing and prompt for confirmation.",
"retries": "If the script fails, retry this many times before giving up.",
"leash": "Print the script before executing and prompt for confirmation.",
}


@@ -30,12 +33,22 @@ def read_config_file():
if config_path.exists():
with open(config_path, "r") as f:
_config = yaml.safe_load(f)
missing_fields = [
k for k in default_config if k not in _config or _config[k] is None
]
missing_fields = {
k: v
for k, v in default_config.items()
if k not in _config or (v is not None and _config[k] is None)
}
if missing_fields:
for k in missing_fields:
_config[k] = default_config[k]
print(f"Updating config file {config_path} for version {__version__}:")
if "leash" in missing_fields and _config.get("dry_run"):
missing_fields["leash"] = True
del _config["dry_run"]
print(
" - dry_run: deprecated on v0.1.4, setting leash=True instead"
)
for k, v in missing_fields.items():
print(f" + {k}: {v}")
_config[k] = v
with open(config_path, "w") as f:
yaml.safe_dump(_config, f)
else:
@@ -56,6 +69,9 @@ def add_config_flags_to_argparser(parser):
parser.add_argument(f"--{normalized}", action="store_true", help=help_text)
else:
parser.add_argument(f"--{normalized}", default=None, help=help_text)
parser.add_argument(
"--dry-run", action="store_true", help="Deprecated, use --leash instead)"
)


def get_config(args=None):
Expand All @@ -67,4 +83,7 @@ def get_config(args=None):
if k in default_config and v is not None and v is not False
}
config = {**config, **config_args}
if config.get("dry_run"):
del config["dry_run"]
print("Warning: --dry-run is deprecated, use --leash instead")
return config
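The migration logic above can be exercised on its own; the following is a minimal, standalone reconstruction (with a trimmed `default_config`, assumed to cover only the keys relevant here):

```python
# Minimal, standalone reconstruction of the dry_run -> leash migration above.
default_config = {"llm_model": "gpt-4-turbo-preview", "retries": 2, "leash": False}

def migrate(config: dict) -> dict:
    missing = {
        k: v
        for k, v in default_config.items()
        if k not in config or (v is not None and config[k] is None)
    }
    if "leash" in missing and config.get("dry_run"):
        missing["leash"] = True  # carry the old dry_run preference over
        del config["dry_run"]
    config.update(missing)
    return config

print(migrate({"dry_run": True}))
# -> {'llm_model': 'gpt-4-turbo-preview', 'retries': 2, 'leash': True}
```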
