Experiments with Tree of Thoughts prompting with the LLaMA foundation model!

Note: This is a fork of the Official Repo of Tree of Thoughts (ToT), cited below. This implementation uses LLaMA (Large Language Model Meta AI) for experimental purposes, in place of the GPT models accessed through the OpenAI API in the original implementation. Please refer to the official repo (linked below) for the original implementation details.

Tree of Thoughts (ToT) with LLaMA

Note: Please refer to the Official Repo of Tree of Thoughts (ToT) (linked below) for the original implementation.

The contents below are from the official princeton-nlp/tree-of-thoughts-llm repository.

Official paper: Tree of Thoughts: Deliberate Problem Solving with Large Language Models, with code, prompts, and model outputs. Also check out its tweet thread for a one-minute summary.

Citation of the paper:

@misc{yao2023tree,
      title={{Tree of Thoughts}: Deliberate Problem Solving with Large Language Models}, 
      author={Shunyu Yao and Dian Yu and Jeffrey Zhao and Izhak Shafran and Thomas L. Griffiths and Yuan Cao and Karthik Narasimhan},
      year={2023},
      eprint={2305.10601},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Setup
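
This fork replaces the OpenAI API calls of the original implementation with a locally loaded LLaMA model. The sketch below shows one plausible way to load LLaMA through Hugging Face transformers; the checkpoint name, the use of transformers, and the generation call are assumptions for illustration, not necessarily this fork's exact setup.

    # Minimal sketch of loading LLaMA for ToT experiments.
    # Assumptions: the fork loads LLaMA via Hugging Face transformers,
    # and "huggyllama/llama-7b" is a hypothetical checkpoint choice.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "huggyllama/llama-7b"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    # Generate one candidate "thought" continuation from a prompt.
    inputs = tokenizer("Use 4 4 6 8 to obtain 24:", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))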

Experiments

Run experiments via sh scripts/{game24, text, crosswords}/{standard_sampling, cot_sampling, bfs}.sh. The exception is crosswords, where ToT uses a DFS algorithm instead of BFS; it can be run via scripts/crosswords/search_crosswords-dfs.ipynb.
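
For example, to run ToT with BFS on Game of 24:

    sh scripts/game24/bfs.sh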

The very simple run.py implements the ToT + BFS algorithm, as well as the naive IO/CoT sampling. Some key arguments (see the example invocation after the list):

  • --naive_run: if True, run naive IO/CoT sampling instead of ToT + BFS.
  • --prompt_sample (choices=[standard, cot]): sampling prompt
  • --method_generate (choices=[sample, propose]): thought generator, whether to sample independent thoughts (used in Creative Writing) or propose sequential thoughts (used in Game of 24)
  • --method_evaluate (choices=[value, vote]): state evaluator, whether to value states independently (used in Game of 24) or vote on states together (used in Creative Writing)
  • --n_generate_sample: number of times to prompt for thought generation
  • --n_evaluate_sample: number of times to prompt for state evaluation
  • --n_select_sample: number of states to keep from each step (i.e. b in the paper's ToT + BFS algorithm)
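
Putting these together, a hypothetical ToT + BFS invocation for Game of 24 might look like the following. Only the flags listed above are shown, and the sample counts are illustrative; any task-selection flags the script expects are repository-specific and omitted here.

    python run.py \
        --method_generate propose \
        --method_evaluate value \
        --n_generate_sample 1 \
        --n_evaluate_sample 3 \
        --n_select_sample 5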
