This project explores the capabilities of Large Language Models (LLMs) in mathematical reasoning tasks. We investigate how LLMs can be leveraged to solve mathematical problems and understand mathematical concepts.
## Features

- Implementation of LLM-based mathematical problem-solving
- Evaluation of LLM performance on various mathematical tasks
- Comparison with traditional mathematical reasoning systems
## Models

The LLM checkpoints used in this project are Mistral-7B-Instruct-v0.2, Llama-2-7b-chat-hf, and Arithmo-Mistral-7B:

- https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
- https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
- https://huggingface.co/akjindal53244/Arithmo-Mistral-7B
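For convenience, the three checkpoints above could be kept in a small registry so that scripts resolve a short name to a Hugging Face hub ID. The sketch below is hypothetical (the registry and helper are not part of the repository); the hub IDs come from the links above.

```python
# Hypothetical registry mapping short model names to Hugging Face hub IDs.
# The hub IDs are the ones linked above; the helper itself is illustrative.
MODEL_REGISTRY = {
    "mistral-7b": "mistralai/Mistral-7B-Instruct-v0.2",
    "llama2-7b": "meta-llama/Llama-2-7b-chat-hf",
    "arithmo-7b": "akjindal53244/Arithmo-Mistral-7B",
}

def resolve_checkpoint(name: str) -> str:
    """Return the hub ID for a short model name, with a clear error otherwise."""
    try:
        return MODEL_REGISTRY[name]
    except KeyError:
        raise ValueError(f"Unknown model '{name}'; choose from {sorted(MODEL_REGISTRY)}")

print(resolve_checkpoint("mistral-7b"))  # -> mistralai/Mistral-7B-Instruct-v0.2
```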
Pretrained sentence similarity model: paraphrase-MiniLM-L12-v2; download link:
## Requirements

- Python 3.8+
- PyTorch 1.9+
- Transformers library
- (Add other relevant libraries)
## Installation

```bash
git clone https://github.com/derby-ding/llm-math-reasoning.git
cd llm-math-reasoning
pip install -r requirements.txt
```
## Usage

### RAG-enhanced reasoning

Please change the sentence similarity model path in `ragenh_math_gsm.py` before running:

```bash
python ragenh_math_gsm.py --promptex sim_cot_sc --infile data/gsm8k_test_formu1.json --model_path your/mistral7b/ --RAG_path data/gsm8k_explanqwenmax2.json.json
```
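The RAG step depends on retrieving exemplars whose questions are semantically similar to the input question. The following is a minimal, self-contained sketch of that idea using toy embedding vectors and cosine similarity in place of the actual sentence similarity model; all function names and the toy data here are assumptions, not the repository's code.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors (plain lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_top_k(query_emb, exemplar_embs, exemplars, k=3):
    """Return the k exemplars whose embeddings are most similar to the query."""
    scored = sorted(zip(exemplars, exemplar_embs),
                    key=lambda pair: cosine_sim(query_emb, pair[1]),
                    reverse=True)
    return [ex for ex, _ in scored[:k]]

# Toy 3-dimensional "embeddings" standing in for real sentence embeddings.
exemplars = ["Q1 + chain-of-thought", "Q2 + chain-of-thought", "Q3 + chain-of-thought"]
exemplar_embs = [[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.9, 0.1, 0.0]]
query_emb = [1.0, 0.05, 0.0]

print(retrieve_top_k(query_emb, exemplar_embs, exemplars, k=2))
```

In the actual pipeline the embeddings would come from the sentence similarity model, and the retrieved exemplars would be prepended to the prompt as chain-of-thought demonstrations.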
### Contrastive windows

```bash
python selfcorrect_math.py --promptex sim_cot_sc --infile data/gsm8k_main_test.json --model_path your/mistral7b/ --outfile data/gsm8k_contrast.json --shot_num 3
```
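The `sim_cot_sc` prompt mode suggests self-consistency decoding: sample several chain-of-thought completions and take a majority vote over the final answers. A hedged sketch of that voting step follows; the answer-extraction regex and function names are assumptions, not the repository's actual implementation.

```python
import re
from collections import Counter

def extract_final_number(completion):
    """Pull the last number from a completion, treating it as the final answer."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    return matches[-1] if matches else None

def majority_answer(completions):
    """Self-consistency: majority vote over answers from sampled completions."""
    answers = [extract_final_number(c) for c in completions]
    answers = [a for a in answers if a is not None]
    if not answers:
        return None
    return Counter(answers).most_common(1)[0][0]

samples = [
    "Each box holds 6 eggs, so 3 boxes give 18. The answer is 18.",
    "6 * 3 = 18, so the answer is 18.",
    "6 + 3 = 9, so the answer is 9.",
]
print(majority_answer(samples))  # majority of {18, 18, 9} -> 18
```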
## Project Structure

```
math-reasoning-llm/
├── data/
├── README.md
└── requirements.txt
```
## License

This project is licensed under the MIT License - see the LICENSE.md file for details.
## Contact

For any questions, please open an issue or contact [[email protected]].