
update readme to point to direct link to runpod template, cleanup install instructions (#532)

* update readme to point to direct link to runpod template, cleanup install instructions

* install flash-attn and auto-gptq by default now too

* update readme with flash-attn extra

* fix version in setup
winglian authored Sep 8, 2023
1 parent 5e2d8a4 commit 34c0a86
Showing 5 changed files with 11 additions and 28 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/tests.yml
@@ -24,8 +24,8 @@ jobs:

      - name: Install dependencies
        run: |
-          pip install -e .
-          pip install -r requirements-tests.txt
+          pip3 install -e .
+          pip3 install -r requirements-tests.txt
      - name: Run tests
        run: |
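To mirror this CI step locally, a minimal sketch — assuming the suite uses pytest and lives under `tests/`, since the workflow's actual test command is not shown in this hunk:

```bash
# same installs the workflow now runs, pinned to pip3
pip3 install -e .
pip3 install -r requirements-tests.txt

# run the tests; pytest and the tests/ path are assumptions, not from this diff
pytest tests/
```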
20 changes: 4 additions & 16 deletions README.md
@@ -90,8 +90,7 @@ accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml \
```bash
docker run --gpus '"all"' --rm -it winglian/axolotl:main-py3.10-cu118-2.0.1
```
-- `winglian/axolotl-runpod:main-py3.10-cu118-2.0.1`: for runpod
-- `winglian/axolotl-runpod:main-py3.9-cu118-2.0.1-gptq`: for gptq
+- `winglian/axolotl-runpod:main-latest`: for runpod or use this [direct link](https://runpod.io/gsc?template=v2ickqhz9s&ref=6i7fkpdz)
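For illustration only (not part of the commit): the new `main-latest` runpod image can presumably be launched locally the same way as the base image above, though on RunPod the linked template provisions it for you:

```bash
# hypothetical local run of the runpod-flavored image
docker run --gpus '"all"' --rm -it winglian/axolotl-runpod:main-latest
```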

Or run on the current files for development:

@@ -104,19 +103,9 @@ accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml \

2. Install pytorch stable https://pytorch.org/get-started/locally/

-3. Install python dependencies with ONE of the following:
-   - Recommended, supports QLoRA, NO gptq/int4 support
+3. Install axolotl along with python dependencies
   ```bash
   pip3 install -e .
-  pip3 install -U git+https://github.com/huggingface/peft.git
-  ```
-  - gptq/int4 support, NO QLoRA
-  ```bash
-  pip3 install -e .[gptq]
-  ```
-  - same as above but not recommended
-  ```bash
-  pip3 install -e .[gptq_triton]
+  pip3 install -e .[flash-attn]
   ```
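Taken together, the README's install path collapses to one sequence. A sketch of the updated flow, assuming PyTorch is already installed per step 2:

```bash
git clone https://github.com/OpenAccess-AI-Collective/axolotl
cd axolotl

# editable install of axolotl, then the flash-attn extra on top
pip3 install -e .
pip3 install -e .[flash-attn]
```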

- LambdaLabs
@@ -151,10 +140,9 @@ accelerate launch scripts/finetune.py examples/openllama-3b/lora.yml \
git clone https://github.com/OpenAccess-AI-Collective/axolotl
cd axolotl
-pip3 install -e . # change depend on needs
+pip3 install -e .
pip3 install protobuf==3.20.3
pip3 install -U --ignore-installed requests Pillow psutil scipy
-pip3 install git+https://github.com/huggingface/peft.git # not for gptq
```

5. Set path
Expand Down
4 changes: 2 additions & 2 deletions docker/Dockerfile
@@ -15,9 +15,9 @@ RUN git clone --depth=1 https://github.com/OpenAccess-AI-Collective/axolotl.git
# If AXOLOTL_EXTRAS is set, append it in brackets
RUN cd axolotl && \
if [ "$AXOLOTL_EXTRAS" != "" ] ; then \
-        pip install -e .[flash-attn,gptq,$AXOLOTL_EXTRAS]; \
+        pip install -e .[flash-attn,$AXOLOTL_EXTRAS]; \
else \
-        pip install -e .[flash-attn,gptq]; \
+        pip install -e .[flash-attn]; \
fi

# fix so that git fetch/pull from remote works
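A hedged sketch of how the simplified conditional behaves at build time — the image tags are made up, and the Dockerfile is assumed to declare `ARG AXOLOTL_EXTRAS`:

```bash
# default build: runs pip install -e .[flash-attn]
docker build -f docker/Dockerfile -t axolotl:local .

# with a build arg: runs pip install -e .[flash-attn,extras]
docker build -f docker/Dockerfile --build-arg AXOLOTL_EXTRAS=extras -t axolotl:local-extras .
```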
2 changes: 1 addition & 1 deletion requirements.txt
@@ -12,7 +12,7 @@ evaluate
fire
PyYAML>=6.0
datasets
-flash-attn>=2.0.8
+flash-attn>=2.2.1
sentencepiece
wandb
einops
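After reinstalling, the bumped pin can be sanity-checked with a one-liner — assuming the `flash_attn` module exposes `__version__`, as recent releases do:

```bash
python3 -c "import flash_attn; print(flash_attn.__version__)"  # expect >= 2.2.1
```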
9 changes: 2 additions & 7 deletions setup.py
@@ -7,9 +7,7 @@ def parse_requirements():
    _install_requires = []
    _dependency_links = []
    with open("./requirements.txt", encoding="utf-8") as requirements_file:
-        lines = [
-            r.strip() for r in requirements_file.readlines() if "auto-gptq" not in r
-        ]
+        lines = [r.strip() for r in requirements_file.readlines()]
        for line in lines:
            if line.startswith("--extra-index-url"):
                # Handle custom index URLs
@@ -33,11 +31,8 @@ def parse_requirements():
    install_requires=install_requires,
    dependency_links=dependency_links,
    extras_require={
-        "gptq": [
-            "auto-gptq",
-        ],
        "flash-attn": [
-            "flash-attn==2.0.8",
+            "flash-attn>=2.2.1",
        ],
        "extras": [
            "deepspeed",
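Net effect of the setup.py change, sketched for clarity: with the `auto-gptq` filter removed from `parse_requirements()` and the `gptq` extra deleted, a plain editable install now pulls in auto-gptq by default:

```bash
pip3 install -e .                 # now includes auto-gptq from requirements.txt
pip3 show auto-gptq flash-attn    # quick check that both resolved
```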
