# PEFT
With `PEFT`, you can use QLoRA out of the box with `LoraConfig` and a 4-bit base model.

Please review the [bitsandbytes section in the PEFT docs](https://huggingface.co/docs/peft/developer_guides/quantization#quantize-a-model).
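
A minimal sketch of that flow, assuming an illustrative checkpoint and illustrative LoRA hyperparameters:

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit via bitsandbytes.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # illustrative checkpoint
    quantization_config=quantization_config,
)

# Attach trainable LoRA adapters on top of the frozen 4-bit weights (QLoRA).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```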

# Accelerate

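Accelerate exposes `load_and_quantize_model` for quantizing a model with bitsandbytes on the fly. A minimal sketch, assuming an illustrative checkpoint and a placeholder weights location:

```py
from accelerate import init_empty_weights
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model
from transformers import AutoConfig, AutoModelForCausalLM

# First create an empty (meta-device) skeleton of the model.
with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_config(
        AutoConfig.from_pretrained("facebook/opt-350m")  # illustrative checkpoint
    )

# Then load the real weights into it, quantizing to 8-bit as they come in.
bnb_quantization_config = BnbQuantizationConfig(load_in_8bit=True)
quantized_model = load_and_quantize_model(
    empty_model,
    bnb_quantization_config=bnb_quantization_config,
    weights_location="path/to/weights",  # placeholder: directory with the saved state dict
    device_map="auto",
)
```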

For further details, e.g. model saving, CPU offloading, and fine-tuning, please review the [`bitsandbytes` section in the Accelerate docs](https://huggingface.co/docs/accelerate/en/usage_guides/quantization).



# PyTorch Lightning and Lightning Fabric

Bitsandbytes is available from within both
- [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/), a deep learning framework for professional AI researchers and machine learning engineers who need maximal flexibility without sacrificing performance at scale;
- and [Lightning Fabric](https://lightning.ai/docs/fabric/stable/), a fast and lightweight way to scale PyTorch models without boilerplate.

Please review the [bitsandbytes section in the PyTorch Lightning docs](https://lightning.ai/docs/pytorch/stable/common/precision_intermediate.html#quantization-via-bitsandbytes).
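
For example, a minimal sketch using the `BitsandbytesPrecision` plugin (the same plugin is also importable from `lightning.fabric.plugins` for Fabric):

```py
import torch
from lightning.pytorch import Trainer
from lightning.pytorch.plugins import BitsandbytesPrecision

# Quantize the model's linear layers to NF4 and run compute in bfloat16.
precision = BitsandbytesPrecision(mode="nf4", dtype=torch.bfloat16)
trainer = Trainer(plugins=precision)
```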


# Lit-GPT

Bitsandbytes is integrated into [Lit-GPT](https://github.com/Lightning-AI/lit-gpt), a hackable implementation of state-of-the-art open-source large language models built on Lightning Fabric. There it can be used for quantization during training, finetuning, and inference.

Please review the [bitsandbytes section in the Lit-GPT quantization docs](https://github.com/Lightning-AI/lit-gpt/blob/main/tutorials/quantize.md).



# Trainer for the optimizers

You can use any of the 8-bit and/or paged optimizers by simply passing them to the `transformers.Trainer` class on initialization. All bnb optimizers are supported by passing the correct string in the `optim` attribute of `TrainingArguments`, e.g. `paged_adamw_32bit`.
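
A minimal sketch, where `model` and `train_dataset` are assumed to exist already and the output directory is a placeholder:

```py
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",       # placeholder
    optim="paged_adamw_32bit",  # any supported bnb optimizer string works here
)
trainer = Trainer(
    model=model,                  # assumed: a prepared model
    args=training_args,
    train_dataset=train_dataset,  # assumed: an existing dataset
)
trainer.train()
```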