lit-gpt integration
rasbt committed Feb 26, 2024
1 parent 1f36bd4 commit 1715fd3
Showing 1 changed file with 10 additions and 0 deletions.
10 changes: 10 additions & 0 deletions docs/source/integrations.mdx
@@ -29,6 +29,16 @@ Bitsandbytes is also easily usable from within Accelerate.

Please review the [bitsandbytes section in the Accelerate docs](https://huggingface.co/docs/accelerate/en/usage_guides/quantization).



# Lit-GPT

Bitsandbytes is integrated into [Lit-GPT](https://github.com/Lightning-AI/lit-gpt), a hackable implementation of state-of-the-art open-source large language models released under the Apache 2.0 license, where it can be used for quantization during training, finetuning, and inference.

Please review the [bitsandbytes section in the Lit-GPT quantization docs](https://github.com/Lightning-AI/lit-gpt/blob/main/tutorials/quantize.md).



# Trainer for the optimizers

You can use any of the 8-bit and/or paged optimizers by simply passing them to the `transformers.Trainer` class on initialization. All bnb optimizers are supported by passing the correct string to the `optim` attribute of `TrainingArguments`, e.g. `paged_adamw_32bit`.
