From f5af471df8b9cf46ee0b408bb87f122ca2c852a2 Mon Sep 17 00:00:00 2001
From: rasbt
Date: Mon, 26 Feb 2024 09:51:49 -0600
Subject: [PATCH] mention PT lightning

---
 docs/source/integrations.mdx | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/docs/source/integrations.mdx b/docs/source/integrations.mdx
index 8ee5e3844..67d50d6a0 100644
--- a/docs/source/integrations.mdx
+++ b/docs/source/integrations.mdx
@@ -31,9 +31,18 @@ Please review the [bitsandbytes section in the Accelerate docs](https://huggingf
 
+# PyTorch Lightning and Lightning Fabric
+
+Bitsandbytes is available from within both:
+- [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/), a deep learning framework for professional AI researchers and machine learning engineers who need maximal flexibility without sacrificing performance at scale;
+- [Lightning Fabric](https://lightning.ai/docs/fabric/stable/), a fast and lightweight way to scale PyTorch models without boilerplate.
+
+Please review the [bitsandbytes section in the PyTorch Lightning docs](https://lightning.ai/docs/pytorch/stable/common/precision_intermediate.html#quantization-via-bitsandbytes).
+
+
 # Lit-GPT
 
-Bitsandbytes is integrated into [Lit-GPT](https://github.com/Lightning-AI/lit-gpt), a hackable implementation of state-of-the-art open-source large language models released under the Apache 2.0 license, where it can be used for quantization during training, finetuning, and inference.
+Bitsandbytes is integrated into [Lit-GPT](https://github.com/Lightning-AI/lit-gpt), a hackable implementation of state-of-the-art open-source large language models based on Lightning Fabric, where it can be used for quantization during training, finetuning, and inference.
 
 Please review the [bitsandbytes section in the Lit-GPT quantization docs](https://github.com/Lightning-AI/lit-gpt/blob/main/tutorials/quantize.md).