diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 039139b95..feb6c766e 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -1,6 +1,6 @@
 repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.1.15
+    rev: v0.2.0
     hooks:
       - id: ruff
        args:
diff --git a/docs/source/faqs.mdx b/docs/source/faqs.mdx
index 801a27b15..b9549e9d8 100644
--- a/docs/source/faqs.mdx
+++ b/docs/source/faqs.mdx
@@ -4,4 +4,4 @@
 Please submit your questions in [this Github Discussion thread](https://github.c
 We'll pick the most generally applicable ones and post the QAs here or integrate them into the general documentation (also feel free to submit doc PRs, please).
 
-# ... under construction ...
\ No newline at end of file
+# ... under construction ...
diff --git a/docs/source/moduletree.mdx b/docs/source/moduletree.mdx
index ec372f9a0..d117f90c0 100644
--- a/docs/source/moduletree.mdx
+++ b/docs/source/moduletree.mdx
@@ -2,4 +2,4 @@
 
 - **bitsandbytes.functional**: Contains quantization functions (4-bit & 8-bit) and stateless 8-bit optimizer update functions.
 - **bitsandbytes.nn.modules**: Contains stable embedding layer with automatic 32-bit optimizer overrides (important for NLP stability)
-- **bitsandbytes.optim**: Contains 8-bit optimizers.
\ No newline at end of file
+- **bitsandbytes.optim**: Contains 8-bit optimizers.
diff --git a/docs/source/optimizers.mdx b/docs/source/optimizers.mdx
index 04738a439..3a6c8ca1f 100644
--- a/docs/source/optimizers.mdx
+++ b/docs/source/optimizers.mdx
@@ -26,7 +26,7 @@ bnb.nn.StableEmbedding(...)
 
 The arguments passed are the same as standard Adam. For NLP models we recommend also to use the StableEmbedding layers which improves results and helps with stable 8-bit optimization.
 
-## Overview of supported 8-bit optimizers 
+## Overview of supported 8-bit optimizers
 
 TOOD: List here all optimizers in `bitsandbytes/optim/__init__.py`
 TODO (future) have an automated API docs through doc-builder