
Commit

fix
younesbelkada committed Feb 2, 2024
1 parent c26645b commit ab42c5f
Showing 2 changed files with 3 additions and 2 deletions.
docs/source/optimizers.mdx (4 changes: 2 additions & 2 deletions)
@@ -96,7 +96,7 @@ If you want to optimize some unstable parameters with 32-bit Adam and others wit

For global overrides in many different places in your code you can do:

-```python
+```py
import torch
import bitsandbytes as bnb

@@ -120,7 +120,7 @@ mng.override_config([model.special.weight, model.also_special.weight],
Possible options for the config override are: `betas, eps, weight_decay, lr, optim_bits, min_8bit_size, percentile_clipping, block_wise, max_unorm`

For overrides for particular layers we recommend overriding locally in each module. You can do this by passing the module, the parameter, and its attribute name to the GlobalOptimManager:
-```python
+```py
class MyModule(torch.nn.Module):
def __init__(din, dout):
super(MyModule, self).__init__()
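
For reference, the two retagged blocks in this hunk document bitsandbytes' `GlobalOptimManager` overrides. Below is a minimal sketch of that usage, not part of this commit: the class `MyModel`, its layer names (`fc1`, `special`, `also_special`), and the hyperparameter values are illustrative assumptions taken from the diff context.

```py
import torch
import bitsandbytes as bnb

class MyModel(torch.nn.Module):  # illustrative toy model
    def __init__(self, din=128, dout=128):
        super().__init__()
        self.fc1 = torch.nn.Linear(din, dout)
        self.special = torch.nn.Linear(din, dout)
        self.also_special = torch.nn.Linear(din, dout)
        # local, per-layer override: keep this layer's optimizer state in 32-bit
        bnb.optim.GlobalOptimManager.get_instance().register_module_override(
            self.fc1, "weight", {"optim_bits": 32}
        )

    def forward(self, x):
        return self.also_special(self.special(self.fc1(x)))

mng = bnb.optim.GlobalOptimManager.get_instance()

model = MyModel()
mng.register_parameters(model.parameters())  # register while parameters are still on CPU
model = model.cuda()

# 8-bit optimizer states for all parameters by default
adam = bnb.optim.Adam(model.parameters(), lr=1e-3, optim_bits=8)

# global overrides: a single key/value pair, or a key_value_dict for several parameters at once
mng.override_config(model.special.weight, "optim_bits", 32)
mng.override_config(
    [model.special.weight, model.also_special.weight],
    key_value_dict={"lr": 1e-5, "betas": (0.9, 0.98)},
)
```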
docs/source/resources.mdx (1 change: 1 addition & 0 deletions)
@@ -30,6 +30,7 @@ Authors: Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
+```

## [The case for 4-bit precision: k-bit Inference Scaling Laws (Dec 2022)](https://arxiv.org/abs/2212.09720)
Authors: Tim Dettmers, Luke Zettlemoyer
