Commit

keep gate in fp32 for loras
winglian committed Jan 12, 2024
1 parent 2dc4310 commit 05f4555
Showing 1 changed file with 1 addition and 1 deletion.
src/axolotl/utils/models.py (1 addition, 1 deletion)

```diff
@@ -590,7 +590,7 @@ def load_model(
     # make sure these are fp32 per Ramesh et al. (2021)
     embedding_modules = get_linear_embedding_layers(cfg.model_config_type)
     for name, module in model.named_modules():
-        if "norm" in name:
+        if any(m in name for m in ["norm", "gate"]):
             module.to(torch.float32)
     if model_config.model_type == "btlm":
         # don't upcast lm_head for btlm
```
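For context, the changed condition upcasts any module whose name contains "norm", and now also "gate", to float32 before LoRA training. Below is a minimal, self-contained sketch of that behavior; the toy model and its module names (input_layernorm, gate_proj, lm_head) are illustrative stand-ins, not axolotl's actual load_model.

```python
# Minimal sketch of the upcast loop, assuming a toy fp16 model; the module
# names below are illustrative, not taken from axolotl itself.
import torch
import torch.nn as nn

model = nn.Sequential()
model.add_module("input_layernorm", nn.LayerNorm(16))
model.add_module("gate_proj", nn.Linear(16, 16))
model.add_module("lm_head", nn.Linear(16, 32))
model.half()  # pretend the base model was loaded in fp16 for LoRA training

# Same condition as the patched line: upcast both norm and gate modules.
for name, module in model.named_modules():
    if any(m in name for m in ["norm", "gate"]):
        module.to(torch.float32)

for name, param in model.named_parameters():
    print(name, param.dtype)
# input_layernorm.* and gate_proj.* are now torch.float32; lm_head.* stays torch.float16
```

Running the sketch shows that only the norm and gate parameters are promoted to fp32 while everything else keeps its original precision, which is the effect of the one-line change above.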
