diff --git a/docs/source/usage_guides/big_modeling.md b/docs/source/usage_guides/big_modeling.md
index 20c9b3584d2..b8c5a1ac724 100644
--- a/docs/source/usage_guides/big_modeling.md
+++ b/docs/source/usage_guides/big_modeling.md
@@ -130,7 +130,7 @@ As a brief example, we will look at using `transformers` and loading in Big Scie
 ```py
 from transformers import AutoModelForSeq2SeqLM
 
-model = AutoModelForSeq2SeqLM("bigscience/T0pp", device_map="auto")
+model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
 ```
 
 After loading the model in, the initial steps from before to prepare a model have all been done and the model is fully
@@ -140,11 +140,12 @@ specifying the precision the model is loaded into as well, through the `torch_dt
 ```py
+import torch
 from transformers import AutoModelForSeq2SeqLM
 
-model = AutoModelForSeq2SeqLM("bigscience/T0pp", device_map="auto", torch_dtype=torch.float16)
+model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto", torch_dtype=torch.float16)
 ```
 
 To learn more about this, check out the 🤗 Transformers documentation available [here](https://huggingface.co/docs/transformers/main/en/main_classes/model#large-model-loading).
 
 ## Where to go from here
 
-For a much more detailed look at big model inference, be sure to check out the [Conceptual Guide on it](../concept_guides/big_model_inference)
\ No newline at end of file
+For a much more detailed look at big model inference, be sure to check out the [Conceptual Guide on it](../concept_guides/big_model_inference)