diff --git a/docs/source/en/optimization/fp16.mdx b/docs/source/en/optimization/fp16.mdx
index c17142575311..5b7b32d6208a 100644
--- a/docs/source/en/optimization/fp16.mdx
+++ b/docs/source/en/optimization/fp16.mdx
@@ -20,7 +20,6 @@ We'll discuss how the following settings impact performance and memory.
 | ---------------- | ------- | ------- |
 | original         | 9.50s   | x1      |
 | cuDNN auto-tuner | 9.37s   | x1.01   |
-| autocast (fp16)  | 5.47s   | x1.74   |
 | fp16             | 3.61s   | x2.63   |
 | channels last    | 3.30s   | x2.88   |
 | traced UNet      | 3.21s   | x2.96   |
@@ -54,27 +53,9 @@ import torch
 torch.backends.cuda.matmul.allow_tf32 = True
 ```
 
-## Automatic mixed precision (AMP)
-
-If you use a CUDA GPU, you can take advantage of `torch.autocast` to perform inference roughly twice as fast at the cost of slightly lower precision. All you need to do is put your inference call inside an `autocast` context manager. The following example shows how to do it using Stable Diffusion text-to-image generation as an example:
-
-```Python
-from torch import autocast
-from diffusers import StableDiffusionPipeline
-
-pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-pipe = pipe.to("cuda")
-
-prompt = "a photo of an astronaut riding a horse on mars"
-with autocast("cuda"):
-    image = pipe(prompt).images[0]
-```
-
-Despite the precision loss, in our experience the final image results look the same as the `float32` versions. Feel free to experiment and report back!
-
 ## Half precision weights
 
-To save more GPU memory and get even more speed, you can load and run the model weights directly in half precision. This involves loading the float16 version of the weights, which was saved to a branch named `fp16`, and telling PyTorch to use the `float16` type when loading them:
+To save more GPU memory and get more speed, you can load and run the model weights directly in half precision. This involves loading the float16 version of the weights, which was saved to a branch named `fp16`, and telling PyTorch to use the `float16` type when loading them:
 
 ```Python
 pipe = StableDiffusionPipeline.from_pretrained(
@@ -88,6 +69,11 @@ prompt = "a photo of an astronaut riding a horse on mars"
 image = pipe(prompt).images[0]
 ```
 
+<Tip warning={true}>
+  It is strongly discouraged to make use of [`torch.autocast`](https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines as it can lead to black images and is always slower than using pure
+  float16 precision.
+</Tip>
+
 ## Sliced attention for additional memory savings
 
 For even additional memory savings, you can use a sliced version of attention that performs the computation in steps instead of all at once.
diff --git a/examples/community/README.md b/examples/community/README.md
index a848f74f2a29..fcf71e1659c1 100644
--- a/examples/community/README.md
+++ b/examples/community/README.md
@@ -640,7 +640,6 @@ from diffusers import DiffusionPipeline
 from PIL import Image
 import requests
-from torch import autocast
 
 processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
 model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
 
@@ -659,8 +658,7 @@ image = Image.open(requests.get(url, stream=True).raw).resize((512, 512))
 text = "a glass" # will mask out this text
 prompt = "a cup" # the masked out region will be replaced with this
 
-with autocast("cuda"):
-    image = pipe(image=image, text=text, prompt=prompt).images[0]
+image = pipe(image=image, text=text, prompt=prompt).images[0]
 ```
 
 ### Bit Diffusion
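For reference, below is a minimal sketch of the pattern the updated `fp16.mdx` text points to: loading the pipeline directly in half precision and calling it without any `autocast` context manager. The full `from_pretrained` call is elided in the hunk above, so the exact `revision="fp16"` and `torch_dtype=torch.float16` arguments here are assumptions based on the surrounding prose about the float16 weights branch; the checkpoint name and prompt are taken from the removed AMP example.

```Python
# Sketch only: pure-float16 inference without torch.autocast, as the revised
# docs recommend. The exact from_pretrained kwargs are assumed, not copied
# verbatim from the (elided) snippet in the diff above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    revision="fp16",            # float16 weights stored on the `fp16` branch
    torch_dtype=torch.float16,  # keep the whole pipeline in half precision
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
# No autocast context manager here -- the pipeline runs end-to-end in float16.
image = pipe(prompt).images[0]
```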