From 77930f8a01d5a18af88335c60b86068f10b647f7 Mon Sep 17 00:00:00 2001 From: Steven Liu <59462357+stevhliu@users.noreply.github.com> Date: Tue, 31 Oct 2023 09:44:51 -0700 Subject: [PATCH] [docs] Update CPU/GPU inference docs (#26881) * first draft * remove non-existent paths * edits * feedback * feedback and optimum * Apply suggestions from code review Co-authored-by: regisss <15324346+regisss@users.noreply.github.com> Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com> * redirect to correct doc * _redirects.yml --------- Co-authored-by: regisss <15324346+regisss@users.noreply.github.com> Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com> --- docs/source/en/_redirects.yml | 3 + docs/source/en/_toctree.yml | 8 +- docs/source/en/perf_infer_cpu.md | 108 +++++--- docs/source/en/perf_infer_gpu_many.md | 124 ---------- docs/source/en/perf_infer_gpu_one.md | 342 ++++++++++---------------- docs/source/en/perf_infer_special.md | 18 -- docs/source/en/performance.md | 2 +- src/transformers/utils/logging.py | 4 +- utils/not_doctested.txt | 2 - 9 files changed, 215 insertions(+), 396 deletions(-) create mode 100644 docs/source/en/_redirects.yml delete mode 100644 docs/source/en/perf_infer_gpu_many.md delete mode 100644 docs/source/en/perf_infer_special.md diff --git a/docs/source/en/_redirects.yml b/docs/source/en/_redirects.yml new file mode 100644 index 00000000000000..0dd4d2bfb34b2f --- /dev/null +++ b/docs/source/en/_redirects.yml @@ -0,0 +1,3 @@ +# Optimizing inference + +perf_infer_gpu_many: perf_infer_gpu_one \ No newline at end of file diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml index 1a76762d160f88..4d434b7a18c154 100644 --- a/docs/source/en/_toctree.yml +++ b/docs/source/en/_toctree.yml @@ -155,13 +155,9 @@ title: Efficient training techniques - sections: - local: perf_infer_cpu - title: Inference on CPU + title: CPU inference - local: perf_infer_gpu_one - title: Inference on one GPU - - local: perf_infer_gpu_many - title: Inference on many GPUs - - local: perf_infer_special - title: Inference on Specialized Hardware + title: GPU inference title: Optimizing inference - local: big_models title: Instantiating a big model diff --git a/docs/source/en/perf_infer_cpu.md b/docs/source/en/perf_infer_cpu.md index a7a524ae1ef039..f10fc01e7ca6f1 100644 --- a/docs/source/en/perf_infer_cpu.md +++ b/docs/source/en/perf_infer_cpu.md @@ -13,46 +13,48 @@ rendered properly in your Markdown viewer. --> -# Efficient Inference on CPU +# CPU inference -This guide focuses on inferencing large models efficiently on CPU. +With some optimizations, it is possible to efficiently run large model inference on a CPU. One of these optimization techniques involves compiling the PyTorch code into an intermediate format for high-performance environments like C++. The other technique fuses multiple operations into one kernel to reduce the overhead of running each operation separately. -## `BetterTransformer` for faster inference +You'll learn how to use [BetterTransformer](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) for faster inference, and how to convert your PyTorch code to [TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html). 
If you're using an Intel CPU, you can also use [graph optimizations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features.html#graph-optimization) from [Intel Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/index.html) to boost inference speed even more. Finally, learn how to use 🤗 Optimum to accelerate inference with ONNX Runtime or OpenVINO (if you're using an Intel CPU). -We have recently integrated `BetterTransformer` for faster inference on CPU for text, image and audio models. Check the documentation about this integration [here](https://huggingface.co/docs/optimum/bettertransformer/overview) for more details. +## BetterTransformer -## PyTorch JIT-mode (TorchScript) -TorchScript is a way to create serializable and optimizable models from PyTorch code. Any TorchScript program can be saved from a Python process and loaded in a process where there is no Python dependency. -Comparing to default eager mode, jit mode in PyTorch normally yields better performance for model inference from optimization methodologies like operator fusion. +BetterTransformer accelerates inference with its fastpath (native PyTorch specialized implementation of Transformer functions) execution. The two optimizations in the fastpath execution are: -For a gentle introduction to TorchScript, see the Introduction to [PyTorch TorchScript tutorial](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html#tracing-modules). +1. fusion, which combines multiple sequential operations into a single "kernel" to reduce the number of computation steps +2. skipping the inherent sparsity of padding tokens to avoid unnecessary computation with nested tensors -### IPEX Graph Optimization with JIT-mode -Intel® Extension for PyTorch provides further optimizations in jit mode for Transformers series models. It is highly recommended for users to take advantage of Intel® Extension for PyTorch with jit mode. Some frequently used operator patterns from Transformers models are already supported in Intel® Extension for PyTorch with jit mode fusions. Those fusion patterns like Multi-head-attention fusion, Concat Linear, Linear+Add, Linear+Gelu, Add+LayerNorm fusion and etc. are enabled and perform well. The benefit of the fusion is delivered to users in a transparent fashion. According to the analysis, ~70% of most popular NLP tasks in question-answering, text-classification, and token-classification can get performance benefits with these fusion patterns for both Float32 precision and BFloat16 Mixed precision. +BetterTransformer also converts all attention operations to use the more memory-efficient [scaled dot product attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention). -Check more detailed information for [IPEX Graph Optimization](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html). + -#### IPEX installation: +BetterTransformer is not supported for all models. Check this [list](https://huggingface.co/docs/optimum/bettertransformer/overview#supported-models) to see if a model supports BetterTransformer. -IPEX release is following PyTorch, check the approaches for [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/). + -### Usage of JIT-mode -To enable JIT-mode in Trainer for evaluaion or prediction, users should add `jit_mode_eval` in Trainer command arguments. 
+Before you start, make sure you have 🤗 Optimum [installed](https://huggingface.co/docs/optimum/installation). - +Enable BetterTransformer with the [`PreTrainedModel.to_bettertransformer`] method: -for PyTorch >= 1.14.0. JIT-mode could benefit any models for prediction and evaluaion since dict input is supported in jit.trace +```py +from transformers import AutoModelForCausalLM -for PyTorch < 1.14.0. JIT-mode could benefit models whose forward parameter order matches the tuple input order in jit.trace, like question-answering model -In the case where the forward parameter order does not match the tuple input order in jit.trace, like text-classification models, jit.trace will fail and we are capturing this with the exception here to make it fallback. Logging is used to notify users. +model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder") +model.to_bettertransformer() +``` - +## TorchScript + +TorchScript is an intermediate PyTorch model representation that can be run in production environments where performance is important. You can train a model in PyTorch and then export it to TorchScript to free the model from Python performance constraints. PyTorch [traces](https://pytorch.org/docs/stable/generated/torch.jit.trace.html) a model to return a [`ScriptFunction`] that is optimized with just-in-time compilation (JIT). Compared to the default eager mode, JIT mode in PyTorch typically yields better performance for inference using optimization techniques like operator fusion. -Take an example of the use cases on [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) +For a gentle introduction to TorchScript, see the [Introduction to PyTorch TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html) tutorial. +With the [`Trainer`] class, you can enable JIT mode for CPU inference by setting the `--jit_mode_eval` flag: -- Inference using jit mode on CPU: -
python run_qa.py \
+```bash
+python run_qa.py \
 --model_name_or_path csarron/bert-base-uncased-squad-v1 \
 --dataset_name squad \
 --do_eval \
@@ -60,10 +62,31 @@ Take an example of the use cases on [Transformers question-answering](https://gi
 --doc_stride 128 \
 --output_dir /tmp/ \
 --no_cuda \
---jit_mode_eval 
+--jit_mode_eval
+```
+
+
+
+For PyTorch >= 1.14.0, JIT mode can benefit any model for prediction and evaluation since the dict input is supported in `jit.trace`.
+
+For PyTorch < 1.14.0, JIT mode can benefit a model if its forward parameter order matches the tuple input order in `jit.trace`, such as a question-answering model. If the forward parameter order does not match the tuple input order in `jit.trace`, like a text classification model, `jit.trace` fails; the exception is caught so the model falls back to eager execution, and logging is used to notify users of the fallback.
+
+
+
+## IPEX graph optimization
+
+Intel® Extension for PyTorch (IPEX) provides further optimizations in JIT mode for Intel CPUs, and we recommend combining it with TorchScript for even faster performance. The IPEX [graph optimization](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html) fuses operations like Multi-head attention, Concat Linear, Linear + Add, Linear + Gelu, Add + LayerNorm, and more.
+
+To take advantage of these graph optimizations, make sure you have IPEX [installed](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html):
+
+```bash
+pip install intel_extension_for_pytorch
+```
-- Inference with IPEX using jit mode on CPU:
-
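+If you configure the [`Trainer`] in your own script instead of launching the example CLI, the same switches are exposed on [`TrainingArguments`]. Below is a minimal sketch (the checkpoint mirrors the example command, dataset preparation is omitted, and `use_ipex` assumes IPEX is installed as shown above):
+
+```py
+from transformers import AutoModelForQuestionAnswering, AutoTokenizer, Trainer, TrainingArguments
+
+model = AutoModelForQuestionAnswering.from_pretrained("csarron/bert-base-uncased-squad-v1")
+tokenizer = AutoTokenizer.from_pretrained("csarron/bert-base-uncased-squad-v1")
+
+# jit_mode_eval traces the model with TorchScript for evaluation/prediction,
+# and use_ipex applies the IPEX graph optimizations on top of the traced model
+training_args = TrainingArguments(output_dir="/tmp/", no_cuda=True, jit_mode_eval=True, use_ipex=True)
+trainer = Trainer(model=model, args=training_args, tokenizer=tokenizer)
+# predictions = trainer.predict(eval_dataset)  # eval_dataset is assumed to be prepared separately
+```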
python run_qa.py \
+Set the `--use_ipex` and `--jit_mode_eval` flags in the [`Trainer`] class to enable JIT mode with the graph optimizations:
+
+```bash
+python run_qa.py \
 --model_name_or_path csarron/bert-base-uncased-squad-v1 \
 --dataset_name squad \
 --do_eval \
@@ -71,5 +94,34 @@ Take an example of the use cases on [Transformers question-answering](https://gi
 --doc_stride 128 \
 --output_dir /tmp/ \
 --no_cuda \
---use_ipex \
---jit_mode_eval
+--use_ipex \ +--jit_mode_eval +``` + +## 🤗 Optimum + + + +Learn more details about using ORT with 🤗 Optimum in the [Optimum Inference with ONNX Runtime](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models) guide. This section only provides a brief and simple example. + + + +ONNX Runtime (ORT) is a model accelerator that runs inference on CPUs by default. ORT is supported by 🤗 Optimum which can be used in 🤗 Transformers, without making too many changes to your code. You only need to replace the 🤗 Transformers `AutoClass` with its equivalent [`~optimum.onnxruntime.ORTModel`] for the task you're solving, and load a checkpoint in the ONNX format. + +For example, if you're running inference on a question answering task, load the [optimum/roberta-base-squad2](https://huggingface.co/optimum/roberta-base-squad2) checkpoint which contains a `model.onnx` file: + +```py +from transformers import AutoTokenizer, pipeline +from optimum.onnxruntime import ORTModelForQuestionAnswering + +model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2") +tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2") + +onnx_qa = pipeline("question-answering", model=model, tokenizer=tokenizer) + +question = "What's my name?" +context = "My name is Philipp and I live in Nuremberg." +pred = onnx_qa(question, context) +``` + +If you have an Intel CPU, take a look at 🤗 [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) which supports a variety of compression techniques (quantization, pruning, knowledge distillation) and tools for converting models to the [OpenVINO](https://huggingface.co/docs/optimum/intel/inference) format for higher performance inference. diff --git a/docs/source/en/perf_infer_gpu_many.md b/docs/source/en/perf_infer_gpu_many.md deleted file mode 100644 index 2118b5ddb40431..00000000000000 --- a/docs/source/en/perf_infer_gpu_many.md +++ /dev/null @@ -1,124 +0,0 @@ - - -# Efficient Inference on a Multiple GPUs - -This document contains information on how to efficiently infer on a multiple GPUs. - - -Note: A multi GPU setup can use the majority of the strategies described in the [single GPU section](./perf_infer_gpu_one). You must be aware of simple techniques, though, that can be used for a better usage. - - - -## Flash Attention 2 - -Flash Attention 2 integration also works in a multi-GPU setup, check out the appropriate section in the [single GPU section](./perf_infer_gpu_one#Flash-Attention-2) - -## BetterTransformer - -[BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview) converts 🤗 Transformers models to use the PyTorch-native fastpath execution, which calls optimized kernels like Flash Attention under the hood. - -BetterTransformer is also supported for faster inference on single and multi-GPU for text, image, and audio models. - - - -Flash Attention can only be used for models using fp16 or bf16 dtype. Make sure to cast your model to the appropriate dtype before using BetterTransformer. - - - -### Decoder models - -For text models, especially decoder-based models (GPT, T5, Llama, etc.), the BetterTransformer API converts all attention operations to use the [`torch.nn.functional.scaled_dot_product_attention` operator](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention) (SDPA) that is only available in PyTorch 2.0 and onwards. 
- -To convert a model to BetterTransformer: - -```python -from transformers import AutoModelForCausalLM - -model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m") -# convert the model to BetterTransformer -model.to_bettertransformer() - -# Use it for training or inference -``` - -SDPA can also call [Flash Attention](https://arxiv.org/abs/2205.14135) kernels under the hood. To enable Flash Attention or to check that it is available in a given setting (hardware, problem size), use [`torch.backends.cuda.sdp_kernel`](https://pytorch.org/docs/master/backends.html#torch.backends.cuda.sdp_kernel) as a context manager: - - -```diff -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer - -tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") -model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m").to("cuda") -# convert the model to BetterTransformer -model.to_bettertransformer() - -input_text = "Hello my dog is cute and" -inputs = tokenizer(input_text, return_tensors="pt").to("cuda") - -+ with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False): - outputs = model.generate(**inputs) - -print(tokenizer.decode(outputs[0], skip_special_tokens=True)) -``` - -If you see a bug with a traceback saying - -```bash -RuntimeError: No available kernel. Aborting execution. -``` - -try using the PyTorch nightly version, which may have a broader coverage for Flash Attention: - -```bash -pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118 -``` - -Have a look at this [blog post](https://pytorch.org/blog/out-of-the-box-acceleration/) to learn more about what is possible with the BetterTransformer + SDPA API. - -### Encoder models - -For encoder models during inference, BetterTransformer dispatches the forward call of encoder layers to an equivalent of [`torch.nn.TransformerEncoderLayer`](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html) that will execute the fastpath implementation of the encoder layers. - -Because `torch.nn.TransformerEncoderLayer` fastpath does not support training, it is dispatched to `torch.nn.functional.scaled_dot_product_attention` instead, which does not leverage nested tensors but can use Flash Attention or Memory-Efficient Attention fused kernels. - -More details about BetterTransformer performance can be found in this [blog post](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2), and you can learn more about BetterTransformer for encoder models in this [blog](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/). - - -## Advanced usage: mixing FP4 (or Int8) and BetterTransformer - -You can combine the different methods described above to get the best performance for your model. 
For example, you can use BetterTransformer with FP4 mixed-precision inference + flash attention: - -```py -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig - -quantization_config = BitsAndBytesConfig( - load_in_4bit=True, - bnb_4bit_compute_dtype=torch.float16 -) - -tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") -model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config) - -input_text = "Hello my dog is cute and" -inputs = tokenizer(input_text, return_tensors="pt").to("cuda") - -with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False): - outputs = model.generate(**inputs) - -print(tokenizer.decode(outputs[0], skip_special_tokens=True)) -``` \ No newline at end of file diff --git a/docs/source/en/perf_infer_gpu_one.md b/docs/source/en/perf_infer_gpu_one.md index 39f2ca22b1f040..06e91c5502266e 100644 --- a/docs/source/en/perf_infer_gpu_one.md +++ b/docs/source/en/perf_infer_gpu_one.md @@ -13,40 +13,38 @@ rendered properly in your Markdown viewer. --> -# Efficient Inference on a Single GPU +# GPU inference -In addition to this guide, relevant information can be found as well in [the guide for training on a single GPU](perf_train_gpu_one) and [the guide for inference on CPUs](perf_infer_cpu). - -## Flash Attention 2 +GPUs are the standard choice of hardware for machine learning, unlike CPUs, because they are optimized for memory bandwidth and parallelism. To keep up with the larger sizes of modern models or to run these large models on existing and older hardware, there are several optimizations you can use to speed up GPU inference. In this guide, you'll learn how to use FlashAttention-2 (a more memory-efficient attention mechanism), BetterTransformer (a PyTorch native fastpath execution), and bitsandbytes to quantize your model to a lower precision. Finally, learn how to use 🤗 Optimum to accelerate inference with ONNX Runtime on Nvidia GPUs. -Note that this feature is experimental and might considerably change in future versions. For instance, the Flash Attention 2 API might migrate to `BetterTransformer` API in the near future. +The majority of the optimizations described here also apply to multi-GPU setups! -Flash Attention 2 can considerably speed up transformer-based models' training and inference speed. Flash Attention 2 has been introduced in the [official Flash Attention repository](https://github.com/Dao-AILab/flash-attention) by Tri Dao et al. The scientific paper on Flash Attention can be found [here](https://arxiv.org/abs/2205.14135). +## FlashAttention-2 -Make sure to follow the installation guide on the repository mentioned above to properly install Flash Attention 2. Once that package is installed, you can benefit from this feature. + -We natively support Flash Attention 2 for the following models: +FlashAttention-2 is experimental and may change considerably in future versions. -- Llama -- Mistral -- Falcon -- [GPTBigCode (Starcoder)](model_doc/gpt_bigcode#) + -You can request to add Flash Attention 2 support for more models by opening an issue on GitHub, and even open a Pull Request to integrate the changes. 
The supported models can be used for inference and training, including training with padding tokens - *which is currently not supported for `BetterTransformer` API below.* +[FlashAttention-2](https://huggingface.co/papers/2205.14135) is a faster and more efficient implementation of the standard attention mechanism that can significantly speedup inference by: - +1. additionally parallelizing the attention computation over sequence length +2. partitioning the work between GPU threads to reduce communication and shared memory reads/writes between them -Flash Attention 2 can only be used when the models' dtype is `fp16` or `bf16` and runs only on NVIDIA-GPU devices. Make sure to cast your model to the appropriate dtype and load them on a supported device before using that feature. - - +FlashAttention-2 supports inference with Llama, Mistral, and Falcon models. You can request to add FlashAttention-2 support for another model by opening a GitHub Issue or Pull Request. -### Quick usage +Before you begin, make sure you have FlashAttention-2 installed (see the [installation](https://github.com/Dao-AILab/flash-attention?tab=readme-ov-file#installation-and-features) guide for more details about prerequisites): -To enable Flash Attention 2 in your model, add `use_flash_attention_2` in the `from_pretrained` arguments: +```bash +pip install flash-attn --no-build-isolation +``` + +To enable FlashAttention-2, add the `use_flash_attention_2` parameter to [`~AutoModelForCausalLM.from_pretrained`]: ```python import torch @@ -62,74 +60,29 @@ model = AutoModelForCausalLM.from_pretrained( ) ``` -And use it for generation or fine-tuning. - -### Expected speedups - -You can benefit from considerable speedups for fine-tuning and inference, especially for long sequences. However, since Flash Attention does not support computing attention scores with padding tokens under the hood, we must manually pad / unpad the attention scores for batched inference when the sequence contains padding tokens. This leads to a significant slowdown for batched generations with padding tokens. - -To overcome this, one should use Flash Attention without padding tokens in the sequence for training (e.g., by packing a dataset, i.e., concatenating sequences until reaching the maximum sequence length. An example is provided [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py#L516). - -Below is the expected speedup you can get for a simple forward pass on [tiiuae/falcon-7b](https://hf.co/tiiuae/falcon-7b) with a sequence length of 4096 and various batch sizes, without padding tokens: - -
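+Once a model is loaded with `use_flash_attention_2=True`, it is used like any other model. A small end-to-end sketch (the checkpoint and prompt are illustrative, and a CUDA device with `fp16`/`bf16` weights is assumed):
+
+```py
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model_id = "tiiuae/falcon-7b"  # illustrative checkpoint; any FlashAttention-2 supported model works
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(
+    model_id,
+    torch_dtype=torch.bfloat16,
+    use_flash_attention_2=True,
+).to("cuda")
+
+inputs = tokenizer("Hello my dog is cute and", return_tensors="pt").to("cuda")
+outputs = model.generate(**inputs, max_new_tokens=20)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```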
- -
- -Below is the expected speedup you can get for a simple forward pass on [`meta-llama/Llama-7b-hf`](https://hf.co/meta-llama/Llama-7b-hf) with a sequence length of 4096 and various batch sizes, without padding tokens: - -
- -
- -For sequences with padding tokens (training with padding tokens or generating with padding tokens), we need to unpad / pad the input sequences to compute correctly the attention scores. For relatively small sequence length, on pure forward pass, this creates an overhead leading to a small speedup (below 30% of the input has been filled with padding tokens). - -
- -
- -But for large sequence length you can benefit from interesting speedup for pure inference (also training) - -Note that Flash Attention makes the attention computation more memory efficient, meaning you can train with much larger sequence lengths without facing CUDA OOM issues. It can lead up to memory reduction up to 20 for large sequence length. Check out [the official flash attention repository](https://github.com/Dao-AILab/flash-attention) for more details. - -
- -
- - -### Advanced usage - -You can combine this feature with many exisiting feature for model optimization. Check out few examples below: + -### Combining Flash Attention 2 and 8-bit models +FlashAttention-2 can only be used when the model's dtype is `fp16` or `bf16`, and it only runs on Nvidia GPUs. Make sure to cast your model to the appropriate dtype and load them on a supported device before using FlashAttention-2. + + -You can combine this feature together with 8-bit quantization: +FlashAttention-2 can be combined with other optimization techniques like quantization to further speedup inference. For example, you can combine FlashAttention-2 with 8-bit or 4-bit quantization: -```python +```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM model_id = "tiiuae/falcon-7b" tokenizer = AutoTokenizer.from_pretrained(model_id) +# load in 8bit model = AutoModelForCausalLM.from_pretrained( model_id, load_in_8bit=True, use_flash_attention_2=True, ) -``` - -### Combining Flash Attention 2 and 4-bit models - -You can combine this feature together with 4-bit quantization: - -```python -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM - -model_id = "tiiuae/falcon-7b" -tokenizer = AutoTokenizer.from_pretrained(model_id) +# load in 4bit model = AutoModelForCausalLM.from_pretrained( model_id, load_in_4bit=True, @@ -137,85 +90,77 @@ model = AutoModelForCausalLM.from_pretrained( ) ``` -### Combining Flash Attention 2 and PEFT - -You can combine this feature together with PEFT for training adapters using Flash Attention 2 under the hood: +### Expected speedups -```python -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM -from peft import LoraConfig +You can benefit from considerable speedups for inference, especially for inputs with long sequences. However, since FlashAttention-2 does not support computing attention scores with padding tokens, you must manually pad/unpad the attention scores for batched inference when the sequence contains padding tokens. This leads to a significant slowdown for batched generations with padding tokens. -model_id = "tiiuae/falcon-7b" -tokenizer = AutoTokenizer.from_pretrained(model_id) +To overcome this, you should use FlashAttention-2 without padding tokens in the sequence during training (by packing a dataset or [concatenating sequences](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py#L516) until reaching the maximum sequence length). -model = AutoModelForCausalLM.from_pretrained( - model_id, - load_in_4bit=True, - use_flash_attention_2=True, -) +For a single forward pass on [tiiuae/falcon-7b](https://hf.co/tiiuae/falcon-7b) with a sequence length of 4096 and various batch sizes without padding tokens, the expected speedup is: -lora_config = LoraConfig( - r=8, - task_type="CAUSAL_LM" -) +
+ +
-model.add_adapter(lora_config) +For a single forward pass on [meta-llama/Llama-7b-hf](https://hf.co/meta-llama/Llama-7b-hf) with a sequence length of 4096 and various batch sizes without padding tokens, the expected speedup is: -... # train your model -``` +
+ +
-## BetterTransformer +For sequences with padding tokens (generating with padding tokens), you need to unpad/pad the input sequences to correctly compute the attention scores. With a relatively small sequence length, a single forward pass creates overhead leading to a small speedup (in the example below, 30% of the input is filled with padding tokens): -[BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview) converts 🤗 Transformers models to use the PyTorch-native fastpath execution, which calls optimized kernels like Flash Attention under the hood. +
+ +
-BetterTransformer is also supported for faster inference on single and multi-GPU for text, image, and audio models. +But for larger sequence lengths, you can expect even more speedup benefits: -Flash Attention can only be used for models using fp16 or bf16 dtype. Make sure to cast your model to the appropriate dtype before using BetterTransformer. - - +FlashAttention is more memory efficient, meaning you can train on much larger sequence lengths without running into out-of-memory issues. You can potentially reduce memory usage up to 20x for larger sequence lengths. Take a look at the [flash-attention](https://github.com/Dao-AILab/flash-attention) repository for more details. -### Encoder models + -PyTorch-native [`nn.MultiHeadAttention`](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) attention fastpath, called BetterTransformer, can be used with Transformers through the integration in the [🤗 Optimum library](https://huggingface.co/docs/optimum/bettertransformer/overview). +
+ +
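+If you want to reproduce a rough version of these measurements on your own hardware, a simple timing sketch with plain PyTorch utilities can be used; it assumes a CUDA device and that `model` and `inputs` were prepared as in the snippets above, once with and once without `use_flash_attention_2`:
+
+```py
+import time
+
+import torch
+
+def benchmark_forward(model, inputs, warmup=3, iters=10):
+    """Return the average forward-pass latency in seconds."""
+    with torch.no_grad():
+        # warm up so one-time kernel compilation and caching do not skew the timing
+        for _ in range(warmup):
+            model(**inputs)
+        torch.cuda.synchronize()
+        start = time.perf_counter()
+        for _ in range(iters):
+            model(**inputs)
+        torch.cuda.synchronize()
+    return (time.perf_counter() - start) / iters
+```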
-PyTorch's attention fastpath allows to speed up inference through kernel fusions and the use of [nested tensors](https://pytorch.org/docs/stable/nested.html). Detailed benchmarks can be found in [this blog post](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2). +## BetterTransformer -After installing the [`optimum`](https://github.com/huggingface/optimum) package, to use Better Transformer during inference, the relevant internal modules are replaced by calling [`~PreTrainedModel.to_bettertransformer`]: + -```python -model = model.to_bettertransformer() -``` +Check out our benchmarks with BetterTransformer and scaled dot product attention in the [Out of the box acceleration and memory savings of 🤗 decoder models with PyTorch 2.0](https://pytorch.org/blog/out-of-the-box-acceleration/) and learn more about the fastpath execution in the [BetterTransformer](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2) blog post. -The method [`~PreTrainedModel.reverse_bettertransformer`] allows to go back to the original modeling, which should be used before saving the model in order to use the canonical transformers modeling: + -```python -model = model.reverse_bettertransformer() -model.save_pretrained("saved_model") -``` +BetterTransformer accelerates inference with its fastpath (native PyTorch specialized implementation of Transformer functions) execution. The two optimizations in the fastpath execution are: -Have a look at this [blog post](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2) to learn more about what is possible to do with `BetterTransformer` API for encoder models. +1. fusion, which combines multiple sequential operations into a single "kernel" to reduce the number of computation steps +2. skipping the inherent sparsity of padding tokens to avoid unnecessary computation with nested tensors -### Decoder models +BetterTransformer also converts all attention operations to use the more memory-efficient [scaled dot product attention (SDPA)](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention), and it calls optimized kernels like [FlashAttention](https://huggingface.co/papers/2205.14135) under the hood. -For text models, especially decoder-based models (GPT, T5, Llama, etc.), the BetterTransformer API converts all attention operations to use the [`torch.nn.functional.scaled_dot_product_attention` operator](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention) (SDPA) that is only available in PyTorch 2.0 and onwards. +Before you start, make sure you have 🤗 Optimum [installed](https://huggingface.co/docs/optimum/installation). -To convert a model to BetterTransformer: +Then you can enable BetterTransformer with the [`PreTrainedModel.to_bettertransformer`] method: ```python -from transformers import AutoModelForCausalLM +model = model.to_bettertransformer() +``` -model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m") -# convert the model to BetterTransformer -model.to_bettertransformer() +You can return the original Transformers model with the [`~PreTrainedModel.reverse_bettertransformer`] method. 
You should use this before saving your model to use the canonical Transformers modeling:

-# Use it for training or inference
+```py
+model = model.reverse_bettertransformer()
+model.save_pretrained("saved_model")
```

-SDPA can also call [Flash Attention](https://arxiv.org/abs/2205.14135) kernels under the hood. To enable Flash Attention or to check that it is available in a given setting (hardware, problem size), use [`torch.backends.cuda.sdp_kernel`](https://pytorch.org/docs/master/backends.html#torch.backends.cuda.sdp_kernel) as a context manager:
+### FlashAttention

+SDPA can also call FlashAttention kernels under the hood. FlashAttention can only be used for models using the `fp16` or `bf16` dtype, so make sure to cast your model to the appropriate dtype before using it.
+
+To enable FlashAttention or to check whether it is available in a given setting (hardware, problem size), use [`torch.backends.cuda.sdp_kernel`](https://pytorch.org/docs/master/backends.html#torch.backends.cuda.sdp_kernel) as a context manager:

```diff
import torch
@@ -235,47 +180,32 @@ inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

-If you see a bug with a traceback saying
+If you see a bug with the traceback below, try using the nightly version of PyTorch, which may have broader coverage for FlashAttention:

```bash
-RuntimeError: No available kernel. Aborting execution.
-```
-
-try using the PyTorch nightly version, which may have a broader coverage for Flash Attention:
+RuntimeError: No available kernel. Aborting execution.

-```bash
+# install PyTorch nightly
pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
```

-Or make sure your model is correctly casted in float16 or bfloat16
-
-
-Have a look at [this detailed blogpost](https://pytorch.org/blog/out-of-the-box-acceleration/) to read more about what is possible to do with `BetterTransformer` + SDPA API.

-## `bitsandbytes` integration for FP4 mixed-precision inference

-You can install `bitsandbytes` and benefit from easy model compression on GPUs. Using FP4 quantization you can expect to reduce up to 8x the model size compared to its native full precision version. Check out below how to get started.
-
-
-Note that this feature can also be used in a multi GPU setup.
-
-
+## bitsandbytes

-### Requirements [[requirements-for-fp4-mixedprecision-inference]]
+bitsandbytes is a quantization library that includes support for 4-bit and 8-bit quantization. Quantization reduces your model size compared to its native full precision version, making it easier to fit large models onto GPUs with limited memory.

-- Latest `bitsandbytes` library
-`pip install bitsandbytes>=0.39.0`
+Make sure you have bitsandbytes and 🤗 Accelerate installed:

-- Install latest `accelerate` from source
-`pip install git+https://github.com/huggingface/accelerate.git`
+```bash
+# these versions support 8-bit and 4-bit
+pip install bitsandbytes>=0.39.0 accelerate>=0.20.0

-- Install latest `transformers` from source
-`pip install git+https://github.com/huggingface/transformers.git`
+# install Transformers
+pip install transformers
+```

-### Running FP4 models - single GPU setup - Quickstart
+### 4-bit

-You can quickly run a FP4 model on a single GPU by running the following code:
+To load a model in 4-bit for inference, use the `load_in_4bit` parameter.
The `device_map` parameter is optional, but we recommend setting it to `"auto"` to allow 🤗 Accelerate to automatically and efficiently allocate the model given the available resources in the environment. ```py from transformers import AutoModelForCausalLM @@ -283,16 +213,8 @@ from transformers import AutoModelForCausalLM model_name = "bigscience/bloom-2b5" model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True) ``` -Note that `device_map` is optional but setting `device_map = 'auto'` is prefered for inference as it will dispatch efficiently the model on the available ressources. -### Running FP4 models - multi GPU setup - -The way to load your mixed 4-bit model in multiple GPUs is as follows (same command as single GPU setup): -```py -model_name = "bigscience/bloom-2b5" -model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True) -``` -But you can control the GPU RAM you want to allocate on each GPU using `accelerate`. Use the `max_memory` argument as follows: +To load a model in 4-bit for inference with multiple GPUs, you can control how much GPU RAM you want to allocate to each GPU. For example, to distribute 600MB of memory to the first GPU and 1GB of memory to the second GPU: ```py max_memory_mapping = {0: "600MB", 1: "1GB"} @@ -301,44 +223,16 @@ model_4bit = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", load_in_4bit=True, max_memory=max_memory_mapping ) ``` -In this example, the first GPU will use 600MB of memory and the second 1GB. -### Advanced usage - -For more advanced usage of this method, please have a look at the [quantization](main_classes/quantization) documentation page. - -## `bitsandbytes` integration for Int8 mixed-precision matrix decomposition +### 8-bit -Note that this feature can also be used in a multi GPU setup. +If you're curious and interested in learning more about the concepts underlying 8-bit quantization, read the [Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes](https://huggingface.co/blog/hf-bitsandbytes-integration) blog post. -From the paper [`LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale`](https://arxiv.org/abs/2208.07339), we support Hugging Face integration for all models in the Hub with a few lines of code. -The method reduces `nn.Linear` size by 2 for `float16` and `bfloat16` weights and by 4 for `float32` weights, with close to no impact to the quality by operating on the outliers in half-precision. - -![HFxbitsandbytes.png](https://cdn-uploads.huggingface.co/production/uploads/1659861207959-62441d1d9fdefb55a0b7d12c.png) - -Int8 mixed-precision matrix decomposition works by separating a matrix multiplication into two streams: (1) a systematic feature outlier stream matrix multiplied in fp16 (0.01%), (2) a regular stream of int8 matrix multiplication (99.9%). With this method, int8 inference with no predictive degradation is possible for very large models. -For more details regarding the method, check out the [paper](https://arxiv.org/abs/2208.07339) or our [blogpost about the integration](https://huggingface.co/blog/hf-bitsandbytes-integration). - -![MixedInt8.gif](https://cdn-uploads.huggingface.co/production/uploads/1660567469965-62441d1d9fdefb55a0b7d12c.gif) - -Note, that you would require a GPU to run mixed-8bit models as the kernels have been compiled for GPUs only. 
Make sure that you have enough GPU memory to store the quarter (or half if your model weights are in half precision) of the model before using this feature.
-Below are some notes to help you use this module, or follow the demos on [Google colab](#colab-demos).
-
-### Requirements [[requirements-for-int8-mixedprecision-matrix-decomposition]]
-
-- If you have `bitsandbytes<0.37.0`, make sure you run on NVIDIA GPUs that support 8-bit tensor cores (Turing, Ampere or newer architectures - e.g. T4, RTX20s RTX30s, A40-A100). For `bitsandbytes>=0.37.0`, all GPUs should be supported.
-- Install the correct version of `bitsandbytes` by running:
-`pip install bitsandbytes>=0.31.5`
-- Install `accelerate`
-`pip install accelerate>=0.12.0`
-
-### Running mixed-Int8 models - single GPU setup
-
-After installing the required libraries, the way to load your mixed 8-bit model is as follows:
+To load a model in 8-bit for inference, use the `load_in_8bit` parameter. The `device_map` parameter is optional, but we recommend setting it to `"auto"` to allow 🤗 Accelerate to automatically and efficiently allocate the model given the available resources in the environment:

```py
from transformers import AutoModelForCausalLM
@@ -347,12 +241,7 @@ model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```

-For text generation, we recommend:
-
-* using the model's `generate()` method instead of the `pipeline()` function. Although inference is possible with the `pipeline()` function, it is not optimized for mixed-8bit models, and will be slower than using the `generate()` method. Moreover, some sampling strategies are like nucleaus sampling are not supported by the `pipeline()` function for mixed-8bit models.
-* placing all inputs on the same device as the model.
-
-Here is a simple example:
+If you're loading a model in 8-bit for text generation, you should use the [`~transformers.GenerationMixin.generate`] method instead of the [`Pipeline`] function, which is not optimized for 8-bit models and will be slower. Some sampling strategies, like nucleus sampling, are also not supported by the [`Pipeline`] for 8-bit models. You should also place all inputs on the same device as the model:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -367,15 +256,7 @@ generated_ids = model.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```

-
-### Running mixed-int8 models - multi GPU setup
-
-The way to load your mixed 8-bit model in multiple GPUs is as follows (same command as single GPU setup):
-```py
-model_name = "bigscience/bloom-2b5"
-model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
-```
-But you can control the GPU RAM you want to allocate on each GPU using `accelerate`. Use the `max_memory` argument as follows:
+To load a model in 8-bit for inference with multiple GPUs, you can control how much GPU RAM you want to allocate to each GPU. For example, to distribute 1GB of memory to the first GPU and 2GB of memory to the second GPU:

```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
@@ -384,27 +265,56 @@ model_8bit = AutoModelForCausalLM.from_pretrained(
 model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
```
-In this example, the first GPU will use 1GB of memory and the second 2GB.

-### Colab demos
+

-With this method you can infer on models that were not possible to infer on a Google Colab before.
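+If you want to check how much memory the quantized weights actually occupy, one option is to query the model's footprint directly. A small sketch (it assumes `model_8bit` was loaded as shown above; the method reports bytes):
+
+```py
+# get_memory_footprint() returns the size of the model parameters (and, by default, buffers) in bytes
+memory_gb = model_8bit.get_memory_footprint() / 1024**3
+print(f"Memory footprint: {memory_gb:.2f} GB")
+```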
-Check out the demo for running T5-11b (42GB in fp32)! Using 8-bit quantization on Google Colab:
+Feel free to try running an 11 billion parameter [T5 model](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing) or the 3 billion parameter [BLOOM model](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing) for inference on Google Colab's free tier GPUs!

-[![Open In Colab: T5-11b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing)
+

-Or this demo for BLOOM-3B:
+## 🤗 Optimum

-[![Open In Colab: BLOOM-3b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing)
+
+
+Learn more details about using ORT with 🤗 Optimum in the [Accelerated inference on NVIDIA GPUs](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#accelerated-inference-on-nvidia-gpus) guide. This section only provides a brief and simple example.
+
+
+
+ONNX Runtime (ORT) is a model accelerator that supports accelerated inference on Nvidia GPUs. ORT uses optimization techniques like fusing common operations into a single node and constant folding to reduce the number of computations performed and speed up inference. ORT also places the most computationally intensive operations on the GPU and the rest on the CPU to intelligently distribute the workload between the two devices.

-## Advanced usage: mixing FP4 (or Int8) and BetterTransformer
+ORT is supported by 🤗 Optimum which can be used in 🤗 Transformers. You'll need to use an [`~optimum.onnxruntime.ORTModel`] for the task you're solving, and specify the `provider` parameter which can be set to either [`CUDAExecutionProvider`](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#cudaexecutionprovider) or [`TensorrtExecutionProvider`](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#tensorrtexecutionprovider). If you want to load a model that was not yet exported to ONNX, you can set `export=True` to convert your model on-the-fly to the ONNX format:
+
+```py
+from optimum.onnxruntime import ORTModelForSequenceClassification
+
+ort_model = ORTModelForSequenceClassification.from_pretrained(
+  "distilbert-base-uncased-finetuned-sst-2-english",
+  export=True,
+  provider="CUDAExecutionProvider",
+)
+```

-You can combine the different methods described above to get the best performance for your model. For example, you can use BetterTransformer with FP4 mixed-precision inference + flash attention:
+Now you're free to use the model for inference:
+
+```py
+from optimum.pipelines import pipeline
+from transformers import AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
+
+pipeline = pipeline(task="text-classification", model=ort_model, tokenizer=tokenizer, device="cuda:0")
+result = pipeline("Both the music and visual were astounding, not to mention the actors performance.")
+```
+
+## Combine optimizations
+
+It is often possible to combine several of the optimization techniques described above to get the best inference performance possible for your model.
For example, you can load a model in 4-bit, and then enable BetterTransformer with FlashAttention: ```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig +# load model in 4-bit quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16 @@ -413,9 +323,13 @@ quantization_config = BitsAndBytesConfig( tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config) +# enable BetterTransformer +model = model.to_bettertransformer() + input_text = "Hello my dog is cute and" inputs = tokenizer(input_text, return_tensors="pt").to("cuda") +# enable FlashAttention with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False): outputs = model.generate(**inputs) diff --git a/docs/source/en/perf_infer_special.md b/docs/source/en/perf_infer_special.md deleted file mode 100644 index e5744754b88e0b..00000000000000 --- a/docs/source/en/perf_infer_special.md +++ /dev/null @@ -1,18 +0,0 @@ - - -# Inference on Specialized Hardware - -This document will be completed soon with information on how to infer on specialized hardware. In the meantime you can check out [the guide for inference on CPUs](perf_infer_cpu). \ No newline at end of file diff --git a/docs/source/en/performance.md b/docs/source/en/performance.md index a1661a6ba5a88b..ccd78d326d52e3 100644 --- a/docs/source/en/performance.md +++ b/docs/source/en/performance.md @@ -53,7 +53,7 @@ sections we go through the steps to run inference on CPU and single/multi-GPU se * [Inference on a single CPU](perf_infer_cpu) * [Inference on a single GPU](perf_infer_gpu_one) -* [Multi-GPU inference](perf_infer_gpu_many) +* [Multi-GPU inference](perf_infer_gpu_one) * [XLA Integration for TensorFlow Models](tf_xla) diff --git a/src/transformers/utils/logging.py b/src/transformers/utils/logging.py index d732e5bd37a94a..276fa6e8f85564 100644 --- a/src/transformers/utils/logging.py +++ b/src/transformers/utils/logging.py @@ -30,9 +30,7 @@ WARN, # NOQA WARNING, # NOQA ) -from logging import ( - captureWarnings as _captureWarnings, -) +from logging import captureWarnings as _captureWarnings from typing import Optional import huggingface_hub.utils as hf_hub_utils diff --git a/utils/not_doctested.txt b/utils/not_doctested.txt index fe25c2e59a2367..31cda5fd76c6be 100644 --- a/utils/not_doctested.txt +++ b/utils/not_doctested.txt @@ -272,9 +272,7 @@ docs/source/en/pad_truncation.md docs/source/en/peft.md docs/source/en/perf_hardware.md docs/source/en/perf_infer_cpu.md -docs/source/en/perf_infer_gpu_many.md docs/source/en/perf_infer_gpu_one.md -docs/source/en/perf_infer_special.md docs/source/en/perf_torch_compile.md docs/source/en/perf_train_cpu.md docs/source/en/perf_train_cpu_many.md