From d960c83ad2fa76d9bbf5bf431a69e7f9803e777f Mon Sep 17 00:00:00 2001
From: Ziyu Guo
Date: Tue, 19 Nov 2024 07:49:13 +0000
Subject: [PATCH] Update README with Arxiv Link

---
 README.md | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index fad473b..21fa957 100644
--- a/README.md
+++ b/README.md
@@ -15,6 +15,8 @@
 # Introduction
 We introduce **SmoothCache**, a straightforward acceleration technique for DiT architecture models, that's both **training-free, flexible and performant**. By leveraging layer-wise representation error, our method identifies redundancies in the diffusion process, generates a static caching scheme to reuse output featuremaps and therefore reduces the need for computationally expensive operations. This solution works across different models and modalities, can be easily dropped into existing Diffusion Transformer pipelines, can be stacked on different solvers, and requires no additional training or datasets. **SmoothCache** consistently outperforms various solvers designed to accelerate the diffusion process, while matching or surpassing the performance of existing modality-specific caching techniques.
 
+> 🥯[[arXiv]](https://arxiv.org/abs/2411.10510)
+
 ![Illustration of SmoothCache. When the layer representation loss obtained from the calibration pass is below some threshold α, the corresponding layer is cached and used in place of the same computation on a future timestep. The figure on the left shows how the layer representation error impacts whether certain layers are eligible for caching. The error of the attention (attn) layer is higher in earlier timesteps, so our schedule caches the later timesteps accordingly. The figure on the right shows the application of the caching schedule to the DiT-XL architecture. The output of the attn layer at time t − 1 is cached and re-used in place of computing FFN t − 2, since the corresponding error is below α. This cached output is introduced in the model using the properties of the residual connection.](assets/SmoothCache2.png)
 
 ## Quick Start
@@ -26,7 +28,7 @@ pip install SmoothCache
 
 ### Usage
 
-We have implemented drop-in SmoothCache helper classes that easily applies to [Huggingface Diffuser DiTPipeline](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/dit), and [original DiT implementations](https://github.com/facebookresearch/DiT).
+Inspired by [DeepCache](https://raw.githubusercontent.com/horseee/DeepCache), we have implemented drop-in SmoothCache helper classes that apply easily to the [Huggingface Diffusers DiTPipeline](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/dit) and the [original DiT implementation](https://github.com/facebookresearch/DiT).
 
 Generally, only 3 additional lines needs to be added to the original sampler scripts:
 ```python
@@ -156,4 +158,17 @@
 Note that L2C is not training free](assets/table1.png)
 
 # License
-SmoothCache is licensed under the [Apache-2.0](LICENSE) license.
\ No newline at end of file
+SmoothCache is licensed under the [Apache-2.0](LICENSE) license.
+
+## BibTeX
+```
+@misc{liu2024smoothcacheuniversalinferenceacceleration,
+  title={SmoothCache: A Universal Inference Acceleration Technique for Diffusion Transformers},
+  author={Joseph Liu and Joshua Geddes and Ziyu Guo and Haomiao Jiang and Mahesh Kumar Nandwana},
+  year={2024},
+  eprint={2411.10510},
+  archivePrefix={arXiv},
+  primaryClass={cs.LG},
+  url={https://arxiv.org/abs/2411.10510},
+}
+```
\ No newline at end of file
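
To make the mechanism described in the Introduction and figure caption concrete: SmoothCache derives a static caching schedule by comparing per-timestep layer representation errors (measured in a calibration pass) against the threshold α, caching a layer wherever its error falls below α. The sketch below is a toy illustration of that idea only; it is not code from this repository, and the error values are made up.

```python
import numpy as np

def make_caching_schedule(layer_errors: np.ndarray, alpha: float) -> list[bool]:
    """Given one layer's per-timestep representation errors from a calibration
    pass, mark timesteps with error below alpha as cacheable: at those steps
    the layer's previously cached output is reused instead of recomputed."""
    return [float(err) <= alpha for err in layer_errors]

# Illustrative numbers only: the attn layer's error is higher at early
# timesteps, so caching concentrates on later ones, as in the paper's figure.
attn_errors = np.array([0.9, 0.7, 0.4, 0.2, 0.1, 0.05])
print(make_caching_schedule(attn_errors, alpha=0.3))
# [False, False, False, True, True, True]
```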
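The ```python block whose opening fence appears as context in the second hunk is unchanged by this patch, so the "3 additional lines" themselves are not shown here. A plausible sketch of such a sampler script follows; the helper name `DiffuserCacheHelper` and its `enable()`/`disable()` methods are assumptions about the package's drop-in API rather than something this diff confirms, while the surrounding code follows the standard Diffusers `DiTPipeline` example.

```python
import torch
from diffusers import DiTPipeline

# Hypothetical import: assumes the package exposes a Diffusers-oriented helper.
from SmoothCache import DiffuserCacheHelper

pipe = DiTPipeline.from_pretrained(
    "facebook/DiT-XL-2-256", torch_dtype=torch.float16
).to("cuda")

# (1) Wrap the pipeline's transformer with the cache helper.
cache_helper = DiffuserCacheHelper(pipe.transformer)
# (2) Enable caching before sampling so eligible layer outputs are reused.
cache_helper.enable()

class_ids = pipe.get_label_ids(["golden retriever"])
image = pipe(class_labels=class_ids, num_inference_steps=25).images[0]

# (3) Restore the original forward passes once sampling is done.
cache_helper.disable()
```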