Questions regarding implementation #13

Open
a-r-r-o-w opened this issue Dec 26, 2024 · 2 comments

a-r-r-o-w commented Dec 26, 2024

Hey 👋

I'm Aryan from the HuggingFace Diffusers team. I am working on integrating FasterCache into the library to make it available for all the video models we support. I had some questions regarding the implementation and was hoping to get some help.

In the paper, the section describing CFG Cache has the following:

These biases ensure that both high- and low-frequency differences are accurately captured and compensated during the reuse process. In the subsequent n timesteps (from t − 1 to t − n), we infer only the outputs of the conditional branches and compute the unconditional outputs using the cached ∆HF and ∆LF as follows:
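For reference, here is a minimal sketch of how I read that reuse step, assuming the deltas are stored as low- and high-frequency components in the Fourier domain. This is my own illustration rather than code from this repo, and `lowpass_mask` is a hypothetical centered boolean mask selecting the low-frequency band:

```python
import torch

def split_freq(x, lowpass_mask):
    # Move to the Fourier domain and split into low/high-frequency components.
    freq = torch.fft.fftshift(torch.fft.fft2(x))
    return freq * lowpass_mask, freq * (~lowpass_mask)

def cache_cfg_deltas(cond_out, uncond_out, lowpass_mask):
    # At the caching timestep, store delta_LF and delta_HF between the two branches.
    cond_lf, cond_hf = split_freq(cond_out, lowpass_mask)
    uncond_lf, uncond_hf = split_freq(uncond_out, lowpass_mask)
    return uncond_lf - cond_lf, uncond_hf - cond_hf

def approximate_uncond(cond_out, delta_lf, delta_hf, lowpass_mask):
    # At subsequent timesteps, run only the conditional branch and rebuild the
    # unconditional output from its spectrum plus the cached deltas.
    cond_lf, cond_hf = split_freq(cond_out, lowpass_mask)
    freq = (cond_lf + delta_lf) + (cond_hf + delta_hf)
    return torch.fft.ifft2(torch.fft.ifftshift(freq)).real
```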

It says that inference is run only for the conditional branch, and the outputs of the unconditional branch are computed with the given equations. These are the relevant lines of code that seem to implement this:

single_output = self.fastercache_model_single_forward(
    hidden_states[:1],
    timestep[:1],
    encoder_hidden_states[:1],
    added_cond_kwargs,
    class_labels,
    cross_attention_kwargs,
    attention_mask,
    encoder_attention_mask,
    use_image_num,
    enable_temporal_attentions,
    return_dict,
)[0]

However, the inputs are indexed as hidden_states[:1], timestep[:1], encoder_hidden_states[:1]. Doesn't this correspond to the unconditional inputs rather than the conditional ones? I believe it is the unconditional branch because the prompt embeddings are concatenated in the order (negative_prompt_embeds, prompt_embeds) here.
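Just to illustrate what I mean, here is a hypothetical minimal example of the usual classifier-free-guidance batching (shapes are made up):

```python
import torch

# Negative (unconditional) embeddings are concatenated before the positive
# (conditional) ones, so batch index 0 is the unconditional branch.
negative_prompt_embeds = torch.zeros(1, 120, 4096)  # made-up shape
prompt_embeds = torch.ones(1, 120, 4096)

encoder_hidden_states = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)

uncond_inputs = encoder_hidden_states[:1]  # what the [:1]-style indexing selects
cond_inputs = encoder_hidden_states[1:]    # the conditional branch
```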

Is this possibly incorrect, or is the unconditional branch intentionally being used to approximate the output of the conditional branch?

Thank you for your time! 🤗

cc @cszy98 @ChenyangSi

cszy98 (Collaborator) commented Dec 27, 2024

Thank you for pointing this out and for your detailed observation. The indexing in the code does differ slightly from the description in the paper. I’ll update the implementation to ensure it’s fully aligned with the methodology described. Since the CFG-Cache stores the delta between the conditional and unconditional branches, this change will lead to a slight visual quality improvement in some cases.

Thanks again for your careful review. If you have any further questions or suggestions regarding FasterCache, feel free to let us know. We will do our best to assist.

a-r-r-o-w (Author) commented
Thank you so much for confirming! We're working on the integration here: huggingface/diffusers#10163

I would love to have your reviews once it's ready for testing (will ping you) 🤗 Looking forward to more amazing research from you!
