train_custom_diffusion.py does not support fp16 #5502

When I run train_custom_diffusion.py with the following args, an error is reported:
Comments
@DN6 here is some basic analysis. I checked the code with a hook on each layer, and the problem seems to happen in the attention blocks; if I narrow the margin of error, it seems to happen in

```python
hidden_states = F.scaled_dot_product_attention(
    query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
)
```

for `query`, and for `key` and `value`, which both show the following feature:

[inspection screenshots omitted]

Has anyone met the same problem before?
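For context, the kind of per-layer inspection described above can be reproduced with forward hooks. The sketch below is only an illustration of that approach, not the reporter's actual code: `unet` is assumed to be the `UNet2DConditionModel` built inside `train_custom_diffusion.py`, and the `to_q`/`to_k`/`to_v` name filter is a guess at the relevant projection layers.

```python
import torch

def dtype_hook(name):
    # Print the dtypes of every tensor entering and leaving a module.
    def hook(module, inputs, output):
        in_dtypes = [t.dtype for t in inputs if torch.is_tensor(t)]
        out_dtype = output.dtype if torch.is_tensor(output) else type(output)
        print(f"{name}: in={in_dtypes} out={out_dtype}")
    return hook

handles = []
for name, module in unet.named_modules():
    # Watch only the attention projections (assumed naming: to_q/to_k/to_v).
    if any(key in name for key in ("to_q", "to_k", "to_v")):
        handles.append(module.register_forward_hook(dtype_hook(name)))

# ...run a single training step here, then read the printed dtypes...

for handle in handles:
    handle.remove()
```

Removing the handles afterwards matters: leftover hooks keep printing (and keep references alive) on every subsequent forward pass.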
Hi @jiaqiw09, I was able to reproduce the issue. It seems like the text encoder isn't producing the …
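If the text encoder is emitting fp32 outputs, that matches the trace above: `F.scaled_dot_product_attention` requires `query`, `key`, and `value` to share one dtype, so fp32 key/value projections meeting an fp16 `query` fail at exactly that call. Here is a self-contained repro of the error, plus the usual shape of a workaround (casting to the query's dtype); this is an illustration, not the patch that ultimately fixed the script:

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 16, 64, dtype=torch.float16)
k = torch.randn(1, 8, 16, 64, dtype=torch.float32)  # e.g. derived from fp32 text-encoder states
v = torch.randn(1, 8, 16, 64, dtype=torch.float32)

try:
    F.scaled_dot_product_attention(q, k, v)
except RuntimeError as e:
    print(e)  # PyTorch rejects mixed query/key/value dtypes

# Casting key/value to the query's dtype before the attention call avoids
# the error; where to apply the cast in the training script is a separate
# design decision.
out = F.scaled_dot_product_attention(q, k.to(q.dtype), v.to(q.dtype))
print(out.dtype)  # torch.float16
```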