Thanks! Works well on Windows #13
Thanks for your feedback.
@ai-anchorite Can't get it to run well under Windows. Tried torch 2.3.1+cu121 and xformers 0.0.27.
That model-loading behaviour and low VRAM usage during inference are consistent with CPU offload. Inference, including the VAE decode, consistently takes me about 20 minutes. Less than 64 GB of RAM may push it into the page file, though. I forked it to make it installable on Windows via Pinokio here. No real changes besides uncommenting the 4 memory optimisations and removing the HF Spaces code. Installed with torch 2.3.1+cu121 and xformers 0.0.27 on Python 3.10.
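For anyone curious what "uncommenting the 4 optimisations" typically looks like: these are usually the standard diffusers low-VRAM methods (CPU offload plus VAE slicing/tiling). The helper below is a hypothetical sketch, not code from this repo, and it guards each call with `hasattr` so it degrades gracefully on pipelines that lack a given method:

```python
def apply_memory_optimizations(pipe):
    """Enable low-VRAM options on a diffusers-style pipeline, if present.

    Hypothetical sketch of the "4 optimisations" discussed above. The
    method names are standard diffusers APIs, but whether this repo's
    app.py uses exactly this set is an assumption.
    """
    applied = []
    # The two offload modes are alternatives: model-level offload is faster,
    # sequential offload streams submodules one at a time for minimal VRAM.
    if hasattr(pipe, "enable_model_cpu_offload"):
        pipe.enable_model_cpu_offload()
        applied.append("model_cpu_offload")
    elif hasattr(pipe, "enable_sequential_cpu_offload"):
        pipe.enable_sequential_cpu_offload()
        applied.append("sequential_cpu_offload")
    vae = getattr(pipe, "vae", None)
    # Decode latents in slices/tiles to cap the VAE's peak memory.
    if vae is not None and hasattr(vae, "enable_slicing"):
        vae.enable_slicing()
        applied.append("vae_slicing")
    if vae is not None and hasattr(vae, "enable_tiling"):
        vae.enable_tiling()
        applied.append("vae_tiling")
    return applied
```

Sequential offload in particular trades speed for memory, which matches the ~20-minute runtimes reported in this thread.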
@ai-anchorite Thanks for your quick feedback. I was too impatient. Despite some console errors, it runs well.
@ai-anchorite Thanks @SHYuanBest for the great work! |
ah nice! i'd been meaning to find time to test that. |
Thanks for releasing the models and including a full-featured Gradio UI! The included `app.py` works well on my 3090 with all 4 memory optimizations active. Generation takes ~20 minutes. Installed using Python 3.10 with torch 2.3.1+cu121.