use 🧨diffusers model #1583
Conversation
fixed, fix was "read the instructions in the top of the PR about putting a …"
looks like this needs …
Seems to have worked on mps here with torch 1.12: https://github.com/invoke-ai/InvokeAI/actions/runs/3559296062/jobs/5978550236 Does torch 1.13 on mac perform any better with this diffusers implementation? Or is it still much, much slower than torch 1.12 with the old implementation?
that's actually x64, not m1/mps (check the …)

it seems slow but i haven't paid too much attention
yeah, hmm, I think you're right. Well, that makes it very misleading to have a check named mac-mps-cpu.
there are shared memory shenanigans going on in …
ok there's some deep in the weeds bug in pytorch, because:

… → result = all zeros
… → result = looks reasonable

i don't know why this is happening and i don't know what to do about it
Some change I just pulled in from … If I'm reading things correctly, the relevant lines are InvokeAI/configs/stable-diffusion/v1-inference.yaml, lines 29–30 at 8423be5.

The model configs like https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json have no such `personalization_config`. Does it feel necessary to have that be data-driven, or is that class reference something we can hardcode?
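The embedded file snippet did not survive the page capture; based on the stock CompVis config that this `v1-inference.yaml` descends from, the referenced lines are presumably along these lines:

```yaml
# Presumed content of the referenced lines, taken from the stock CompVis
# v1-inference.yaml; not verified against commit 8423be5.
personalization_config:
  target: ldm.modules.embedding_manager.EmbeddingManager
```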
That's weird. Both on the same device and same dtype?
It was a kludge in one place before, but now it's spreading. Should probably be the next thing we tackle in this branch.
I had it create an EmbeddingManager. I am not sure if it's working yet but at least it's back to not-crashing. Also pushed a couple of fixes for deprecated diffusers things, cleaning up some of the warning messages it was spewing. |
realising i never answered your question about the personalization_config, @keturn: yes, i think it can be hardcoded
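For illustration, hardcoding that class reference rather than reading it from YAML might look like the sketch below; the import path follows the config snippet above, but the constructor arguments are assumptions, not this branch's actual code:

```python
# Hypothetical sketch: construct the embedding manager directly instead of
# going through instantiate_from_config(...personalization_config).
from ldm.modules.embedding_manager import EmbeddingManager  # path per v1-inference.yaml

def make_embedding_manager(text_encoder):
    # `embedder` is an assumed keyword for the frozen CLIP text encoder;
    # check the real EmbeddingManager signature before relying on this.
    return EmbeddingManager(embedder=text_encoder)
```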
oof. re-resolving all the conflicts after the entirety of 2.2 was rebased was a doozy, but I think I did it okay. |
🚧 PLZ HOLD. DO NOT PUSH TO THIS BRANCH FOR A BIT, I WILL NEED TO FORCE-PUSH IT. 🚧

oh fiddlesticks. this branch was based off of …, but that history all got squashed away when it merged in to …. that means I'm going to have to rebase this branch on …
- …and update associated things in Generate & Generator to not instantly fail when that happens
- Remove IPNDM scheduler; it is not behaving.
- Put try: blocks around places where the system tries to load an embedding which is incompatible with the currently loaded model (see the sketch after this list).
- Preferences are stored in a file named `text-inversion-training/preferences.conf`.
- Currently the resume-from-checkpoint option is not working correctly. Possible bug in `textual_inversion_training.py`?
- Front end doesn't do anything yet!!!!
- Made a change to model name parsing in the CLI to support the ability to have merged models with the "+" character in their names.
- Recommend the ckpt version of inpainting-1.5 to the user.
- Fix get_noise() bug in the ckpt version of `omnibus.py`.
- Update scripts will now fetch the new `INITIAL_MODELS.yaml` so that `configure_invokeai.py` will know about the diffusers versions.
- Added `configure_invokeai.py` to the menu; menu defaults to browser-based invoke.
- Add information on how formats have changed and the upgrade process. Add short bug list.
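A minimal sketch of the guard described in the try:-blocks item above; the loader call and exception types are illustrative, not the codebase's actual names:

```python
# Illustrative sketch: skip embeddings trained for a different model
# (e.g. mismatched token-embedding dimensions) instead of crashing.
import logging

logger = logging.getLogger(__name__)

def load_embeddings_safely(manager, embedding_paths):
    for path in embedding_paths:
        try:
            manager.load(path)  # hypothetical loader; may raise on shape mismatch
        except (RuntimeError, KeyError) as err:
            logger.warning(
                "Skipping embedding %s: incompatible with the current model (%s)",
                path, err,
            )
```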
I'm ready to merge this in.
Re: … As far as 2.1 goes - I believe that might just be model quality, but we should be able to confirm by testing w/ a different tool, if anyone can offer to do that.
…odel (#2367)

This PR attempts to fix the `--free_gpu_mem` option that was not working in CKPT-based diffusers models after #1583. I noticed that memory usage after #1583 did not decrease after generating an image when `--free_gpu_mem` was enabled. It turns out the option was not propagated into the `Generator` instance, so generation always ran without the memory-saving procedure. This PR is also related to #2326. Initially I was trying to make `--free_gpu_mem` work on the 🤗 diffusers model as well. In the process, I noticed that InvokeAI raises an exception when `--free_gpu_mem` is enabled; I quickly fixed that by ignoring the exception and producing a warning message in the user's console.
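A rough sketch of the kind of propagation fix described above; the class shape and attribute names are assumptions based on the description, not the actual diff:

```python
# Illustrative only: thread the flag through to the Generator so the
# post-generation cleanup actually runs. Names are assumed, not from the diff.
import torch

class Generator:
    def __init__(self, model, free_gpu_mem: bool = False):
        self.model = model
        # Before the fix, this flag was never passed down, so the
        # memory-saving branch below never executed.
        self.free_gpu_mem = free_gpu_mem

    def generate(self, *args, **kwargs):
        result = self._sample(*args, **kwargs)
        if self.free_gpu_mem:
            # Move weights off the GPU and release cached blocks between runs.
            self.model.to("cpu")
            torch.cuda.empty_cache()
        return result

    def _sample(self, *args, **kwargs):
        raise NotImplementedError  # concrete generators implement sampling
```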
The goal is to reduce the amount of model code InvokeAI has to maintain by integrating https://github.com/huggingface/diffusers, using that to replace the existing `ldm` code (descended from the original CompVis implementation). I think the plan is that we keep the public APIs in `ldm.invoke.generator` stable while swapping out the implementations to be diffusers-based.

Discord discussion thread: https://discord.com/channels/1020123559063990373/1031668022294884392
[This is a continuation of #1384. The branch is now hosted in the InvokeAI repo instead of a fork for easier collaboration.]
Usage
Add a section to your `models.yaml` like this (a sketch is shown below). Note the `format: diffusers`. The `repo_id` is as it appears on huggingface.co.
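The YAML example from the original description did not survive the page capture; in the minimal sketch below, only `format: diffusers` and `repo_id` are confirmed by the surrounding text, the rest is illustrative:

```yaml
# Hypothetical models.yaml stanza; only `format` and `repo_id` are taken
# from the PR text, the stanza name and description are illustrative.
diffusers-1.5:
  description: Stable Diffusion 1.5 (diffusers format)
  format: diffusers
  repo_id: runwayml/stable-diffusion-v1-5
```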
Sub-Tasks
i.e. things keturn would love to delegate.
To Do: txt2img
- `invoke_ai_web_server`. Not sure if the other instances are still in use?
- `karras_max`?
- waiting on upstream diffusers: `models.diffusion.cross_attention_control` might be an obstacle, as that's not in stock `diffusers` yet and it meddles with some internals. The prompt-to-prompt authors do have a reference implementation that uses diffusers: https://nbviewer.org/github/google/prompt-to-prompt/blob/main/prompt-to-prompt_stable.ipynb

To Do: img2img
- `--inpaint_replace` now that `--strength` works. (Or should it apply to the results of the infill method?)

To Do: txt2img2img (high-res optimization)
To Do: inpainting
To Do: embiggen
Stable Diffusion 2.x support
(I think we can merge the PR without this, but we'll want it before release.)
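Not something this branch implements yet, but for reference, loading a 2.x checkpoint through stock diffusers is straightforward; the open work is wiring it into the `ldm.invoke.generator` APIs. A minimal stock-diffusers sketch (the model id shown is the public stabilityai release, an assumption about which checkpoint we'd target):

```python
# Stock diffusers usage, independent of InvokeAI's wrappers; requires a GPU
# and downloads the weights on first run.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # assumed target checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```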