Hello! Long story short, I have an RTX 3080 10G.
Obviously, that's not enough VRAM to run SDXL (fp16, pruned, via ComfyUI) while also holding some random SD 1.5 model (also pruned, still loaded in A1111) without an OOM or swapping to system RAM, which either halts the workflow entirely or slows it to a crawl. It is enough to run either of them separately, though.
So the question is: is there a way to flush the model loaded by A1111 while I use ComfyUI through this tab, or, when it comes to SDXL, does the extension become a 24 GB VRAM exclusive?

Replies: 1 comment
-
Hi, I just opened a PR that should address this. By default, if you don't use the WebuiCheckpointLoader node, you won't pay the memory penalty of keeping two checkpoints in memory at once. As a workaround, it will also be possible to remove the webui unet copy by reloading the gradio interface. It isn't the best fix, but it should allow the extension to work at a basic level.
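For context, the general PyTorch pattern for flushing a checkpoint out of VRAM looks roughly like the sketch below. This is illustrative only; `unload_checkpoint` and `holder` are hypothetical names, not the extension's or A1111's actual API.

```python
import gc

import torch


def unload_checkpoint(holder, attr: str = "model") -> None:
    """Release the VRAM held by the module stored at `holder.<attr>`.

    Minimal sketch of the usual "flush" pattern; `holder` stands in for
    whatever object keeps the long-lived reference to the checkpoint
    (hypothetical here, not the extension's real structure).
    """
    model = getattr(holder, attr, None)
    if model is None:
        return
    model.to("cpu")              # move the weights out of VRAM
    setattr(holder, attr, None)  # drop the owning reference
    del model                    # drop our local reference as well
    gc.collect()                 # let Python actually collect the module
    torch.cuda.empty_cache()     # hand cached CUDA memory back to the driver
```

Note that `torch.cuda.empty_cache()` can only release blocks that are no longer referenced, so every live reference to the module has to go first; that is presumably why reloading the gradio interface, which tears the old objects down wholesale, works as a blunt workaround.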