Flux ‐ GGUF and unet safetensors
Ruined Fooocus supports quantized GGUF Flux models, such as city96/FLUX.1-dev-gguf and city96/FLUX.1-schnell-gguf, as well as some of the Flux models on CivitAI that only contain the UNet part.
Since these are missing the clip, t5 and vae components, you also need to download the following (an optional download script is sketched after the list):
- comfyanonymous/flux_text_encoders - clip_l.safetensors, place in `models\clip`
- city96/t5-v1_1-xxl-encoder-gguf - t5-v1_1-xxl-encoder-Q3_K_S.gguf ¹, place in `models\clip`
- black-forest-labs/FLUX.1-schnell - ae.safetensors, place in `models\vae`. This one works for both Dev and Schnell.
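If you prefer to script the downloads, here is a minimal sketch using the huggingface_hub Python package (an assumption on our part, not something Ruined Fooocus ships with; install it via pip and run the script from the Ruined Fooocus install directory so the models folders resolve):

```python
# Minimal download sketch using huggingface_hub (assumes `pip install huggingface_hub`).
# Run from the Ruined Fooocus install directory so the models/ folders resolve correctly.
from huggingface_hub import hf_hub_download

# clip_l text encoder -> models/clip
hf_hub_download(repo_id="comfyanonymous/flux_text_encoders",
                filename="clip_l.safetensors",
                local_dir="models/clip")

# quantized t5 encoder -> models/clip (Q3_K_S is the smallest; pick a larger quant if you have room)
hf_hub_download(repo_id="city96/t5-v1_1-xxl-encoder-gguf",
                filename="t5-v1_1-xxl-encoder-Q3_K_S.gguf",
                local_dir="models/clip")

# Flux autoencoder -> models/vae (works for both Dev and Schnell)
hf_hub_download(repo_id="black-forest-labs/FLUX.1-schnell",
                filename="ae.safetensors",
                local_dir="models/vae")
```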
¹ t5-v1_1-xxl-encoder-Q3_K_S.gguf is the smallest quant and the one used by default. You can change any of these files by editing settings\settings.json.
Example:

```json
"gguf_clip1": "flux_clip_l.safetensors",
"gguf_clip2": "t5-v1_1-xxl-encoder-Q6_K.gguf",
"gguf_vae": "ae.safetensors"
```
(Make sure you don't misplace the commas at the end of the lines; JSON does not allow a comma after the last entry in an object.)
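A quick way to confirm the file still parses after editing (assuming it lives at settings\settings.json under the install directory) is something like:

```python
# Quick sanity check: confirm settings.json still parses and show the gguf_* keys.
import json

with open("settings/settings.json", encoding="utf-8") as f:
    settings = json.load(f)  # raises json.JSONDecodeError if a comma is misplaced

for key in ("gguf_clip1", "gguf_clip2", "gguf_vae"):
    print(key, "=", settings.get(key))
```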
You should now be able to use GGUF models and Flux safetensors that are missing clip, t5 and vae.
Some models that should work:
- city96/FLUX.1-dev-gguf - any of these.
- city96/FLUX.1-schnell-gguf - any of these.
There are also models that contain everything and will work out-of-the-box:
- A collection of models in this discussion.