Flux - GGUF and UNet safetensors

Ruined Fooocus supports quantized GGUF Flux models, which you can find at city96/FLUX.1-dev-gguf and city96/FLUX.1-schnell-gguf, as well as some of the Flux models on CivitAI that only contain the UNet part.

Since these models are missing the CLIP, T5 and VAE components, you need to download them separately:

¹ t5-v1_1-xxl-encoder-Q3_K_S.gguf is the smallest and the one used by default. You can change any of these by editing settings\settings.json.

Example (these keys go in settings\settings.json alongside the existing entries):

  "gguf_clip1": "flux_clip_l.safetensors",
  "gguf_clip2": "t5-v1_1-xxl-encoder-Q6_K.gguf",
  "gguf_vae": "ae.safetensors"

(Make sure the commas at the end of the lines are correct: every entry except the last in the JSON object needs a trailing comma, and the last one must not have one, or the file will fail to parse.)
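
If you want to double-check your edits, a quick way is to load the file with Python's json module. This is just a minimal sketch, assuming settings\settings.json relative to your Ruined Fooocus directory and the key names shown above:

```python
import json
from pathlib import Path

# Path assumed relative to the Ruined Fooocus install directory.
settings_path = Path("settings") / "settings.json"

with settings_path.open(encoding="utf-8") as f:
    settings = json.load(f)  # raises json.JSONDecodeError on a stray comma

# Print what is currently configured for the three keys this page describes.
for key in ("gguf_clip1", "gguf_clip2", "gguf_vae"):
    print(f"{key}: {settings.get(key, '<not set>')}")
```

If the file parses and the three keys point at files you have downloaded, the configuration should be picked up on the next start.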

You should now be able to use GGUF models and Flux safetensors that are missing CLIP, T5 and VAE.

Some models that should work:

There are also models that contain everything and will work out-of-the-box:
