
Flux ‐ GGUF and unet safetensors


RuinedFooocus supports quantized GGUF Flux models, which you can find at city96/FLUX.1-dev-gguf and city96/FLUX.1-schnell-gguf, as well as some of the Flux models on CivitAI that contain only the Unet part.

Since these models are missing the clip, t5 and the vae, you need to download them separately (or let RuinedFooocus fetch them automatically, see below):

  • clip_l.safetensors (CLIP-L text encoder)
  • a t5-v1_1-xxl-encoder GGUF (T5 text encoder)¹
  • ae.safetensors (Flux VAE)

¹ t5-v1_1-xxl-encoder-Q3_K_S.gguf is the smallest and the one used by default. You can change any of these files by editing settings\settings.json.

Example:

  "gguf_clip1": "flux_clip_l.safetensors",
  "gguf_clip2": "t5-v1_1-xxl-encoder-Q6_K.gguf",
  "gguf_vae": "ae.safetensors"

(Make sure you don't misplace the commas at the end of the lines, or the file will no longer be valid JSON.)
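
If you are unsure whether your edit left the file intact, you can parse it with Python. This is just a convenience sketch: it assumes you run it from the RuinedFooocus folder so that the relative path settings/settings.json resolves, and it only prints the keys shown above.

  import json
  from pathlib import Path

  # settings\settings.json relative to the RuinedFooocus folder
  settings_path = Path("settings") / "settings.json"

  try:
      settings = json.loads(settings_path.read_text(encoding="utf-8"))
  except json.JSONDecodeError as err:
      print(f"settings.json is not valid JSON: {err}")
  else:
      # Show the GGUF-related keys described on this page
      for key in ("gguf_clip1", "gguf_clip2", "gguf_vae"):
          print(key, "=", settings.get(key, "<not set>"))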

RuinedFooocus can automatically download some files (a sketch for fetching them manually follows these lists). The known files are:

For gguf_clip1:

  • clip_l.safetensors

For gguf_clip2:

  • t5-v1_1-xxl-encoder-Q3_K_L.gguf
  • t5-v1_1-xxl-encoder-Q3_K_M.gguf
  • t5-v1_1-xxl-encoder-Q3_K_S.gguf
  • t5-v1_1-xxl-encoder-Q4_K_M.gguf
  • t5-v1_1-xxl-encoder-Q4_K_S.gguf
  • t5-v1_1-xxl-encoder-Q5_K_M.gguf
  • t5-v1_1-xxl-encoder-Q5_K_S.gguf
  • t5-v1_1-xxl-encoder-Q6_K.gguf
  • t5-v1_1-xxl-encoder-Q8_0.gguf
  • t5-v1_1-xxl-encoder-f16.gguf
  • t5-v1_1-xxl-encoder-f32.gguf

For gguf_vae:

  • ae.safetensors
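
If you would rather fetch the support files manually (for example a t5 quant that is not in the list above), the sketch below uses the huggingface_hub package. The repo IDs are assumptions based on where these files are commonly hosted; double-check them, point local_dir at the folder your RuinedFooocus install reads these files from, and note that the black-forest-labs/FLUX.1-dev repo is gated, so downloading ae.safetensors from it may require a logged-in Hugging Face account.

  from huggingface_hub import hf_hub_download

  # Assumed repo IDs; verify before downloading.
  files = [
      ("comfyanonymous/flux_text_encoders", "clip_l.safetensors"),           # gguf_clip1
      ("city96/t5-v1_1-xxl-encoder-gguf", "t5-v1_1-xxl-encoder-Q6_K.gguf"),  # gguf_clip2
      ("black-forest-labs/FLUX.1-dev", "ae.safetensors"),                    # gguf_vae (gated repo)
  ]

  for repo_id, filename in files:
      # "downloads" is a placeholder; move the files to wherever RuinedFooocus expects them.
      path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir="downloads")
      print("downloaded:", path)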

You should now be able to use GGUF models and Flux safetensors that are missing clip, t5 and vae.

Some models that should work:

There are also models that contain everything and will work out-of-the-box:
