
🤗 PEFT x 🧨 diffusers -- integration alert 🔥 #5489

Closed
sayakpaul opened this issue Oct 23, 2023 · 31 comments
Labels
stale Issues that haven't received updates


@sayakpaul
Member

Dear community members,

Over the past few weeks, @younesbelkada and @pacman100 helped us integrate peft into diffusers. Taking advantage of this integration, users can easily perform multi-adapter inference with control over scale, switching between different adapters, weighted adapter inference, etc.

Below is an example of how you can combine multiple adapters:

from diffusers import DiffusionPipeline
import torch

# Load SDXL.
pipe_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda")

# Load LoRAs.
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")

# Combine them.
pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0])

# Perform inference.
# Notice how the prompt is constructed.
prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0)
).images[0]
image
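Conceptually, `set_adapters` with `adapter_weights` amounts to a weighted linear mix of the per-adapter LoRA contributions. A toy, plain-Python illustration of that idea (this is not the diffusers internals; `mix_lora_deltas` and the tiny deltas are made up for illustration):

```python
def mix_lora_deltas(deltas, weights):
    """Linearly combine per-adapter weight deltas, elementwise.

    deltas:  dict mapping adapter name -> flattened delta (list of floats)
    weights: dict mapping adapter name -> scalar weight
    """
    size = len(next(iter(deltas.values())))
    mixed = [0.0] * size
    for name, delta in deltas.items():
        w = weights.get(name, 0.0)
        for i, d in enumerate(delta):
            mixed[i] += w * d
    return mixed

# Mirrors set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]):
mixed = mix_lora_deltas(
    {"pixel": [1.0, 0.0], "toy": [0.0, 2.0]},
    {"pixel": 0.5, "toy": 1.0},
)
# mixed is [0.5, 2.0]: each adapter's contribution is scaled by its weight.
```

Setting one weight to 0.0 effectively switches that adapter off, which is what makes switching between adapters cheap.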


Learn more about this in the docs here:
https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference

Be sure to install peft and diffusers from main to take advantage of this feature.
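If helpful, one standard way to do that (plain pip VCS installs from the official repos; adjust to your environment):

```shell
# Install both libraries from their main branches:
pip install -U git+https://github.com/huggingface/peft.git
pip install -U git+https://github.com/huggingface/diffusers.git
```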

@sayakpaul sayakpaul pinned this issue Oct 23, 2023
@idlebg

idlebg commented Oct 23, 2023

Amazing ❤️
Love the direction we're headed.

In sync with this, I've been preparing an update for the FFusion/400GB-LoraXL repo (around 100 LoRAs), and I was just in the process of conducting tests on the FFusion/400GB-LoraXL repository. I've also got some fresh, heavily trained new LoRAs brewing as styles (around 20 of them); bulk inference with those will be interesting.

I initially planned to upload the additional 100 LoRAs today. However, these will be temporarily postponed to ensure their compatibility with the PEFT way of doing things :)

Upcoming tests: I will also check whether dynamic weights work correctly together. I will also shorten all filenames from FF100 to FF176 to just their numbers, so that this:

pipe.load_lora_weights("FFusion/400GB-LoraXL", weight_name="FF.98.sdxlYamersRealism_version2.lora.safetensors", adapter_name="ff98")
pipe.load_lora_weights("FFusion/400GB-LoraXL", weight_name="FF.85.samaritan3dCartoon_v40SDXL.lora.safetensors", adapter_name="ff85")

becomes way easier, with just ff.101 only 🤟 🥃

P.S. I will start moving every non-LyCORIS LoRA from our Civitai to the Hub for testing.

@PizBernina

Amazing!
Curious to know what the influence would be of swapping toy_face in the prompt for just face? I'd assume it would be a toy anyway.
Also curious why toy is in the prompt but pixel is not?
Cheers!

@sayakpaul
Member Author

Curious to know what the influence would be of swapping in the prompt toy_face to just face? I'd assume it'd be a toy anyways?

toy_face is there because it's the trigger word of the underlying LoRA checkpoint. Please refer to the documentation for more details.

Further, also curious to know why toy is in the prompt but pixel not?

The following prompt was used in the example above:

prompt = "toy_face of a hacker with a hoodie, pixel art"

Both toy_face and pixel art are trigger words for the underlying LoRA checkpoints. Again, please refer to the documentation to know more.

@sayakpaul
Member Author

Cc: @pdoane since you took part in some of the earlier design discussions around multi-adapter support.

Cc @isidentical @takuma104 as well.

@AnyISalIn
Contributor

AnyISalIn commented Oct 24, 2023

Hi, thank you for introducing this feature. However, I ran into an issue when enabling enable_xformers_memory_efficient_attention:

   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

TypeError: Linear.forward() got an unexpected keyword argument 'scale'

This is my code:

from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("./models/checkpoint/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()
res = pipe(prompt="1girl", num_inference_steps=20)

@sayakpaul
Member Author

Could you please post a fully reproducible snippet, preferably with a Colab notebook? It would also be helpful for us if you opened a new issue for this.

Cc: @younesbelkada

@AnyISalIn
Contributor

I think adding this code could fix the issue:

class XFormersAttnProcessor:
    # ...
    args = () if USE_PEFT_BACKEND else (scale,)
    # ...
    query = attn.to_q(hidden_states, *args)

@AnyISalIn
Contributor

There are additional problems too. It seems that unload_lora_weights takes a long time, approximately 2 seconds.
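For reference, a quick way to measure that kind of overhead. This is a generic timing sketch; `time_call` is a made-up helper, and the `sum` call is a stand-in workload where one would pass `pipe.unload_lora_weights` instead:

```python
import time

def time_call(fn, *args, **kwargs):
    """Call fn once and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# In practice: _, secs = time_call(pipe.unload_lora_weights)
_, secs = time_call(sum, range(1_000_000))  # stand-in workload
print(f"elapsed: {secs:.4f}s")
```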

@sayakpaul
Member Author

Please create separate issues to help us track these better :-)

@younesbelkada
Contributor

younesbelkada commented Oct 24, 2023

Hi @AnyISalIn!
Your fix makes sense! Would you mind opening a separate ticket for that and tagging myself and @sayakpaul? Thanks!

@AnyISalIn
Contributor

Hi @AnyISalIn! Your fix makes sense! Would you mind opening a separate ticket for that and tagging myself and @sayakpaul? Thanks!

Thank you. I have created an issue and a PR for the issue. #5504 #5506

@younesbelkada
Contributor

Thanks for the fix @AnyISalIn !

@tin2tin

tin2tin commented Oct 25, 2023

Thank you. I've implemented a multi LoRA selector UI in the free Pallaidium Blender add-on:
(screenshot of the multi-LoRA selector UI)

BTW. if it is only supposed to work with SD XL, maybe that should be noted in the docs?

@sayakpaul
Member Author

Very nice!

BTW. if it is only supposed to work with SD XL, maybe that should be noted in the docs?

The API is generic. It should work for SD too. If it does not, please open an issue and tag me and @younesbelkada :-)

@tin2tin

tin2tin commented Oct 25, 2023

@sayakpaul Rechecking, when loading a 1.5 LoRA on SD 1.5 this is actually the same error I'm getting, as reported here: #5522

@tin2tin

tin2tin commented Oct 25, 2023

Here I'm loading both a Ted Lasso and a Willem Dafoe LoRA, and even though I write the prompt as two different people, they get merged into a single man that is a 50/50 blend of the two. Is this to be expected?
(screenshot of the blended output)

@sayakpaul
Member Author

Separate issues please 😅

@tin2tin

tin2tin commented Oct 25, 2023

Not really an issue, just a question about the intended behavior. No worries. :-) This behavior is also useful for e.g. merging characters into a consistent-looking character.

@sayakpaul
Member Author

Maybe pay heed to how you're using the trigger words and the scales during the merging process.

We talk about it at length in the guide: https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference

@tin2tin

tin2tin commented Oct 26, 2023

Are the LoRAs supposed to work on the SD XL refiner?

@sayakpaul
Member Author

They weren't trained on the latent space of the Refiner. So, I don't have any reason to believe that they will work.

@linnanwang

Thanks so much for this awesome work; this finally makes multi-LoRA work in Diffusers!

@Captain272

How can I resolve this?

ValueError: PEFT backend is required for set_adapters().

@younesbelkada
Contributor

@Captain272 you can just do pip install -U peft

@AustinKimDev

@Captain272 you can just do pip install -U peft

It's not working; I already have some problems. Here's the stack trace:

File ~/anaconda3/envs/image_310/lib/python3.10/site-packages/diffusers/loaders.py:2527, in LoraLoaderMixin.get_active_adapters(self)
   2511 """
   2512 Gets the list of the current active adapters.
   2513 
   (...)
   2524 ```
   2525 """
   2526 if not USE_PEFT_BACKEND:
-> 2527     raise ValueError(
   2528         "PEFT backend is required for this method. Please install the latest version of PEFT `pip install -U peft`"
   2529     )
   2531 from peft.tuners.tuners_utils import BaseTunerLayer
   2533 active_adapters = []

ValueError: PEFT backend is required for this method. Please install the latest version of PEFT `pip install -U peft`

FYI, my pip install -U peft log:

Requirement already satisfied: peft in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (0.6.0)
Collecting peft
  Using cached peft-0.6.1-py3-none-any.whl.metadata (23 kB)
Requirement already satisfied: numpy>=1.17 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from peft) (1.24.1)
Requirement already satisfied: packaging>=20.0 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from peft) (23.2)
Requirement already satisfied: psutil in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from peft) (5.9.0)
Requirement already satisfied: pyyaml in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from peft) (6.0.1)
Requirement already satisfied: torch>=1.13.0 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from peft) (2.1.0+cu121)
Requirement already satisfied: transformers in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from peft) (4.31.0)
Requirement already satisfied: tqdm in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from peft) (4.66.1)
Requirement already satisfied: accelerate>=0.21.0 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from peft) (0.21.0)
Requirement already satisfied: safetensors in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from peft) (0.4.0)
Requirement already satisfied: filelock in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from torch>=1.13.0->peft) (3.13.1)
Requirement already satisfied: typing-extensions in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from torch>=1.13.0->peft) (4.8.0)
Requirement already satisfied: sympy in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from torch>=1.13.0->peft) (1.12)
Requirement already satisfied: networkx in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from torch>=1.13.0->peft) (3.2.1)
Requirement already satisfied: jinja2 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from torch>=1.13.0->peft) (3.1.2)
Requirement already satisfied: fsspec in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from torch>=1.13.0->peft) (2023.10.0)
Requirement already satisfied: triton==2.1.0 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from torch>=1.13.0->peft) (2.1.0)
Requirement already satisfied: huggingface-hub<1.0,>=0.14.1 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from transformers->peft) (0.16.4)
Requirement already satisfied: regex!=2019.12.17 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from transformers->peft) (2023.10.3)
Requirement already satisfied: requests in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from transformers->peft) (2.31.0)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from transformers->peft) (0.13.3)
Requirement already satisfied: MarkupSafe>=2.0 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from jinja2->torch>=1.13.0->peft) (2.1.3)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from requests->transformers->peft) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from requests->transformers->peft) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from requests->transformers->peft) (1.26.13)
Requirement already satisfied: certifi>=2017.4.17 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from requests->transformers->peft) (2022.12.7)
Requirement already satisfied: mpmath>=0.19 in /home/apitots/anaconda3/envs/image_310/lib/python3.10/site-packages (from sympy->torch>=1.13.0->peft) (1.3.0)
Using cached peft-0.6.1-py3-none-any.whl (135 kB)
Installing collected packages: peft
  Attempting uninstall: peft
    Found existing installation: peft 0.6.0
    Uninstalling peft-0.6.0:
      Successfully uninstalled peft-0.6.0
Successfully installed peft-0.6.1

@younesbelkada
Contributor

Hi @AustinKimDev
I suspect you don't have the correct transformers version as well. Can you try:

pip install -U peft transformers
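When debugging this class of error, it helps to first confirm what is actually installed in the active environment (the one your notebook kernel uses). A small helper built on the standard library's importlib.metadata; `report_versions` is a made-up name for illustration:

```python
import importlib.metadata as md

def report_versions(packages=("peft", "transformers", "diffusers")):
    """Return a dict of installed versions, with None for absent packages."""
    versions = {}
    for name in packages:
        try:
            versions[name] = md.version(name)
        except md.PackageNotFoundError:
            versions[name] = None
    return versions

print(report_versions())
```

If a package shows an older version here than `pip install -U` reported, the upgrade likely went into a different environment than the one running the code.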

@AustinKimDev

Hi @AustinKimDev I suspect you don't have the correct transformers version as well, can you try:

pip install -U peft transformers

I tried this and it worked successfully! thanks!

@younesbelkada
Contributor

Thank you @AustinKimDev !


github-actions bot commented Dec 8, 2023

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@github-actions github-actions bot added the stale Issues that haven't received updates label Dec 8, 2023
@Marilaurel

Marilaurel commented Feb 8, 2024

I use free Google Colab without an SD UI, lol. I have 3 parts of code there and it works fine the first time. Here it is (I'm not a tech person, so please excuse my messy code; I've been into Python for only 3 days and assembled this using YouTube and the diffusers docs):

#1
!pip install -q diffusers transformers accelerate opencv-python
!pip install -U peft transformers

#2 
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained("stablediffusionapi/albedobase-xl", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.safety_checker = None
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

#3
pipe.load_lora_weights("marinalaurel/vintage_street", weight_name="Vintage_Street_Photo.safetensors", adapter_name="street")
pipe.load_lora_weights("marinalaurel/Polyhedron_LightingSDXL_Chiaroscuro", weight_name="polyhedron_chiaroscuro-000005.safetensors", adapter_name="light")
pipe.set_adapters(["street", "light"], adapter_weights=[0.9, 0.9])

prompt = "photo of a woman sitting in a dimly lit cafe, full body, far from camera, shallow depth of field, unfocused, expired film, polyhedron_chiaroscuro-000005, Vintage_Street_Photo, "
h=640
w=800
steps=25
guidance=7.5
lora_weight=0
num_images=3
neg = "angles, angular, prickly, water, splashes, small details, stripes, tiling, repeating pattern, anatomical mirage, digital painting strokes, digital textures, bad quality, bad anatomy, worst quality, low quality, low resolutions, extra fingers, blur, blurry, ugly, wrongs proportions, watermark, image artifacts, lowres, ugly, jpeg artifacts, deformed, noisy image"

images = pipe(prompt, num_images_per_prompt=num_images, cross_attention_kwargs={"scale": lora_weight}, height=h, width=w, num_inference_steps=steps, guidance_scale=guidance, negative_prompt=neg).images
for i in range(num_images):
  display(images[i])

So, when I run it for the first time, it works great. But when I change the prompt and run the 3rd part of the code (as I do when I use only one LoRA), it gives me this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
[<ipython-input-4-2560c1cf4f8a>](https://localhost:8080/#) in <cell line: 1>()
----> 1 pipe.load_lora_weights("marinalaurel/vintage_street", weight_name="Vintage_Street_Photo.safetensors", adapter_name="street")
      2 pipe.load_lora_weights("marinalaurel/Polyhedron_LightingSDXL_Chiaroscuro", weight_name="polyhedron_chiaroscuro-000005.safetensors", adapter_name="light")
      3 pipe.set_adapters(["street", "light"], adapter_weights=[0.9, 0.9])
      4 
      5 prompt = "photo of a woman sitting in a dimly lit cafe, full body, far from camera, shallow depth of field, unfocused, expired film, polyhedron_chiaroscuro-000005, Vintage_Street_Photo, "

1 frames
[/usr/local/lib/python3.10/dist-packages/diffusers/loaders/lora.py](https://localhost:8080/#) in load_lora_into_unet(cls, state_dict, network_alphas, unet, low_cpu_mem_usage, adapter_name, _pipeline)
    432 
    433             if adapter_name in getattr(unet, "peft_config", {}):
--> 434                 raise ValueError(
    435                     f"Adapter name {adapter_name} already in use in the Unet - please select a new adapter name."
    436                 )

ValueError: Adapter name street already in use in the Unet - please select a new adapter name.

Any ideas on how to fix that?

UPD. It seems like simply changing the adapter names helps, but this time Colab just says it's out of memory. I believe this could work, though. The only thing is that it's a roundabout way to do it.

UPD 2. Or... it doesn't even see the LoRAs. I tried them separately, then without LoRAs, and compared the results. Hmmm. Have to figure it out...
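The "already in use" error comes from registering the same adapter name twice on a pipeline that still holds the first load from the previous cell run; resetting first (e.g. calling pipe.unload_lora_weights(), mentioned earlier in this thread, before re-running cell #3) avoids it without renaming. A minimal plain-Python sketch of that guard logic; `AdapterRegistry` is a made-up stand-in, not the diffusers implementation:

```python
class AdapterRegistry:
    """Toy model of per-name adapter registration with a duplicate guard."""

    def __init__(self):
        self._adapters = {}

    def load(self, name, weights):
        # Re-loading under a name that is still registered fails, just like
        # re-running the load_lora_weights cell does.
        if name in self._adapters:
            raise ValueError(f"Adapter name {name} already in use.")
        self._adapters[name] = weights

    def unload_all(self):
        self._adapters.clear()

reg = AdapterRegistry()
reg.load("street", object())
reg.unload_all()               # reset first...
reg.load("street", object())   # ...then the same name loads cleanly
```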

@yiyixuxu yiyixuxu unpinned this issue Apr 2, 2024
@bigmover

How do PEFT versions correspond to PyTorch versions?
