🤗 PEFT x 🧨 diffusers -- integration alert 🔥 #5489
Comments
Amazing ❤️ In sync with this, I've been preparing an update for the FFusion/400GB-LoraXL repo (around 100 LoRAs). I initially planned to upload the additional 100 LoRAs today, but these will be temporarily postponed to ensure their compatibility with the PEFT approach :) Upcoming tests: I will also check whether dynamic weights work correctly together.
It's way easier when changed to just ff.101 only 🤟 🥃 PS: I'll start moving every non-LyCORIS LoRA from our Civitai page to the Hub for testing.
Amazing!
The following prompt was used in the example above: `prompt = "toy_face of a hacker with a hoodie, pixel art"`
Cc: @pdoane, since you took part in some of the earlier design discussions around multi-adapter support. Cc @isidentical @takuma104 as well.
Hi, thank you for introducing this feature. However, I encountered some issues when trying to enable `enable_xformers_memory_efficient_attention`.
This is my code:

```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "./models/checkpoint/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()
res = pipe(prompt="1girl", num_inference_steps=20)
```
Could you please post a fully reproducible snippet, preferably in a Colab notebook? It would also be helpful if you opened a new issue for this. Cc: @younesbelkada
I think adding this code might fix the issue:
There are additional problems too. It seems that the process of …
Please create separate issues to help us track these better :-)
Hi @AnyISalIn!
Thank you. I have created an issue and a PR for it: #5504, #5506.
Thanks for the fix, @AnyISalIn!
Very nice!
The API is generic. It should work for SD too. If it does not, please open an issue and tag me and @younesbelkada :-)
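For reference, a minimal sketch of the same API on SD 1.5; the LoRA repo and adapter name below are placeholders, not taken from this thread:

```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Any SD 1.5 LoRA in safetensors format should load the same way
# (hypothetical repo shown here).
pipe.load_lora_weights("some-user/some-sd15-lora", adapter_name="style")
image = pipe("a photo of a cat", num_inference_steps=25).images[0]
```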
@sayakpaul Rechecking: when loading a 1.5 LoRA on SD 1.5, this is actually the same error I'm getting, as reported here: #5522
Separate issues please 😅
Not really an issue, just a question about the intended behavior. No worries. :-) This behavior is also useful, for example, for merging characters into a consistent-looking character.
Maybe pay heed to how you're using the trigger words and the scales during the merging process. We talk about it at length in the guide: https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference
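A short sketch of what that looks like, assuming the "toy" and "pixel" adapters from the example in this thread are already loaded:

```python
# Weight each adapter individually when merging styles.
pipe.set_adapters(["toy", "pixel"], adapter_weights=[1.0, 0.5])
# Keep each adapter's trigger word in the prompt.
prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(prompt, num_inference_steps=30).images[0]
```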
Are the LoRAs supposed to work with the SDXL Refiner?
They weren't trained on the latent space of the Refiner, so I don't have any reason to believe that they will work.
Thanks so much for this awesome work; this finally makes multi-LoRA work with Diffusers!
How could I resolve this?
@Captain272 you can just do …
It's still not working; I have some problems. Here's the stack trace, feat. my …
Hi @AustinKimDev, try `pip install -U peft transformers`.
I tried this and it worked successfully! Thanks!
Thank you, @AustinKimDev!
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I use free Google Colab without an SD UI lol. I have 3 parts of code there, and it works fine the first time. Here it is (I'm not a tech person, so excuse my messy code please; I've only been into Python for 3 days and assembled this code using YouTube and the diffusers docs):
So, when I run it the first time it works great. But when I change the prompt and run the 3rd part of the code (as I do when using only one LoRA), it gives me this error:
Any ideas on how to fix that? UPD: Seems like literally changing the adapter names helps, but this time Colab just says it's out of memory. But I believe this could work; the only thing is that it's a crooked way to do it. UPD 2: Or... it doesn't even see the LoRAs. Tried them separately, then without LoRAs, and compared results. Hmmm. Have to figure it out...
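A guess at what's happening, not a confirmed fix: re-running the load cell tries to register adapter names that already exist on the pipeline. Unloading previous LoRAs before reloading, or re-activating already-loaded adapters instead of reloading them, may help:

```python
# Drop previously loaded adapters before loading again.
pipe.unload_lora_weights()
pipe.load_lora_weights(
    "CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy"
)
# ...or, if the adapters are still loaded, just re-activate them instead:
# pipe.set_adapters(["toy"], adapter_weights=[1.0])
```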
How do PEFT versions correspond to PyTorch versions?
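As far as I know there isn't a strict one-to-one mapping; each PEFT release declares a minimum supported PyTorch version in its requirements. A quick sketch for reporting the installed versions when checking compatibility or filing an issue:

```python
# Print the installed versions of the relevant packages.
import torch
import diffusers
import peft
import transformers

for mod in (torch, diffusers, peft, transformers):
    print(mod.__name__, mod.__version__)
```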
Dear community members,

Over the past few weeks, @younesbelkada and @pacman100 helped us integrate `peft` into `diffusers`. Taking advantage of this integration, users can easily perform multi-adapter inference with control over `scale`, switch between different adapters, do weighted adapter inference, etc. Below is an example of how you can combine multiple adapters:
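(The example itself didn't survive this capture; below is a sketch reconstructed from the linked tutorial, using the toy-face adapter mentioned in this thread plus the pixel-art adapter from the docs. Repo and weight names follow the tutorial to the best of my knowledge.)

```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Load two LoRAs under distinct adapter names.
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
# Combine both adapters, each with its own weight.
pipe.set_adapters(["toy", "pixel"], adapter_weights=[0.5, 1.0])
prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(prompt, num_inference_steps=30).images[0]
```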
Know more about this in the doc here:
https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference

Be sure to install `peft` and `diffusers` from `main` to take advantage of this feature.