Support PEFT models in pipelines #29517
Conversation
Force-pushed from 3df8ecc to cbf796b
Force-pushed from 0d10267 to 0cf09f8
Hi @B-Step62, this branch should fix the issue now! Try it out, and if you don't find any further problems, I'll ping for core maintainer review and merge this soon.
Thanks for the quick fix, @Rocketknight1! I have tested my examples and confirmed the error message is no longer printed :)
cc @amyeroberts for core maintainer review in that case
Thanks for adding this support!
Change looks good - just a request to abstract out the peft model check
@@ -1038,7 +1038,12 @@ def check_model_type(self, supported_models: Union[List[str], dict]):
                 else:
                     supported_models_names.append(model.__name__)
             supported_models = supported_models_names
-        if self.model.__class__.__name__ not in supported_models:
+        if "Peft" in self.model.__class__.__name__ and hasattr(self.model, "base_model"):
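Since the hunk above is truncated, here is a minimal sketch of the idea the diff implements; the helper name `_effective_model_class_name` is hypothetical, and the `base_model.model` attribute path follows peft's LoRA wrapper layout (an assumption; prompt-learning wrappers differ):

```python
# Sketch only, not the exact merged code: resolve the class name of the
# underlying model before comparing it with the pipeline's supported list.
def _effective_model_class_name(model) -> str:
    # PEFT wrappers (e.g. PeftModelForCausalLM) expose the original
    # transformers model at base_model.model for LoRA-style adapters.
    if "Peft" in model.__class__.__name__ and hasattr(model, "base_model"):
        return model.base_model.model.__class__.__name__
    return model.__class__.__name__
```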
Hmmm, checking if `Peft` is in the name seems like a flaky way to check this.
What I'd suggest is adding a more general `is_peft_model` utility under `modeling_utils.py`, which is tested (and which we verify with @younesbelkada is correct) and can then be used everywhere. It should also check `is_peft_available`.
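A minimal sketch of what such a utility could look like, assuming the names suggested above (`is_peft_available` already exists in `transformers.utils`; the helper itself is hypothetical, not merged code):

```python
# Hypothetical sketch of the suggested is_peft_model helper.
from transformers.utils import is_peft_available


def is_peft_model(model) -> bool:
    """Return True if `model` is a peft.PeftModel wrapper."""
    if is_peft_available():
        from peft import PeftModel  # lazy import so peft stays an optional dependency

        return isinstance(model, PeftModel)
    return False
```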
Yes, you can check if the model is an instance of a `Peftxxx` class; however, we already support PEFT model inference: https://huggingface.co/docs/transformers/peft. Users that pass a `Peftxxx` class directly can be considered as misusing the API, as they only need to pass a valid path to an adapter model, or first load a PEFT model using `AutoModelForxxx` and pass that to the pipeline. See for example:
`def test_peft_pipeline(self):`
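The referenced test isn't reproduced above; roughly, the flow it exercises looks like this sketch (the adapter and base-model ids are illustrative, in the style of the PEFT docs):

```python
# Sketch of the recommended flow: load the adapter through AutoModelFor*,
# then hand the resulting model to pipeline(). Ids are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("Hello, my name is")[0]["generated_text"])
```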
Thanks for the PR! IMO we shouldn't support that, and rather educate users properly on how to use PEFT + pipeline: first load the PEFT adapter with the `AutoModelxxx` interface and then pass that model to the pipeline, or pass a valid path to a PEFT model! See my comment here: #29517 (comment)
Also note creating a brand new PEFT model with
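For reference, the second route mentioned above (passing a path to a PEFT model straight to `pipeline`) would look roughly like this sketch; the adapter id is again illustrative:

```python
# Sketch: pipeline() can resolve a PEFT adapter id/path directly when peft is
# installed, loading the base model declared in the adapter config.
from transformers import pipeline

pipe = pipeline("text-generation", model="ybelkada/opt-350m-lora")
```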
My bad! Your solution sounds good - I guess we can close this PR and just post something in the original issue at #29395
Ok, sounds good! Sorry for responding late 😭 Yes, the proposed plan sounds good!
Thank you for working on this issue @younesbelkada @Rocketknight1!
Unfortunately, this solution doesn't work as-is in our case. We want to save/load the adapter (plus the base model and tokenizer) locally as a pipeline, without needing to interact with the Hub, to avoid the flakiness of downloading heavy files over the network. Let me ask a few follow-up questions in this regard:
Thanks @B-Step62 for the clarification!
Our pipelines check for supported classes but aren't PEFT-aware, so they get confused and throw errors for PEFT models. This PR changes the supported class check to use the actual underlying model of a PEFT model.
Fixes #29395
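For context, a hedged repro sketch of the failure mode from #29395 that this check addresses (the model id and LoRA config are illustrative):

```python
# Repro sketch: wrapping a model with peft and passing the PeftModel to
# pipeline() used to trip the supported-class check and log
# "The model 'PeftModelForCausalLM' is not supported for text-generation."
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
peft_model = get_peft_model(base, LoraConfig(task_type="CAUSAL_LM"))
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

pipe = pipeline("text-generation", model=peft_model, tokenizer=tokenizer)
```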