
ValueError: The checkpoint you are trying to load has model type multi_modality but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date. #60

Open

Y-PanC opened this issue Sep 30, 2024 · 1 comment

Y-PanC commented Sep 30, 2024

Hello!
I downloaded this model to use with the LLaMA-Factory framework, and when deploying the API I got the following error:
[2024-10-01 00:15:35,483] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[INFO|configuration_utils.py:670] 2024-10-01 00:15:38,538 >> loading configuration file /mnt/ssd2/models/deepseek-vl-7b-chat/config.json
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1023, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 725, in getitem
raise KeyError(key)
KeyError: 'multi_modality'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/bin/llamafactory-cli", line 8, in
sys.exit(main())
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/cli.py", line 79, in main
run_api()
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/api/app.py", line 129, in run_api
chat_model = ChatModel()
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/chat/chat_model.py", line 52, in init
self.engine: "BaseEngine" = HuggingfaceEngine(model_args, data_args, finetuning_args, generating_args)
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/chat/hf_engine.py", line 54, in init
tokenizer_module = load_tokenizer(model_args)
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/model/loader.py", line 69, in load_tokenizer
config = load_config(model_args)
File "/home/ubuntu/pqj/math/LLaMA-Factory/src/llamafactory/model/loader.py", line 122, in load_config
return AutoConfig.from_pretrained(model_args.model_name_or_path, **init_kwargs)
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1025, in from_pretrained
raise ValueError(
ValueError: The checkpoint you are trying to load has model type multi_modality but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
How can I fix this? My transformers version is 4.45.0; my full environment is shown in the attached screenshot.

[screenshot of the environment]
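For context, the traceback shows the failure happens in AutoConfig.from_pretrained: it looks up config.json's "model_type" ("multi_modality") in Transformers' built-in registry, and DeepSeek-VL's custom type is not there, so no Transformers upgrade will make it appear. Below is a minimal sketch of the failing lookup and one possible workaround; it assumes the deepseek_vl package from https://github.com/deepseek-ai/DeepSeek-VL is installed and that importing its models module registers the custom classes with the Auto* registries (via AutoConfig.register in its modeling code):

```python
# Minimal sketch, not a verified fix: reproduce the lookup failure and one
# possible workaround, assuming the deepseek_vl package is installed from
# https://github.com/deepseek-ai/DeepSeek-VL.
from transformers import AutoConfig

MODEL_PATH = "/mnt/ssd2/models/deepseek-vl-7b-chat"

# This is the call that fails inside LLaMA-Factory's load_config():
# "multi_modality" is not in stock Transformers' model_type registry.
try:
    AutoConfig.from_pretrained(MODEL_PATH)
except ValueError as e:
    print(e)  # "... model type multi_modality ... does not recognize ..."

# Importing DeepSeek-VL's modeling code is assumed to register the custom
# config/model classes (AutoConfig.register / AutoModelForCausalLM.register),
# after which the same lookup succeeds.
from deepseek_vl.models import MultiModalityCausalLM  # noqa: F401  (import for side effect)

config = AutoConfig.from_pretrained(MODEL_PATH)
print(type(config).__name__)
```

Note this only makes the config resolvable in a standalone script; getting LLaMA-Factory itself to serve the model may still require that it supports this architecture.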

@LIMYOONA8 commented

How did you solve this?
