
Error when running the first demo #31

Open

3244we opened this issue Oct 21, 2024 · 4 comments

Comments

@3244we commented Oct 21, 2024

The seen_tokens attribute is deprecated and will be removed in v4.41. Use the cache_position model input instead.
Traceback (most recent call last):
File "/hpc2hdd/home/yhuang489/junhao/Emu3/emu3.py", line 70, in
outputs = model.generate(
File "/hpc2hdd/home/yhuang489/anaconda3/envs/emu/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/hpc2hdd/home/yhuang489/anaconda3/envs/emu/lib/python3.10/site-packages/transformers/generation/utils.py", line 2024, in generate
result = self._sample(
File "/hpc2hdd/home/yhuang489/anaconda3/envs/emu/lib/python3.10/site-packages/transformers/generation/utils.py", line 2992, in _sample
next_token_scores = logits_processor(input_ids, next_token_logits)
File "/hpc2hdd/home/yhuang489/anaconda3/envs/emu/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 98, in call
scores = processor(input_ids, scores)
File "/hpc2hdd/home/yhuang489/anaconda3/envs/emu/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 1339, in call
prefix_allowed_tokens = self._prefix_allowed_tokens_fn(batch_id, sent)
File "/hpc2hdd/home/yhuang489/junhao/Emu3/emu3/mllm/utils_emu3.py", line 50, in call
height = self.height[batch_id] if self.height.shape[0] > 1 else self.height[0]
IndexError: tuple index out of range

@3244we (Author) commented Oct 21, 2024

Here height = tensor(90) and batch_id = 0. After changing the code to height = self.height and width = self.width, it runs normally.

@zhaixingang commented

lol, thanks for your solution

@ryanzhangfan (Collaborator) commented

We recently updated processing_emu3.py and utils_emu3.py to support batch inference, but the demo code in README.md had not been adapted to the batch-inference interface. We have now updated the demo code in README.md; alternatively, you can try image_generation.py or multimodal_understanding.py.

@ColorDavid commented

Is there an inference interface that supports in-context learning / few-shot learning? Or an interface or framework that supports multi-image inference?
