
ModuleNotFoundError: No module named 'modeling_mamba' #121

Open
yanhuixie opened this issue Sep 20, 2024 · 1 comment
Comments

@yanhuixie

All steps are based on these docs.
https://ryzenai.docs.amd.com/en/latest/inst.html
https://ryzenai.docs.amd.com/en/latest/llm_flow.html
https://github.com/amd/RyzenAI-SW/blob/main/example/transformers/models/llm/docs/README.md
The previous steps completed without any errors until
python run_smoothquant.py --model_name llama-2-7b-chat --task quantize
raised the error below.
After a brief investigation, I found that mamba_ssm is a tool designed specifically for NVIDIA GPUs and CUDA. Why is it being used here?
Any suggestion is appreciated.

(ryzenai-transformers) PS D:\path\to\RyzenAI\transformers\models\llm> python run_smoothquant.py --model_name llama-2-7b-chat --task quantize
Traceback (most recent call last):
  File "D:\path\to\RyzenAI\transformers\models\llm\run_smoothquant.py", line 12, in <module>
    import llm_eval
  File "D:\path\to\RyzenAI\transformers\tools\llm_eval.py", line 41, in <module>
    from modeling_mamba import MambaForCausalLM
ModuleNotFoundError: No module named 'modeling_mamba'

@shivani-athavale

Hi @yanhuixie,

I tried to reproduce this on my end, but I do not get this error. I am able to import modeling_mamba with transformers version 4.39.
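As a quick sanity check before re-running the quantization script, you can ask Python directly whether modeling_mamba (and transformers) resolve on the current sys.path. This is just a diagnostic sketch; it uses only the standard library:

```python
# Diagnostic sketch: report where a module would be imported from,
# without actually importing it (so a broken module won't crash the check).
import importlib.util


def module_origin(name):
    """Return the file a module would load from, or None if it is unresolvable."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None


for name in ("transformers", "modeling_mamba"):
    origin = module_origin(name)
    print(f"{name}: {origin or 'NOT FOUND on sys.path'}")
```

If modeling_mamba prints as NOT FOUND here, the environment setup (PYTHONPATH) is the likely culprit rather than the script itself.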

Perhaps you can follow the instructions at https://github.com/amd/RyzenAI-SW/tree/main/example/transformers/models/llm again; maybe a step was missed?

The modeling_mamba.py file is located at https://github.com/amd/RyzenAI-SW/tree/main/example/transformers/models/llm/mamba. After completing the installation steps, verify that this directory has been added to the PYTHONPATH in your environment.
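As a workaround sketch (the path below is a placeholder, not your actual install location), you can also append the mamba directory to the module search path at runtime, before the failing import:

```python
import importlib.util
import os
import sys

# Placeholder path: point this at the mamba folder inside your local
# clone of RyzenAI-SW (example/transformers/models/llm/mamba).
mamba_dir = os.path.join("RyzenAI-SW", "example", "transformers", "models", "llm", "mamba")

# Append the directory only if modeling_mamba is not already importable.
if importlib.util.find_spec("modeling_mamba") is None:
    sys.path.append(mamba_dir)

# After this, `from modeling_mamba import MambaForCausalLM` should resolve,
# provided modeling_mamba.py actually exists in mamba_dir.
```

Fixing PYTHONPATH in the environment (as the install instructions do) is the cleaner long-term fix; this snippet is only useful for confirming that the missing path is indeed the problem.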
