
AttributeError: 'Model' object has no attribute 'n_kv_heads' - Using MLX #1225

Closed
lemig opened this issue Oct 24, 2024 · 4 comments · Fixed by #1260

@lemig

lemig commented Oct 24, 2024

Describe the issue as clearly as possible:

I am trying the mlx-lm examples from your documentation:
https://dottxt-ai.github.io/outlines/latest/reference/models/mlxlm/

I encounter 2 issues:

First issue:
TypeError: MLXLM.generate() got an unexpected keyword argument 'temperature'
I guess the syntax in the documentation is no longer up to date.

Omitting the temperature argument, I then get:

Second issue:
AttributeError: 'Model' object has no attribute 'n_kv_heads'

That happens with mlx-community/Phi-3.5-mini-instruct-4bit. I tried another model (mlx-community/Meta-Llama-3.1-8B-Instruct-4bit) and got the same error.

Steps/code to reproduce the bug:

from outlines import models, generate
model = models.mlxlm("mlx-community/Phi-3.5-mini-instruct-4bit")
generator = generate.text(model)
answer = generator("A prompt", temperature=2.0) # first issue
answer = generator("A prompt") # second issue
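For the first issue, a possible workaround is to configure sampling through a sampler object instead of passing `temperature` to the generator call. This is a hedged sketch based on the `outlines.samplers` module in outlines 0.1.x; verify the sampler signature against your installed version:

```python
def make_generator(temperature: float = 2.0):
    """Build a text generator with a multinomial sampler (sketch).

    Assumes the outlines 0.1.x API, where sampling parameters such as
    temperature live on a sampler object rather than on the call itself.
    """
    # Deferred imports so this sketch can be defined without mlx installed.
    from outlines import models, generate, samplers

    model = models.mlxlm("mlx-community/Phi-3.5-mini-instruct-4bit")
    sampler = samplers.multinomial(temperature=temperature)
    return generate.text(model, sampler)
```

With this approach the call would become `make_generator(2.0)("A prompt")`, with no per-call `temperature` keyword.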

Expected result:

no error

Error message:

(tedson) miguel@Miguels-MacBook-Air tedson % python
Python 3.11.10 (main, Oct  3 2024, 02:26:51) [Clang 14.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from outlines import models, generate
>>> model = models.mlxlm("mlx-community/Phi-3.5-mini-instruct-4bit")
Fetching 11 files: 100%|██████████| 11/11 [00:00<00:00, 102073.77it/s]
>>> generator = generate.text(model)
>>> answer = generator("A prompt", temperature=2.0) 
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/miguel/miniforge3/envs/tedson/lib/python3.11/site-packages/outlines/generate/api.py", line 504, in __call__
    completions = self.model.generate(
                  ^^^^^^^^^^^^^^^^^^^^
TypeError: MLXLM.generate() got an unexpected keyword argument 'temperature'
>>> answer = generator("A prompt")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/miguel/miniforge3/envs/tedson/lib/python3.11/site-packages/outlines/generate/api.py", line 504, in __call__
    completions = self.model.generate(
                  ^^^^^^^^^^^^^^^^^^^^
  File "/Users/miguel/miniforge3/envs/tedson/lib/python3.11/site-packages/outlines/models/mlxlm.py", line 41, in generate
    return "".join(list(streamer))
                   ^^^^^^^^^^^^^^
  File "/Users/miguel/miniforge3/envs/tedson/lib/python3.11/site-packages/outlines/models/mlxlm.py", line 112, in stream
    for (token, prob), n in zip(
  File "/Users/miguel/miniforge3/envs/tedson/lib/python3.11/site-packages/outlines/models/mlxlm.py", line 172, in generate_step
    if isinstance(self.model.n_kv_heads, int)
                  ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/miguel/miniforge3/envs/tedson/lib/python3.11/site-packages/mlx/nn/layers/base.py", line 103, in __getattr__
    super(Module, self).__getattribute__(key)
AttributeError: 'Model' object has no attribute 'n_kv_heads'
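The second failure comes from outlines 0.1.1 reading `model.n_kv_heads` during KV-cache setup, an attribute that newer mlx-lm model classes no longer expose at the top level. The sketch below illustrates the failure mode and a defensive lookup with dummy classes; the attribute names (`args`, `num_key_value_heads`) are illustrative assumptions, not the real mlx-lm classes:

```python
class Args:
    """Stand-in for a model config object that records KV-head count."""
    num_key_value_heads = 8


class DummyModel:
    """Stand-in for a model with no top-level n_kv_heads attribute."""
    args = Args()


def resolve_n_kv_heads(model):
    """Look up the KV-head count defensively.

    Tries the legacy top-level attribute first, then falls back to the
    model's config object (names assumed for illustration).
    """
    n = getattr(model, "n_kv_heads", None)
    if n is None:
        n = getattr(getattr(model, "args", None), "num_key_value_heads", None)
    if n is None:
        raise AttributeError("cannot determine the number of KV heads")
    return n


print(resolve_n_kv_heads(DummyModel()))
```

The actual fix in the linked PR may take a different route (e.g. delegating cache construction to mlx-lm itself); this only shows why the attribute access fails on newer mlx-lm versions.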

Outlines/Python version information:

Version information

```
0.1.1
Python 3.11.10 (main, Oct 3 2024, 02:26:51) [Clang 14.0.6 ]
accelerate==1.0.1
aiohappyeyeballs==2.4.3
aiohttp==3.10.10
aiosignal==1.3.1
airportsdata==20241001
annotated-types==0.7.0
anyio==4.6.2.post1
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.4.0
click==8.1.7
cloudpickle==3.1.0
datasets==3.0.1
dill==0.3.8
diskcache==5.6.3
fastapi==0.115.2
filelock==3.16.1
frozenlist==1.4.1
fsspec==2024.6.1
h11==0.14.0
huggingface-hub==0.26.0
idna==3.10
interegular==0.3.3
Jinja2==3.1.4
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
lark==1.2.2
MarkupSafe==3.0.2
mlx==0.19.0
mlx-lm==0.19.2
mpmath==1.3.0
multidict==6.1.0
multiprocess==0.70.16
nest-asyncio==1.6.0
networkx==3.4.2
numpy==1.26.4
outlines==0.1.1
outlines_core==0.1.14
packaging==24.1
pandas==2.2.3
propcache==0.2.0
protobuf==5.28.3
psutil==6.1.0
pyarrow==17.0.0
pycountry==24.6.1
pydantic==2.9.2
pydantic_core==2.23.4
python-dateutil==2.9.0.post0
pytz==2024.2
PyYAML==6.0.2
referencing==0.35.1
regex==2024.9.11
requests==2.32.3
rpds-py==0.20.0
safetensors==0.4.5
sentencepiece==0.2.0
six==1.16.0
sniffio==1.3.1
starlette==0.40.0
sympy==1.13.1
tokenizers==0.20.1
torch==2.5.0
tqdm==4.66.5
transformers==4.45.2
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
uvicorn==0.32.0
xxhash==3.5.0
yarl==1.15.5
```

Context for the issue:

I can't use Outlines with mlx-lm on my MacBook Air M2 (24 GB), and Outlines with transformers performs terribly on Apple Silicon.

@lemig lemig added the bug label Oct 24, 2024
@vacmar01

vacmar01 commented Nov 2, 2024

Have the same problem with "mlx-community/Llama-3.2-3B-Instruct-8bit" with the following version numbers:

mlx==0.19.3
mlx-lm==0.19.2
outlines==0.1.1
outlines_core==0.1.14

@cmcmaster1
Contributor

I have made a PR to fix this: #1260

@vacmar01

Awesome, thank you! I hope it will be merged fast! :)

@scampion
Contributor

I can confirm, the PR fixes it.
