
fix(modell_server): deprecated make arguments for llamacpp server #704

Merged
merged 4 commits into containers:main on Jul 30, 2024

Conversation

@axel7083 (Contributor) commented Jul 29, 2024

@lmilbaum (Collaborator) commented

Would it make more sense to update the Renovate PR #703 instead?

@axel7083 (Contributor, Author) commented Jul 29, 2024

> Would it make more sense to update the Renovate PR #703 instead?

Hi @lmilbaum, there are two elements in this PR:

  • First, the fix for the Containerfile, as mentioned in the description: the LLAMA_CUBLAS argument is deprecated (a sketch of the change follows below).
  • Second, the bump of libraries.

I did not see #703 when I opened this PR this morning; thanks for informing me. I will remove the library bump to keep the focus on the fix.
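
For context, a minimal sketch of the kind of Containerfile change this refers to, assuming the CUDA variant builds llama-cpp-python via CMAKE_ARGS (the exact paths and surrounding lines in the repo may differ):

    # Before: passes the build flag that llama.cpp has since deprecated
    ENV CMAKE_ARGS="-DLLAMA_CUBLAS=on"
    RUN pip install llama-cpp-python

    # After: uses the replacement flag introduced by llama.cpp
    ENV CMAKE_ARGS="-DGGML_CUDA=on"
    RUN pip install llama-cpp-python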

@axel7083 changed the title from "fix(modell_server): deprecated lib and bump pip packages" to "fix(modell_server): deprecated lib" on Jul 29, 2024
@axel7083 changed the title from "fix(modell_server): deprecated lib" to "fix(modell_server): deprecated make arguments for llamacpp server" on Jul 29, 2024
@Gregory-Pereira (Collaborator) commented

Shouldn't this be:

CMAKE_ARGS="-DGGML_CUDA=on"

according to the docs? I guess I'm confused why we are turning it off when this variant is specifically CUDA.
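
For reference, the llama-cpp-python docs show enabling CUDA at install time roughly as below; this is a general example, not necessarily the exact line used in this PR's Containerfile:

    # Build llama-cpp-python with CUDA support enabled (per the llama-cpp-python docs)
    CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python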

@rhatdan (Member) left a comment

LGTM

@rhatdan merged commit 7409fdc into containers:main Jul 30, 2024
7 checks passed
4 participants