vulkan build failed #454
Referenced: llama-cpp-rs/llama-cpp-sys-2/Cargo.toml, line 10 at commit 8c1430d
missing
a PR would be welcome! Vulkan is supported almost entirely by not me. If you want to ensure it stays not broken as llama-cpp updates, feel free to add a GitHub workflow to test it.
Why does the project use gen_vulkan_shaders? When compiling llama.cpp with Vulkan, there's no need to compile the shaders manually; all that's needed is to enable the corresponding CMake flag.
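For illustration only, here is a minimal sketch of what driving the build through llama.cpp's own CMake could look like from llama-cpp-sys-2's build.rs, using the cmake crate. GGML_VULKAN is llama.cpp's upstream switch for the Vulkan backend (older revisions called it LLAMA_VULKAN); everything else here (paths, link directives) is an assumption, not the project's actual build script.

```rust
// Hypothetical build.rs sketch: delegate the whole build, including
// vulkan-shaders-gen, to llama.cpp's own CMake instead of generating
// the shaders by hand. Assumes `cmake` as a build-dependency and a
// `llama.cpp` submodule at the crate root.
fn main() {
    let dst = cmake::Config::new("llama.cpp")
        // Upstream flag for the Vulkan backend; with it on, CMake builds
        // and runs vulkan-shaders-gen itself, so no manual shader step.
        .define("GGML_VULKAN", "ON")
        .build();

    println!("cargo:rustc-link-search=native={}/lib", dst.display());
    println!("cargo:rustc-link-lib=llama");
}
```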
there was an attempt, which was decided against in #221. If I recall correctly, I could not get static linking working, which made building Docker images (which is how we deployed this) tricky. If someone is willing to bring that home, I'm fine accepting a PR. We (Dial AI) do not currently use this in our inference solution, so it's no longer a hard requirement.
I'll try to create a new PR with CMake.
CUDA on Linux x86 is the only hard requirement (I currently test this manually, as it's impossibly slow on the CI runners; see also #398). There's a best effort to maintain CPU on Linux; everything else I'm unable to test outside of CI, so if you can get something clean working CI-wise, that's great! Static linking I can go without, but I would prefer that it be supported on the llama.cpp side of things.
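On the static-linking point, here is a hedged sketch of how it might be requested through the same CMake route, assuming the stock BUILD_SHARED_LIBS behavior; the library names are illustrative and depend on the llama.cpp revision being vendored:

```rust
// Hypothetical extension of the build.rs sketch above: ask CMake for
// static archives and link them statically, which is what makes slim
// Docker images (scratch/distroless) straightforward to build.
fn main() {
    let dst = cmake::Config::new("llama.cpp")
        .define("GGML_VULKAN", "ON")
        // Standard CMake switch: emit .a archives instead of shared libs.
        .define("BUILD_SHARED_LIBS", "OFF")
        .build();

    println!("cargo:rustc-link-search=native={}/lib", dst.display());
    // Illustrative names only; the actual artifacts vary by revision.
    println!("cargo:rustc-link-lib=static=llama");
    println!("cargo:rustc-link-lib=static=ggml");
}
```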
gen_vulkan_shaders failed at
the error happened at
with this error output:

```
dir ./llama.cpp\ggml\src\vulkan-shaders not exists
dir ./llama.cpp\ggml\src\ggml-cuda exists
```
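The mixed ./ and \ separators in that output, plus the fact that ggml-cuda exists while vulkan-shaders does not, suggest the build script hard-codes a shader path that upstream has since moved. A sketch of a more defensive lookup follows; the candidate list is an assumption based on llama.cpp's tree having been reorganized over time.

```rust
use std::path::PathBuf;

// Hypothetical helper: probe the locations the Vulkan shader sources have
// lived at across llama.cpp revisions instead of hard-coding one of them.
fn find_vulkan_shaders(llama_root: &str) -> Option<PathBuf> {
    let candidates = [
        "ggml/src/vulkan-shaders",             // older tree layout
        "ggml/src/ggml-vulkan/vulkan-shaders", // later tree layout
    ];
    candidates
        .iter()
        // PathBuf::join keeps separators consistent on every platform.
        .map(|rel| PathBuf::from(llama_root).join(rel))
        .find(|path| path.is_dir())
}

fn main() {
    match find_vulkan_shaders("./llama.cpp") {
        Some(dir) => println!("vulkan shaders found at {}", dir.display()),
        None => panic!("vulkan-shaders directory not found in any known location"),
    }
}
```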