Add sampling API back to LlamaTokenDataArray; Add DRY and XTC Samplers #659
llama-cpp-rs-check.yml
on: pull_request
Run Tests on LLama Cpp Rs
2m 49s
Check that it builds on mac
1m 10s
Check that it builds on windows
4m 26s
Matrix: Check that it builds on various targets
Annotations
5 errors and 2 warnings
Check that it builds on various targets (linux/amd64)
buildx failed with: ERROR: failed to solve: process "/bin/sh -c cargo build --bin simple --features cuda" did not complete successfully: exit code: 101
Check that it builds on various targets (linux/arm64)
The job was canceled because "linux_amd64" failed.
Check that it builds on various targets (linux/arm64)
The operation was canceled.
Run Tests on LLama Cpp Rs
Process completed with exit code 101.
Check that it builds on windows
Process completed with exit code 1.
Check that it builds on various targets (linux/amd64)
ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
Run Tests on LLama Cpp Rs
ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
Artifacts
Produced during runtime
Name | Size
---|---
utilityai~llama-cpp-rs~V0ZPA8.dockerbuild | 45.3 KB