
Add sampling API back to LlamaTokenDataArray; Add DRY and XTC Samplers #659

Re-run triggered: December 8, 2024, 16:09
Status: Failure
Total duration: 4m 40s
Artifacts: 1
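For context on the samplers this PR adds, below is a minimal standalone Rust sketch of the XTC ("exclude top choices") filtering step, assuming the commonly described algorithm: with some probability, drop every candidate at or above a probability threshold except the last one, so one strong candidate always survives. The `Candidate` struct, the `apply_xtc` function, and the parameter values are illustrative assumptions, not the crate's actual `LlamaTokenDataArray` API.

```rust
/// Illustrative sketch only (not the crate's implementation or API).
#[derive(Debug, Clone, Copy)]
struct Candidate {
    token_id: u32,
    prob: f32,
}

/// Applies the XTC filter in place. `candidates` is assumed to be sorted by
/// probability, highest first; `roll` is a uniform random value in [0, 1)
/// supplied by the caller so the sketch stays dependency-free.
fn apply_xtc(candidates: &mut Vec<Candidate>, xtc_probability: f32, xtc_threshold: f32, roll: f32) {
    // Only apply the filter with probability `xtc_probability`.
    if roll >= xtc_probability {
        return;
    }
    // Count candidates at or above the threshold.
    let above = candidates.iter().filter(|c| c.prob >= xtc_threshold).count();
    // Remove all of them except the last (least probable) one, so at least one
    // above-threshold candidate remains.
    if above > 1 {
        candidates.drain(..above - 1);
    }
}

fn main() {
    let mut candidates = vec![
        Candidate { token_id: 7, prob: 0.55 },
        Candidate { token_id: 3, prob: 0.25 },
        Candidate { token_id: 9, prob: 0.12 },
        Candidate { token_id: 1, prob: 0.08 },
    ];
    apply_xtc(&mut candidates, 0.5, 0.1, 0.2);
    // Tokens 7 and 3 (the top choices) are removed; token 9 survives as the
    // last candidate at or above the 0.1 threshold.
    println!("{candidates:?}");
}
```

Passing `roll` in explicitly keeps the sketch free of RNG dependencies; a real sampler would draw it internally and, as the `above > 1` guard does here, skip the filter when fewer than two candidates clear the threshold.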

llama-cpp-rs-check.yml

on: pull_request
Run Tests on LLama Cpp Rs: 2m 49s
Check that it builds on mac: 1m 10s
Check that it builds on windows: 4m 26s
Matrix: Check that it builds on various targets

Annotations

5 errors and 2 warnings
Errors:
Check that it builds on various targets (linux/amd64): buildx failed with: ERROR: failed to solve: process "/bin/sh -c cargo build --bin simple --features cuda" did not complete successfully: exit code: 101
Check that it builds on various targets (linux/arm64): The job was canceled because "linux_amd64" failed.
Check that it builds on various targets (linux/arm64): The operation was canceled.
Run Tests on LLama Cpp Rs: Process completed with exit code 101.
Check that it builds on windows: Process completed with exit code 1.

Warnings:
Check that it builds on various targets (linux/amd64): ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
Run Tests on LLama Cpp Rs: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636

Artifacts

Produced during runtime
utilityai~llama-cpp-rs~V0ZPA8.dockerbuild: 45.3 KB