Actions: gtygo/llama.cpp

flake8 Lint

8 workflow runs

cuda : optimize argmax (#10441)
flake8 Lint #8: Commit a5e4759 pushed by gtygo
November 22, 2024 04:16 19s master
delete unused white space
flake8 Lint #7: Commit 804ddd7 pushed by gtygo
August 10, 2024 09:03 23s master
Reuse querybatch to reduce frequent memory allocation
flake8 Lint #6: Commit 88105b7 pushed by gtygo
August 9, 2024 17:44 53m 26s master
retrieval
flake8 Lint #5: Commit fe6dc61 pushed by gtygo
August 9, 2024 17:12 28m 7s master
llama : add support for lora adapters in T5 model (#8938)
flake8 Lint #4: Commit 6afd1a9 pushed by gtygo
August 9, 2024 17:07 23s master
make : fix llava obj file race (#8946)
flake8 Lint #3: Commit 272e3bd pushed by gtygo
August 9, 2024 16:11 22s master
sync : ggml
flake8 Lint #2: Commit 4305b57 pushed by gtygo
August 9, 2024 08:44 21s master
llama-bench : add support for getting cpu info on Windows (#8824)
flake8 Lint #1: Commit 506122d pushed by gtygo
August 7, 2024 05:48 13m 4s master