
KoboldCPP-v1.54.yr0-ROCm


koboldcpp-1.54-ROCm

Merged with @LostRuins' latest upstream update

Welcome to the 2024 edition.

  • Added logit_bias support for both the OpenAI and Kobold APIs. Accepts a dictionary of key-value pairs mapping token IDs (int) to the logit bias (float) to apply for each token. The object format is the same as, and compatible with, the official OpenAI implementation, though token IDs are model specific (see the request sketch after this list). (thanks @DebuggingLife46)
  • Updated Lite: added support for custom background images (thanks @Ar57m), plus customizable step count and CFG scale settings for Horde/A1111 image generation.
  • Added mouseover tooltips for all labels in the GUI launcher.
  • Cleaned up and simplified the UI of the quick launch tab in the GUI launcher, some advanced options moved to other tabs.
  • Fixed garbled output in Termux with Q5_K Phi models.
  • Fixed paged memory fallback when pinned memory alloc fails while not using mmap.
  • Attempt to fix on-exit segfault on some Linux systems.
  • Updated KAI United's class.py and added new parameters.
  • Makefile fix for Linux CI build using conda (thanks @henk717)
  • Merged new improvements and fixes from upstream llama.cpp (includes VMM pool support)
  • Included prebuilt binary for no-cuda Linux as well.
  • Various minor fixes.
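
As a rough illustration of the new logit_bias parameter, here is a minimal request sketch against a local instance. It assumes the defaults of a stock install (port 5001 and the OpenAI-compatible /v1/completions route); the token ID 15043 is an arbitrary placeholder, since token IDs are model specific.

```sh
# Minimal sketch: strongly suppress one token during sampling.
# Assumes a local koboldcpp instance on the default port 5001 with the
# OpenAI-compatible endpoint; the token ID is a model-specific placeholder.
curl http://localhost:5001/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Once upon a time",
    "max_tokens": 32,
    "logit_bias": {"15043": -100}
  }'
```

A positive bias makes a token more likely; values near -100 effectively ban it, matching OpenAI's semantics.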

To use on Windows, download and run koboldcpp_rocm.exe, which is a one-file pyinstaller build, OR download koboldcpp_rocm_files.zip and run python koboldcpp.py.
If you're using NVIDIA, you can try koboldcpp.exe from @LostRuins' upstream repo here.
If you don't need CUDA, you can use koboldcpp_nocuda.exe, which is much smaller, also from LostRuins' repo.
To use on Linux, clone the repo and build with make LLAMA_HIPBLAS=1 -j4, as sketched below.
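
For Linux users, here is a minimal end-to-end sketch of the steps above; the repo URL and model path are placeholders, so substitute your own.

```sh
# Sketch of a Linux build and launch; adjust paths to your setup.
git clone https://github.com/YellowRoseCx/koboldcpp-rocm   # assumed repo URL
cd koboldcpp-rocm
make LLAMA_HIPBLAS=1 -j4        # build with HIP/ROCm support, 4 parallel jobs
python koboldcpp.py --model /path/to/model.gguf   # placeholder model path
```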