
Releases: YellowRoseCx/koboldcpp-rocm

KoboldCPP-v1.67.yr0-ROCm

05 Jun 18:05
4aa091e

Ok, so there are 4 different EXE builds here. The one named "koboldcpp_rocm.exe" has been built for RX 6000 and RX 7000 series GPUs.
The other 3 have been built for the following GPU targets: "gfx803;gfx900;gfx906;gfx1010;gfx1011;gfx1012;gfx1030;gfx1031;gfx1032;gfx1100;gfx1101;gfx1102"

Those 3 have been built in slightly different ways because I do not yet know which offers the best performance. After some testing, if everything works out okay and it improves koboldcpp-rocm, I'll consolidate back to 1 or 2 exe files.

koboldcpp_rocm.exe: built using the "Tensile Libraries"/GPU code provided with AMD ROCm 5.7.1.

koboldcpp_rocm4allV1.exe: built with the ROCm-4-All-5.7.1 Tensile Libraries copied into the AMD ROCm folder alongside the stock GPU code before compiling.

koboldcpp_rocm4allV2.exe: compiled against the stock AMD ROCm 5.7.1 "Tensile Libraries"/GPU code, but with only the ROCm-4-All-5.7.1 Tensile Libraries bundled while generating the .exe.

koboldcpp_rocm4allV3.exe: built after deleting the entire stock AMD ROCm 5.7.1 GPU code folder and replacing it with only the ROCm-4-All-5.7.1 Tensile Library files before compiling.

My gut says koboldcpp_rocm4allV3.exe will probably perform best of the 3 versions. If you have an RX 6000 or RX 7000 series GPU, I would compare koboldcpp_rocm.exe and koboldcpp_rocm4allV3.exe; there might be a noticeable speed difference.

koboldcpp_rocm4allV1.exe and koboldcpp_rocm4allV2.exe may change generation and processing performance, but I would stick with the original and V3 files as the first ones to try.

Sorry for the whole mess of different .EXEs but hopefully it brings improvement to KoboldCpp-ROCm for Windows!

ROCm-4-All-5.7.1 Tensile Libraries were obtained from https://github.com/brknsoul/ROCmLibs
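
If you want to compare the builds yourself, the sketch below is one rough way to time generation speed against a running instance. It assumes a KoboldCpp-ROCm instance is already listening on the default http://localhost:5001 and uses the standard /api/v1/generate endpoint; the prompt, token count, and tokens-per-second math are illustrative placeholders, not a rigorous benchmark.

```python
import time
import requests

# Rough throughput check for comparing the different EXE builds.
# Assumes a KoboldCpp-ROCm instance is already running on the default port.
API = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "Write a short story about a dragon who learns to paint.",
    "max_length": 200,   # tokens to generate
    "temperature": 0.7,
}

start = time.time()
resp = requests.post(API, json=payload, timeout=600)
elapsed = time.time() - start
resp.raise_for_status()

text = resp.json()["results"][0]["text"]
# Approximate tokens/sec, assuming the full max_length was generated.
print(f"~{payload['max_length'] / elapsed:.1f} tok/s over {elapsed:.1f}s")
```

Run the same script against each EXE (one at a time, with the same model and settings) and compare the numbers.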


The full Changelog for this version can be read at https://github.com/LostRuins/koboldcpp/releases/tag/v1.67
The biggest changes are the integration of Whisper.cpp into KoboldCpp and quantized KV cache support.

KoboldCPP-v1.66.1.yr1-ROCm

25 May 22:19

Windows-ROCm users: this build should hopefully fix any errors you were receiving over the past few updates.

KoboldCPP-v1.66.1.yr0-ROCm

25 May 02:41

https://github.com/LostRuins/koboldcpp/releases/tag/v1.66

FlashAttention is now on by default on Windows because it supposedly prevents the "access violation reading" error. I'm not sure whether there are performance drawbacks; if so, you can turn it off in the Hardware tab of the GUI.

Full Changelog: v1.65.yr0-ROCm...v1.66.1.yr0-ROCm

KoboldCPP-v1.65.yr0-ROCm

16 May 18:54

koboldcpp-1.65

  • NEW: Added a new standalone UI for Image Generation, thanks to @ayunami2000 for porting StableUI (original by @aqualxx) to KoboldCpp! Now you have a powerful dedicated A1111 compatible GUI for generating images locally, with a similar look and feel to Automatic1111. And it runs in your browser, launching straight from KoboldCpp, simply load a Stable Diffusion model and visit http://localhost:5001/sdui/
  • Added a new API field bypass_eos to skip EOS tokens while still allowing them to be generated (see the request sketch after this list).
  • Hopefully fixed tk window resizing issues
  • Increased interrogate mode token amount by 30%, and increased default chat completions token amount by 250%
  • Merged improvements and fixes from upstream
  • Updated Kobold Lite:
    • Added option to insert Instruct System Prompt
    • Added option to bypass (skip) EOS
    • Added toggle to return special tokens
    • Added Chat Names insertion for instruct mode
    • Added button to launch StableUI
    • Various minor fixes; added support for importing cards from CharacterHub URLs.
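
As a rough illustration of the new bypass_eos field mentioned above, a generate request can include it alongside the usual sampler settings. This is a minimal sketch assuming a local instance on the default port; the prompt and sampler values are placeholders.

```python
import requests

# Minimal sketch: pass the new bypass_eos field in a standard generate request.
# Other fields shown are placeholder sampler settings, not recommendations.
resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={
        "prompt": "Continue the story:",
        "max_length": 120,
        "temperature": 0.8,
        "bypass_eos": True,  # skip EOS tokens while still allowing them to be generated
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```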

Important Deprecation Notice:

The flags --smartcontext, --hordeconfig and --sdconfig are being deprecated.

--smartcontext is no longer as useful nowadays with context shifting, and just adds clutter and confusion. With its removal, if contextshift is enabled, smartcontext will be used as a fallback if contextshift is unavailable, such as with old models. --noshift can still be used to turn both behaviors off.

--hordeconfig and --sdconfig are being replaced. As the number of configurations for these arguments grows, the order of these positional arguments confuses people and makes it very difficult to add new flags and toggles, since a misplaced new parameter breaks existing parameters. Additionally, positional arguments prevented me from properly validating each input for data type and range.

As this is a large change, these deprecated flags will remain functional for now. However, you are strongly advised to switch over to the new replacement flags below:

Replacement Flags:

--hordemodelname  Sets your AI Horde display model name.
--hordeworkername Sets your AI Horde worker name.
--hordekey        Sets your AI Horde API key.
--hordemaxctx     Sets the maximum context length your worker will accept.
--hordegenlen     Sets the maximum number of tokens your worker will generate.

--sdmodel     Specify a stable diffusion model to enable image generation.
--sdthreads   Use a different number of threads for image generation if specified. 
--sdquant     If specified, loads the model quantized to save memory.
--sdclamped   If specified, limit generation steps and resolution settings for shared use.
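
As an illustration of how these named flags replace the old positional --hordeconfig / --sdconfig arguments, a launch could look like the sketch below. The model paths, worker names, and values are placeholders, and python koboldcpp.py --help remains the authoritative reference.

```python
import subprocess

# Hypothetical launch using the new replacement flags; paths and values are placeholders.
# Because each flag is named, the order no longer matters, unlike the old positional
# --hordeconfig / --sdconfig arguments.
subprocess.run([
    "python", "koboldcpp.py",
    "--model", "models/llama-3-8b-instruct.Q4_K_M.gguf",
    "--hordemodelname", "my-llama3-worker",
    "--hordeworkername", "my-worker",
    "--hordemaxctx", "4096",
    "--hordegenlen", "256",
    "--sdmodel", "models/sd15-example.safetensors",
    "--sdquant",
    "--sdclamped",
])
```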

To use on Windows, download and run the koboldcpp_rocm.exe, which is a one-file pyinstaller, OR download koboldcpp_rocm_files.zip and run python koboldcpp.py (additional Python pip modules might need to be installed, such as customtkinter and tk or python-tk).
To use on Linux, clone the repo and build with make LLAMA_HIPBLAS=1 -j4 (-j4 can be adjusted to your number of CPU threads for faster build times)

For a full Linux build, make sure you have the OpenBLAS and CLBlast packages installed:
For Arch Linux: Install cblas, openblas, and clblast.
For Debian: Install libclblast-dev and libopenblas-dev.
then run make LLAMA_HIPBLAS=1 LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1 -j4

If you're using NVIDIA, you can try koboldcpp.exe from LostRuins' upstream repo.
If you don't need CUDA, you can use koboldcpp_nocuda.exe, which is much smaller, also from LostRuins' repo.

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001
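
As a quick sanity check that the server is reachable before pointing a client at it, something along these lines should work against the default port. The /api/v1/model endpoint is part of the standard KoboldAI-compatible API, but treat the exact response shape here as an assumption.

```python
import requests

# Quick check that a KoboldCpp instance is reachable on the default port.
r = requests.get("http://localhost:5001/api/v1/model", timeout=10)
r.raise_for_status()
print("Server is up, loaded model:", r.json().get("result"))
```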

For more information, be sure to run the program from command line with the --help flag.

KoboldCPP-v1.64.1.yr0-ROCm

08 May 19:44
Merge remote-tracking branch 'upstream/concedo'

KoboldCPP-v1.64.yr0-ROCm

02 May 02:37
Merge remote-tracking branch 'upstream/concedo'

KoboldCPP-v1.63.yr1-ROCm

24 Apr 02:19
f123ad3

TURN MMQ OFF

There were some big changes upstream; that's why it's taken a while to update kcpp-rocm and get it working.

This seems to work with MMQ DISABLED. I've also had reports of Llama 3 not working with this version, but Llama 3 8B Instruct DID work for me.

KoboldCPP-v1.61.2.yr1-ROCm

20 Mar 17:25
9c1707d
Pre-release
set pyinstaller version to 6.4.0

KoboldCPP-v1.61.2.yr0-ROCm

15 Mar 12:56

Release notes coming soon

KoboldCPP-v1.60.1.yr0-ROCm

06 Mar 22:39

Upstream Changelog:

KoboldCpp is just a 'Dirty Fork' edition 😩


  • KoboldCpp now natively supports Local Image Generation, thanks to the phenomenal work done by @leejet in stable-diffusion.cpp! It provides an A1111-compatible txt2img endpoint which you can use within the embedded Kobold Lite, or in many other compatible frontends such as SillyTavern (a request sketch follows this list).
    • Just select a compatible SD1.5 or SDXL .safetensors fp16 model to load, either through the GUI launcher or with --sdconfig
    • Enjoy zero install, portable, lightweight and hassle free image generation directly from KoboldCpp, without installing multi-GBs worth of ComfyUi, A1111, Fooocus or others.
    • With just 8GB VRAM GPU, you can run both a 7B q4 GGUF (lowvram) alongside any SD1.5 image model at the same time, as a single instance, fully offloaded. If you run out of VRAM, select Compress Weights (quant) to quantize the image model to take less memory.
    • KoboldCpp allows you to run in text-gen-only, image-gen-only or hybrid modes, simply set the appropriate launcher configs.
    • Known to not work correctly in Vulkan (for now).
  • When running from command line, --contextsize can now be set to any arbitrary number in range instead of locked to fixed values. However, using a non-recommended value may result in incoherent output depending on your settings. The GUI launcher for this remains unchanged.
  • Added new quant types, pulled and merged improvements and fixes from upstream.
  • Fixed some issues loading older GGUFv1 models, they should be working again.
  • Added Cloudflare tunnel support for macOS (via --remotetunnel; however, it probably won't work on M1, only amd64).
  • Updated API docs and Colab for image gen.
  • Updated Kobold Lite:
    • Integrated support for AllTalk TTS
    • Added "Auto Jailbreak" for instruct mode, useful to wrangle stubborn or censored models.
    • Auto enable image gen button if KCPP loads image model
    • Improved Autoscroll and layout, defaults to SSE streaming mode
    • Added option to import and export story via clipboard
    • Added option to set personal notes/comments in story
  • Update v1.60.1: Port fix for CVE-2024-21836 for GGUFv1, enabled LCM sampler, allowed loading gguf SD models, fix SD for metal.
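
Since the image endpoint is A1111-compatible, a txt2img request from outside Kobold Lite can follow the usual Automatic1111 payload shape. This is a minimal sketch that assumes an SD model has been loaded (via the GUI or --sdconfig) and the server is on the default port; parameter values are placeholders, and decoding the base64 "images" field follows the A1111 convention.

```python
import base64
import requests

# Minimal A1111-style txt2img request against KoboldCpp's compatible endpoint.
# Assumes an SD1.5/SDXL model was loaded (GUI or --sdconfig) on the default port.
resp = requests.post(
    "http://localhost:5001/sdapi/v1/txt2img",
    json={
        "prompt": "a watercolor painting of a lighthouse at dusk",
        "negative_prompt": "blurry, low quality",
        "steps": 20,
        "width": 512,
        "height": 512,
    },
    timeout=600,
)
resp.raise_for_status()

# A1111-compatible responses return base64-encoded PNGs in "images".
img_b64 = resp.json()["images"][0]
with open("out.png", "wb") as f:
    f.write(base64.b64decode(img_b64))
```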

To use on Windows, download and run the koboldcpp_rocm.exe, which is a one-file pyinstaller, OR download koboldcpp_rocm_files.zip and run python koboldcpp.py (additional Python pip modules might need to be installed, such as customtkinter and tk or python-tk).
To use on Linux, clone the repo and build with make LLAMA_HIPBLAS=1 -j4 (-j4 can be adjusted to your number of CPU threads for faster build times)

For a full Linux build, make sure you have the OpenBLAS and CLBlast packages installed:
For Arch Linux: Install cblas, openblas, and clblast.
For Debian: Install libclblast-dev and libopenblas-dev.
then run make LLAMA_HIPBLAS=1 LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1 -j4

If you're using NVIDIA, you can try koboldcpp.exe from LostRuins' upstream repo.
If you don't need CUDA, you can use koboldcpp_nocuda.exe, which is much smaller, also from LostRuins' repo.

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001

For more information, be sure to run the program from command line with the --help flag.