When using RisuAI 136.0.1 as the frontend and Kobold.cpp 1.76 as the backend with a Yi 34B-based model, text generation cuts off after around 400-500 tokens. The terminal shows the error message: "Token streaming was interrupted or aborted! [Errno 32] Broken pipe". Streaming is enabled in Risu, max response tokens is set to 4096, and the selected API is OpenAI-compatible. Kobold.cpp is started with the command line "./koboldcpp-linux-x64-nocuda --usevulkan --gpulayers 24 --threads 7 --contextsize 8192".
I filed a bug report on Kobold's GitHub, but the dev thinks Risu is the one closing the connection.
Running up-to-date openSUSE Tumbleweed with an i7-12700KF CPU, 32GB DDR4 RAM, and an AMD RX 6800 card.
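One way to narrow down which side is dropping the connection is to stream from Kobold.cpp's OpenAI-compatible endpoint directly, bypassing Risu. The sketch below is a minimal test client, assuming the default Kobold.cpp port 5001 and the `/v1/completions` path; if the full stream arrives here but not in Risu, that points at the frontend closing the socket.

```python
import json
import urllib.request

# Assumption: Kobold.cpp running locally on its default port with the
# OpenAI-compatible API enabled.
KOBOLD_URL = "http://localhost:5001/v1/completions"

def build_payload(prompt, max_tokens=4096):
    """Mirror the settings from the report: streaming on, 4096 max response tokens."""
    return json.dumps({
        "prompt": prompt,
        "max_tokens": max_tokens,
        "stream": True,
    }).encode("utf-8")

def stream_completion(prompt):
    """Read the server-sent-event stream and count data chunks received."""
    req = urllib.request.Request(
        KOBOLD_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    chunks = 0
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # SSE lines look like b"data: {...}\n"
            line = raw.decode("utf-8").strip()
            if line.startswith("data: ") and line != "data: [DONE]":
                chunks += 1
    return chunks

if __name__ == "__main__":
    # If this runs to completion without a broken pipe, the backend side is fine.
    print("chunks received:", stream_completion("Write a long story."))
```

If the direct stream also dies around the same token count, the problem is on the Kobold.cpp side (or a proxy/timeout between them) rather than in Risu.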
I'm still experiencing this error in RisuAI 138.1.3 + Kobold.cpp 1.77 with a Gemma 2 27B-based model. Text generation cuts off after around 400 tokens with: "Token streaming was interrupted or aborted! [Errno 32] Broken pipe".