Installation issues / clarifications [post here if you run into trouble] #17
Comments
Hiya mate. I'm having trouble getting the API applied to all the agents.
@Nroobz hey - is it just assigning it to some agents and not all? What happens if you click the agent button under the client?
@final-wombat when I click the agent button, i.e. 'creator', a modal window opens with a drop-down labelled 'client', but there's no data available for the dropdown. Error message below:

2023-11-19 17:29:38 [info ] frontend connected
@Nroobz can you open the client config and change the type from openai to textgenwebui - looks like the types got mixed up and it's failing to save the client because of it.

Edit: or vice versa i guess, if you're trying to run the openai client. Either way it seems the client config ended up in a bugged state. Once the client is set up correctly it should actually auto-assign itself to all agents if it's the only client. There is also a button in the client row that will assign it to all agents on click.

Edit: Not sure how it ended up with the wrong type.
Editing clients in general seems buggy atm, especially if you change the type on an existing client - created a bug ticket and will work on that. I can't even remove the edited client now; I had to shut down the backend and frontend and edit the config.yaml file to fix it. Let me know if you want to try that / need help with that.
Found all sorts of issues looking into this, version 0.13.2 should fix it - let me know if not. There was also a problem with openai clients specifically that matched what you were seeing, so hopefully the update will work for you.
ok i'll get back to you asap with feedback
problem starting backend
@Antrisa Odd, try to run
Note: This error originates from the build backend, and is likely not a problem with poetry but with safetensors (0.4.1) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "safetensors (==0.4.1)"'.

• Installing scipy (1.11.4): Installing... ChefBuildError

The installer also says this may be because pip is 23.2, not 23.3. What exactly are all the steps, in detail, a user should do before pressing install.bat?
@Antrisa ideally you'd just have to install python and nodejs and install.bat does the rest. That's how i am testing on my end anyhow, but it could be i am missing some setup step somewhere, i might try spinning up a vm later to test a completely from scratch setup. You can activate the talemate venv and manually upgrade pip and setuptools and see if that helps. open a command window then run
if it still fails it'd also be helpful for you to run the following
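The exact commands from that reply didn't survive in this thread, but as a generic illustration of the pip-version mismatch above (the installer expected 23.3 while 23.2 was in use), here is a small version comparison sketch. This is a hypothetical diagnostic, not Talemate code:

```python
# Hypothetical diagnostic, not Talemate code: compare an installed version
# string against a minimum (major, minor) pair.
def older_than(installed: str, minimum: tuple[int, int]) -> bool:
    """Return True if `installed` (e.g. "23.2") is older than `minimum`."""
    major, minor = (int(part) for part in installed.split(".")[:2])
    return (major, minor) < minimum

# The installer warned that pip 23.2 was in use while 23.3 was expected:
print(older_than("23.2", (23, 3)))   # -> True
print(older_than("23.3", (23, 3)))   # -> False
```

Pairing this with `python -m pip --version` inside the activated venv would tell you whether the upgrade advice above applies.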
@Antrisa able to reproduce, looks like it is indeed missing some setup steps, will update once i know more
Ok so i found two issues, i am not sure the second one applies to your case, but
0.16.1 released with some fixes to the install script for Windows installations
Hello, I am pretty new to this so this may be a dumb question.

Error [ERR_REQUIRE_ESM]: require() of ES Module C:\Program Files\nodejs\node_modules\npm\node_modules\strip-ansi\index.js from C:\Program Files\nodejs\node_modules\npm\node_modules\wide-align\node_modules\string-width\index.js not supported.

I tried restarting and reinstalling everything, but I'm not sure how to fix it.
Personally i had nodejs v19 installed, upgraded to v21 - since i had not tested with it yet - and it seems to work correctly (although it does give me a warning about some package not supporting it). Can you do the following please:
Thank you @vegu-ai-tools I downgraded, and it worked!
@GabrielxScorpio Thanks for confirming that the downgrade fixed the issue, i've updated the readme.

I've seen that error before, but i haven't managed to track it down yet. Although right now with LMStudio 0.2.12 loaded - using the same model - i can't reproduce it. In the LMStudio view there should be a server log ticking by; what does it show when you try to generate dialogue? I'd expect it to look something like this:

If it shows it generating, how fast is it?

Edit: also, mind copy-pasting the error message that appears in the backend process window for talemate? That'd help a lot tracking this down, whatever it is :)
@GabrielxScorpio oh i think i see the issue. It seems like the context size for the client isn't set to anything - can you try setting it either via the slider or via the client settings dialogue? In the end it should look like this, with a number next to ctx.

Not sure how it ended up being unset; i created a bug ticket for that.
@vegu-ai-tools This is my LM Studio page. This is a closer look at the log. Every couple of seconds, it repeats that. This is what shows up on the Talemate backend window every time the error appears.
@GabrielxScorpio thanks for the followup, the issue is definitely the unset context length - i can reproduce and am working on a fix for an 0.18.1 release, will update here once it's ready.

As a workaround, this seems to fix it until then (sort of, it will still revert back to whatever you set here if you try changing it):
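For reference, a sketch of what "fall back when the context length is unset" can look like. The key name `max_token_length` and the 4096 default are assumptions for illustration only, not Talemate's actual config schema:

```python
# Hypothetical fallback, not Talemate's actual code: treat a missing or
# invalid context length as a sane default instead of leaving it unset.
DEFAULT_CTX = 4096  # assumed default; the real value may differ

def effective_ctx(client_config: dict) -> int:
    ctx = client_config.get("max_token_length")
    return ctx if isinstance(ctx, int) and ctx > 0 else DEFAULT_CTX

print(effective_ctx({}))                          # -> 4096
print(effective_ctx({"max_token_length": 8192}))  # -> 8192
```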
@GabrielxScorpio fix released in https://github.com/vegu-ai/talemate/releases/tag/0.18.1 - hope this fixes the issue for you as well. Apologies for the rough start; personally i don't run against LM Studio, so testing against it has been somewhat neglected.
@vegu-ai-tools The update fixed the error! Thank you. I found another bug, maybe? I had switched to using GPT4 turbo before your fix was released, and it was not letting me continue with dialogue. But when I progress the story it answers back normally.
Woah, i'll have to look at that - that's gpt-4 censoring itself. Given that there is nothing in the chat history that is even close to risky, it must be tripping over the system message talemate sends. I test gpt-4 turbo a lot, so that's new if so. Thanks for pointing it out.
Hi there. The same settings do work for SillyTavern, so I know that they are good and there's a model accessible.
@thephilluk the client auto-appends the

That's the only thing that jumps out at me. That said, the OpenAI-compat client is fairly fresh and has received very limited testing; personally i've only tested it against the llamacpp openai wrapper, so it's probably still finicky. Let me know if fixing the url doesn't do anything.

Did some tests and found additional issues: tracked #76
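The exact suffix the client appends is cut off above, but the general pitfall is a base URL that already contains the path the client adds. A generic illustration of guarding against that duplication (hypothetical helper, not Talemate's code):

```python
# Hypothetical: append an API path suffix unless the user-supplied base URL
# already ends with it, to avoid ".../v1/v1"-style duplication.
def with_suffix(base_url: str, suffix: str) -> str:
    base = base_url.rstrip("/")
    suffix = suffix.strip("/")
    return base if base.endswith("/" + suffix) else f"{base}/{suffix}"

print(with_suffix("https://api.example.com", "v1"))     # -> https://api.example.com/v1
print(with_suffix("https://api.example.com/v1", "v1"))  # -> https://api.example.com/v1
```

A client that does this kind of normalization tolerates both URL styles; one that blindly appends will break on one of them.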
@vegu-ai-tools tried that, got the following (changed) output:
@thephilluk i went ahead and checked against their api and eventually got it working - however i think there is definitely something broken / fragile with the client, which i will look into as part of #76. After saving and retrying a couple times it ended up working (see my client config below). That said, i never got to see the error you are seeing, so unsure what that's all about still.

Bad news - from the little testing i did with it just now, i am not sure their mythomax instance is on par with what is required to handle the more demanding parts of talemate (specifically world state stuff, which requires a lot of JSON accuracy). Seems like the experience will be suboptimal, alas.
@vegu-ai-tools Got it, thank you!
@thephilluk good question :) Personally i run local / rented gpu / official openai. I have not had the time to test with any other remote apis so can't make any direct recommendations, but anything hosting a 7B mistral or upwards should be capable of handling talemate, keeping in mind that until #76 is fixed, the issues we just ran into with chub.ai for setup may still exist on those apis as well.

Here is a list of models i currently test with (ranging 7B to 50B), that are all good
Talemate does come with direct runpod support if you want to try the gpu rental route, but there is some setup involved, instructions here if you're interested: https://github.com/vegu-ai/talemate/blob/main/docs/runpod.md |
Getting an error on the install script in Linux. It fails to install triton. source install.sh [installs other dependencies]
RuntimeError: Hash for triton (2.2.0) from archive triton-2.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl not found in known hashes (was: sha256:63f9fbd31ba01fab81a80334d963e28da8e2f5d4ba532b71c62246d7ba5ddf12) at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/poetry/installation/executor.py:808 in _validate_archive_hash

Cannot install triton. I tried removing the lock file, deleting the poetry cache, and re-installing poetry, all to no avail so far. Also, thanks for your hard work!
@maxcurrent420 looks like there are multiple things going on here
first issue appears to be it's never actually creating the virtual env. can you try editing the file and changing

i think in your case poetry just rolls a venv on its own anyways, so unsure this would do anything for your error, but still worth trying to see what happens if the first error is resolved, since it actually does a fresh poetry install into that venv.

I am not able to reproduce - this does scream poetry cache issue to me, but it sounds like you already went down that path, so not sure. one thing you could try is switching to the

Edit: did a bunch of edits for clarity
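When poetry reports a hash mismatch like the one above, it can help to hash the downloaded wheel yourself and compare it against both the hash in poetry.lock and the one in the error message. A small sketch (hypothetical helper, with a placeholder wheel path):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the sha256 of a file, e.g. a downloaded wheel, to compare
    against the hash poetry reports in an error like the one above."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage - the path depends on where the wheel was cached:
# sha256_of("triton-2.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl")
```

If the computed hash matches the "was:" value in the error, the download itself is fine and the stale entry lives in the lock file or the poetry cache.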
I did change it to python3, but what I ended up doing is installing structlog manually, followed by each other dependency, after trying and getting a dependency error. Wasn't too bad (assuming it works ok now; the backend is running).
Hello. Having trouble running the frontend. Followed the guide: cloned, created a config file, ran

Running on a Mac M1, but technically that should not be an issue.
Hi @Pixelycia - thanks for the report, some users have reported issues with the current docker setup, i believe this is a fault on our side and i will look at it shortly. Issue is tracked at #114.
@Pixelycia please check out the latest release (0.25.4) and see if that fixes the issue for you - you will need to run
When using automatic1111 with a weak pc, i ran into timeout issues when my pc is slow generating images, so i had to adjust the timeout. Is there a failsafe to keep retrying until we get an image? Also, can you add options to change the sampler index for some models?
Thanks for the feedback, added two new tickets to track this request.

I personally use ComfyUI so i haven't looked at A1111 in a while - if A1111 users can list some of the most vital things to expose in the ticket, that'd be helpful - with the caveat that i want to avoid overloading the agent config UX in talemate with options, so i'd prefer to be picky about what to add.
When I get back home, I'll add some suggestions. Been trying out comfyui too and having fun learning the underlying mechanics of using adetailer, but I think most people who will be using this would prefer auto1111 due to the gui.
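The retry-until-success behaviour requested above is only tracked in a ticket at this point, but as a generic sketch, a growing-timeout retry loop around an image generation call could look like this - `generate` here is a hypothetical stand-in for whatever function actually hits the A1111 API, not Talemate's implementation:

```python
import time

def generate_with_retry(generate, attempts: int = 3, base_timeout: float = 60.0):
    """Call `generate(timeout=...)` with a timeout that grows on each retry.

    `generate` is a placeholder for the real image-generation call; it is
    expected to raise TimeoutError when the backend is too slow.
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return generate(timeout=base_timeout * (attempt + 1))
        except TimeoutError as exc:
            last_exc = exc
            time.sleep(0.1)  # small pause before retrying
    raise last_exc
```

Growing the timeout instead of retrying with the same one means a slow-but-working setup eventually gets enough time to finish, instead of failing three times for the same reason.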
General catch all ticket for installation issues in this early stage of development.