
Installation issues / clarifications [post here if you run into troubles] #17

Open
vegu-ai opened this issue Oct 13, 2023 · 39 comments
Labels
documentation (Improvements or additions to documentation), question (Further information is requested)

Comments

@vegu-ai
Collaborator

vegu-ai commented Oct 13, 2023

General catch all ticket for installation issues in this early stage of development.

@vegu-ai added the documentation and question labels Oct 13, 2023
@vegu-ai vegu-ai pinned this issue Oct 13, 2023
@Nroobz

Nroobz commented Nov 19, 2023

Hiya mate. I'm having trouble getting the API to be applied to all the agents.

@vegu-ai
Collaborator Author

vegu-ai commented Nov 19, 2023

@Nroobz hey - is it just assigning it to some agents and not all? What happens if you click the agent button under the client?
[screenshot]

@Nroobz

Nroobz commented Nov 19, 2023

@final-wombat when I click the agent button, e.g. 'creator', a modal window opens with a drop-down labelled 'client', but there's no data available for the dropdown.

error message below:

2023-11-19 17:29:38 [info ] frontend connected
2023-11-19 17:29:38 [debug ] frontend message action_type=request_app_config
2023-11-19 17:29:38 [info ] request_app_config
2023-11-19 17:30:14 [debug ] frontend message action_type=configure_clients
2023-11-19 17:30:14 [info ] Configuring clients clients=[{'name': 'TextGenWebUI', 'type': 'openai', 'apiUrl': 'http://localhost:5000', 'model_name': '', 'max_token_length': 4096, 'model': 'gpt-4-1106-preview'}]
2023-11-19 17:30:14 [error ] Error connecting to client client_name=TextGenWebUI e=TypeError("ClientBase.init() missing 1 required positional argument: 'api_url'")
server.py :242 2023-11-19 17:30:14,387 connection handler failed
Traceback (most recent call last):
  File "D:\Git\TALEMATE\talemate-main\talemate_env\lib\site-packages\websockets\legacy\server.py", line 240, in handler
    await self.ws_handler(self)
  File "D:\Git\TALEMATE\talemate-main\talemate_env\lib\site-packages\websockets\legacy\server.py", line 1186, in _ws_handler
    return await cast(
  File "D:\Git\TALEMATE\talemate-main\src\talemate\server\api.py", line 115, in websocket_endpoint
    handler.configure_clients(data.get("clients"))
  File "D:\Git\TALEMATE\talemate-main\src\talemate\server\websocket_server.py", line 210, in configure_clients
    self.connect_llm_clients()
  File "D:\Git\TALEMATE\talemate-main\src\talemate\server\websocket_server.py", line 83, in connect_llm_clients
    self.connect_agents()
  File "D:\Git\TALEMATE\talemate-main\src\talemate\server\websocket_server.py", line 99, in connect_agents
    client = list(self.llm_clients.values())[0]["client"]
KeyError: 'client'
server.py :268 2023-11-19 17:30:14,388 connection closed
server.py :646 2023-11-19 17:30:14,735 connection open
2023-11-19 17:30:14 [info ] frontend connected
2023-11-19 17:30:14 [debug ] frontend message action_type=request_app_config
2023-11-19 17:30:14 [info ] request_app_config

@vegu-ai
Collaborator Author

vegu-ai commented Nov 19, 2023

2023-11-19 17:30:14 [info ] Configuring clients clients=[{'name': 'TextGenWebUI', 'type': 'openai', 'apiUrl': 'http://localhost:5000', 'model_name': '', 'max_token_length': 4096, 'model': 'gpt-4-1106-preview'}]

@Nroobz can you open the client config and change the type from openai to textgenwebui - looks like the types got mixed up and it's failing to save the client because of it.

Edit: or vice versa, i guess, if you're trying to run the openai client. Either way it seems the client config ended up in a bugged state.

Once the client is set up correctly it should auto-assign itself to all agents if it's the only client. There is also a button in the client row that will assign it to all agents on click:

[screenshot]

Edit: Not sure how it ended up with type openai when all the other parameters are from the textgenwebui client config - still trying to reproduce.
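One hedged reading of the very first error in the log (it complains about a missing api_url while the saved config key is apiUrl): a camelCase key doesn't match a snake_case keyword argument. The class and signature below are hypothetical stand-ins, not talemate source:

```python
# Hypothetical sketch of the failure mode in the log above:
# the saved config key is 'apiUrl', the constructor wants 'api_url'.
class ClientBase:
    def __init__(self, api_url, **kwargs):
        self.api_url = api_url

saved = {"name": "TextGenWebUI", "apiUrl": "http://localhost:5000"}

try:
    ClientBase(**saved)   # 'apiUrl' != 'api_url', so the argument is missing
except TypeError as exc:
    print(exc)            # ... missing 1 required positional argument: 'api_url'
```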

@vegu-ai
Collaborator Author

vegu-ai commented Nov 19, 2023

Editing clients in general seems sorta buggy atm, especially if you change the type on an existing client - created a bug ticket and will work on that.

I can't even remove the edited client now.

I had to shut down the backend and frontend and edit the config.yaml file to fix it. Let me know if you want to try that / need help with that.

@vegu-ai
Collaborator Author

vegu-ai commented Nov 19, 2023

Found all sorts of issues looking into this, version 0.13.2 should fix it - let me know if not.

There was also a problem with openai clients specifically that matched what you were seeing, so hopefully the update will work for you.

@Nroobz

Nroobz commented Nov 19, 2023

ok i'll get back to you asap with feedback

@Antrisa

Antrisa commented Dec 9, 2023

problem starting backend
\Desktop\Ai chat thing\talemate-0.16.0\talemate-0.16.0\src\talemate\server\run.py", line 6, in
import structlog
ModuleNotFoundError: No module named 'structlog'

@vegu-ai-tools
Contributor

vegu-ai-tools commented Dec 9, 2023

@Antrisa Odd - try running install.bat again and see if it reports any errors. Since it can't find structlog i am expecting that the install script failed somewhere.

@Antrisa

Antrisa commented Dec 11, 2023


Note: This error originates from the build backend, and is likely not a problem with poetry but with safetensors (0.4.1) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "safetensors (==0.4.1)"'.

• Installing scipy (1.11.4): Installing...
• Installing starlette (0.27.0)
• Installing threadpoolctl (3.2.0)
• Installing tokenizers (0.15.0): Failed

ChefBuildError

The installer also says it may be because pip is 23.2, not 23.3,
but the zip I have to update pip doesn't work?

What exactly are all the steps, in detail, a user should do before pressing install.bat?

@vegu-ai-tools
Contributor

vegu-ai-tools commented Dec 11, 2023

@Antrisa ideally you'd just have to install python and nodejs and install.bat does the rest. That's how i am testing on my end anyhow, but it could be i am missing some setup step somewhere, i might try spinning up a vm later to test a completely from scratch setup.

You can activate the talemate venv and manually upgrade pip and setuptools and see if that helps.

open a command window then run

talemate_env\Scripts\activate
python -m pip install pip setuptools -U
python -m poetry install

if it still fails it'd also be helpful for you to run the following

talemate_env\Scripts\activate
python -V
python -m pip freeze

@vegu-ai-tools
Contributor

@Antrisa able to reproduce, looks like it is indeed missing some setup steps, will update once i know more

@vegu-ai-tools
Contributor

vegu-ai-tools commented Dec 11, 2023

Ok so i found two issues - i am not sure the second one applies to your case, but:

  • python3.12 won't work, python3.10 and python3.11 do (downgrading to either and restarting the installation from scratch fixes the issue for me, e.g., confirm the correct python version via python -V then run reinstall.bat inside talemate) - the error during install matches what you saw, so hoping this will fix it on your end as well.
  • you may also need to install https://visualstudio.microsoft.com/visual-cpp-build-tools/ to grab a newer version of microsoft c++ (but it should show you a very vocal message during install in this case telling you Microsoft Visual C++ needs to be 14.0 or higher)
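The supported-version constraint from the first bullet can be sketched as a small helper (hypothetical, not part of the install script):

```python
import sys

def python_supported(version=sys.version_info):
    """Per the note above: Python 3.10 and 3.11 work, 3.12 does not (yet)."""
    major, minor = version[0], version[1]
    return (major, minor) in ((3, 10), (3, 11))
```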

@vegu-ai-tools
Contributor

0.16.1 released with some fixes to the install script for windows installations

@GabrielxScorpio

Hello, I am pretty new to this so this may be a dumb question.
I installed a fresh Python 3.10.11, Node.js 21.6.1, and Talemate v0.18.0. When I started the bat file it gives me this error:

Error [ERR_REQUIRE_ESM]: require() of ES Module C:\Program Files\nodejs\node_modules\npm\node_modules\strip-ansi\index.js from C:\Program Files\nodejs\node_modules\npm\node_modules\wide-align\node_modules\string-width\index.js not supported.
Instead change the require of C:\Program Files\nodejs\node_modules\npm\node_modules\strip-ansi\index.js in C:\Program Files\nodejs\node_modules\npm\node_modules\wide-align\node_modules\string-width\index.js to a dynamic import() which is available in all CommonJS modules.
at Object.<anonymous> (C:\Program Files\nodejs\node_modules\npm\node_modules\wide-align\node_modules\string-width\index.js:2:17) {
code: 'ERR_REQUIRE_ESM'
}

I tried restarting and reinstalling everything but I'm not sure how to fix it.

@vegu-ai-tools
Contributor

vegu-ai-tools commented Jan 30, 2024

Hi @GabrielxScorpio

Personally i had nodejs v19 installed, upgraded to v21 - since i had not tested with it yet - and it seems to work correctly (although it does give me a warning about some package not supporting it).

Can you do the following please:

  • windows key + r to open the run program prompt
  • enter cmd and hit enter
  • type node -v
  • does it actually show v21 installed?
  • if it does, try downgrading to the LTS (v20) version instead
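The version check in the steps above can be sketched as a tiny parser for the output of node -v (a hypothetical helper, just to make the comparison concrete):

```python
import re

def node_major(version_string):
    # parse the output of `node -v`, e.g. "v21.6.1" -> 21
    return int(re.match(r"v(\d+)", version_string).group(1))

# per the thread above: v21 triggered ERR_REQUIRE_ESM, LTS v20 worked
node_major("v21.6.1")   # 21
node_major("v20.11.0")  # 20
```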

@GabrielxScorpio

Thank you @vegu-ai-tools

I downgraded, and it worked!
I have gotten everything up to the point of loading the Infinity Quest and LM Studio. My issue now is getting a response from the AI. I tried multiple models on LM Studio, but after everything is loaded and I put any kind of input, it comes back with
"Unhandled Error: unsupported operand type(s) for -: 'NoneType' and 'int'"

[screenshot: Problem LM Studio Talemate 1]

@vegu-ai-tools
Contributor

vegu-ai-tools commented Jan 30, 2024

@GabrielxScorpio Thanks for confirming that the downgrade fixed the issue, i've updated the readme.

I've seen that error before, but i haven't managed to track it down yet.

Although right now with LMStudio 0.2.12 loaded - using the same model - i can't reproduce it.

In the LMStudio view there should be a server log ticking by, what does it show when you try to generate dialogue?

I'd expect it to look something like this:

[screenshot]

If it shows it generating, how fast is it?

Edit: also, mind copy-pasting the error message that appears in the backend process window for talemate? that'd help a lot tracking this down, whatever it is :)

@vegu-ai-tools
Contributor

vegu-ai-tools commented Jan 31, 2024

@GabrielxScorpio oh i think i see the issue

[screenshot]

It seems like the context size for the client isn't set to anything - can you try setting it either via the slider or via the client settings dialogue?

in the end it should look like this with a number next to ctx

[screenshot]

Not sure how it ended up being unset; i created a bug ticket for that.
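A minimal sketch of how an unset context size produces the exact error reported above; the function and parameter names here are hypothetical, not taken from the talemate source:

```python
def prompt_budget(max_token_length, reserved=512):
    # If the client's context size was never saved it arrives here as None,
    # and "None - 512" raises exactly:
    #   unsupported operand type(s) for -: 'NoneType' and 'int'
    if max_token_length is None:
        max_token_length = 4096   # the workaround above: pin a sane default
    return max_token_length - reserved
```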

@GabrielxScorpio

@vegu-ai-tools
On Talemate, I tried moving the slider and changing the value from the settings, but it just forced it back to nothing, no number.

This is my Lm Studio page.

[screenshot: Full LM Error 1]

This is a closer look at the log. Every couple of seconds, it repeats that.

[screenshot: Lm studio log error 1]

This is what is showing up on the Talemate backend window every time the error appears.

[screenshot: Talemate backend error message]

@vegu-ai-tools
Contributor

vegu-ai-tools commented Jan 31, 2024

@GabrielxScorpio thanks for the followup, the issue is definitely the unset context length - i can reproduce and am working on a fix for an 0.18.1 release, will update here once it's ready.

As a workaround, this seems to fix it until then (sort of, it will still revert back to whatever you set here if you try changing it):

  • closing talemate
  • opening config.yaml in a text editor
  • manually setting max_token_length: 4096 on the client
  • saving
  • starting talemate
clients:
  LMStudio:
    api_url: http://localhost:1234
    name: LMStudio
    type: lmstudio
    max_token_length: 4096

@vegu-ai-tools
Contributor

@GabrielxScorpio fix released in https://github.com/vegu-ai/talemate/releases/tag/0.18.1

Hope this fixes the issue for you as well. Apologies for the rough start - personally i don't run against LM Studio, so testing against it has been somewhat neglected.

@GabrielxScorpio

@vegu-ai-tools The update fixed the error! Thank you.

I found another bug, maybe? I had switched to GPT4 turbo before your fix was released, and it was not letting me continue with dialogue. But when I progress the story it answers back normally.

[screenshot: GPT4turbo error]

@vegu-ai-tools
Contributor

vegu-ai-tools commented Feb 1, 2024

Woah, i'll have to look at that - that's gpt-4 censoring itself. Given that there is nothing in the chat history that is even close to risky, it must be tripping over the system message talemate sends. I test gpt-4 turbo a lot, so that's new if so. Thanks for pointing it out.

@thephilluk

thephilluk commented Feb 6, 2024

Hi there,
unsure if this is an installation error or (more likely) a user error.
I am trying to use Talemate with Chub AI,
but when entering the data the model isn't fetched, and the following error is printed in the server console (API key changed):

The same settings work for Sillytavern, so I know they are good and there's a model accessible.

2024-02-06 23:45:09 [info     ] Configuring clients            clients=[{'name': 'OpenAI Compatible API', 'type': 'openai_compat', 'api_url': 'https://mercury.chub.ai/mythomax/v1', 'model_name': 'mythomax', 'max_token_length': 4096, 'data': {}, 'model': 'mythomax', 'api_key': 'CHK-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'}]
server.py           :242  2024-02-06 23:45:09,609 connection handler failed
Traceback (most recent call last):
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\server.py", line 240, in handler
    await self.ws_handler(self)
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\server.py", line 1186, in _ws_handler
    return await cast(
           ^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\server\api.py", line 119, in websocket_endpoint
    handler.configure_clients(data.get("clients"))
  File "C:\AI\talemate-0.19.0\src\talemate\server\websocket_server.py", line 233, in configure_clients
    self.connect_llm_clients()
  File "C:\AI\talemate-0.19.0\src\talemate\server\websocket_server.py", line 97, in connect_llm_clients
    client = self.llm_clients[client_name]["client"] = instance.get_client(
                                                       ^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\instance.py", line 55, in get_client
    client = cls(name=name, *create_args, **create_kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\client\openai_compat.py", line 33, in __init__
    super().__init__(**kwargs)
  File "C:\AI\talemate-0.19.0\src\talemate\client\base.py", line 95, in __init__
    self.set_client(max_token_length=self.max_token_length)
  File "C:\AI\talemate-0.19.0\src\talemate\client\openai_compat.py", line 41, in set_client
    self.client = AsyncOpenAI(base_url=self.api_url + "/v1", api_key=self.api_key)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\openai\_client.py", line 296, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
server.py           :268  2024-02-06 23:45:09,612 connection closed
server.py           :646  2024-02-06 23:45:09,960 connection open
2024-02-06 23:45:09 [info     ] frontend connected

@vegu-ai-tools
Contributor

vegu-ai-tools commented Feb 6, 2024

@thephilluk the client auto-appends the /v1 path to the url - try removing that (so https://mercury.chub.ai/mythomax)

That's the only thing that jumps out at me.

That said, the OpenAI compat client is fairly fresh and has received very limited testing - personally i've only tested it against the llamacpp openai wrapper, so it's probably still finicky. Let me know if fixing the url doesn't do anything.

Did some tests and found additional issues: tracked #76
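The suggestion above follows directly from the line visible in the traceback, AsyncOpenAI(base_url=self.api_url + "/v1", ...). A minimal sketch of why a URL that already ends in /v1 breaks:

```python
def build_base_url(api_url):
    # Mirrors the traceback line: the client appends /v1 unconditionally,
    # so a user-supplied URL that already ends in /v1 comes out doubled.
    return api_url + "/v1"

build_base_url("https://mercury.chub.ai/mythomax")     # -> .../mythomax/v1 (correct)
build_base_url("https://mercury.chub.ai/mythomax/v1")  # -> .../mythomax/v1/v1 (doubled)
```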

@thephilluk

@vegu-ai-tools tried that, got the following (changed) output:

2024-02-07 18:36:57 [info     ] Configuring clients            clients=[{'name': 'OpenAI Compatible API', 'type': 'openai_compat', 'api_url': 'https://mercury.chub.ai/mythomax', 'model_name': 'mythomax', 'max_token_length': 4096, 'data': {'template_file': 'Mythomax.jinja2', 'has_prompt_template': True, 'prompt_template_example': '### Instruction:\nsysmsg\n\n### Input:\nprompt\n\n### Response:\n{LLM coercion}'}, 'model': 'mythomax', 'api_key': 'CHK-XXXXXXXXXXXXXXXXXXXXXXXXXXXXX'}]
server.py           :242  2024-02-07 18:36:57,095 connection handler failed
Traceback (most recent call last):
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\server.py", line 240, in handler
    await self.ws_handler(self)
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\server.py", line 1186, in _ws_handler
    return await cast(
           ^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\server\api.py", line 119, in websocket_endpoint
    handler.configure_clients(data.get("clients"))
  File "C:\AI\talemate-0.19.0\src\talemate\server\websocket_server.py", line 233, in configure_clients
    self.connect_llm_clients()
  File "C:\AI\talemate-0.19.0\src\talemate\server\websocket_server.py", line 97, in connect_llm_clients
    client = self.llm_clients[client_name]["client"] = instance.get_client(
                                                       ^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\instance.py", line 55, in get_client
    client = cls(name=name, *create_args, **create_kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\client\openai_compat.py", line 33, in __init__
    super().__init__(**kwargs)
  File "C:\AI\talemate-0.19.0\src\talemate\client\base.py", line 95, in __init__
    self.set_client(max_token_length=self.max_token_length)
  File "C:\AI\talemate-0.19.0\src\talemate\client\openai_compat.py", line 41, in set_client
    self.client = AsyncOpenAI(base_url=self.api_url + "/v1", api_key=self.api_key)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\openai\_client.py", line 296, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
server.py           :268  2024-02-07 18:36:57,096 connection closed
server.py           :646  2024-02-07 18:36:57,424 connection open
2024-02-07 18:36:57 [info     ] frontend connected
base_events.py      :1771 2024-02-07 18:36:57,443 Task exception was never retrieved
future: <Task finished name='Task-33' coro=<websocket_endpoint.<locals>.send_messages() done, defined at C:\AI\talemate-0.19.0\src\talemate\server\api.py:29> exception=ConnectionClosedError(None, Close(code=1011, reason=''), None)>
Traceback (most recent call last):
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 1302, in close_connection
    await self.transfer_data_task
  File "C:\Program Files\Python311\Lib\asyncio\futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "C:\Program Files\Python311\Lib\asyncio\tasks.py", line 339, in __wakeup
    future.result()
  File "C:\Program Files\Python311\Lib\asyncio\futures.py", line 198, in result
    raise exc
  File "C:\Program Files\Python311\Lib\asyncio\tasks.py", line 269, in __step
    result = coro.throw(exc)
             ^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 959, in transfer_data
    message = await self.read_message()
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 1029, in read_message
    frame = await self.read_data_frame(max_size=self.max_size)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 1104, in read_data_frame
    frame = await self.read_frame(max_size)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 1161, in read_frame
    frame = await Frame.read(
            ^^^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\framing.py", line 68, in read
    data = await reader(2)
           ^^^^^^^^^^^^^^^
  File "C:\Program Files\Python311\Lib\asyncio\streams.py", line 733, in readexactly
    await self._wait_for_data('readexactly')
  File "C:\Program Files\Python311\Lib\asyncio\streams.py", line 526, in _wait_for_data
    await self._waiter
  File "C:\Program Files\Python311\Lib\asyncio\futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "C:\Program Files\Python311\Lib\asyncio\tasks.py", line 339, in __wakeup
    future.result()
  File "C:\Program Files\Python311\Lib\asyncio\futures.py", line 198, in result
    raise exc
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Program Files\Python311\Lib\asyncio\tasks.py", line 267, in __step
    result = coro.send(None)
             ^^^^^^^^^^^^^^^
  File "C:\AI\talemate-0.19.0\src\talemate\server\api.py", line 37, in send_messages
    await websocket.send(json.dumps(message))
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 635, in send
    await self.ensure_open()
  File "C:\AI\talemate-0.19.0\talemate_env\Lib\site-packages\websockets\legacy\protocol.py", line 935, in ensure_open
    raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: sent 1011 (unexpected error); no close frame received

@vegu-ai-tools
Contributor

vegu-ai-tools commented Feb 7, 2024

@thephilluk i went ahead and checked against their api and eventually got it working - however i think there is definitely something broken / fragile with the client, which i will look into as part of #76. After saving and retrying a couple of times it ended up working (see my client config below) - that said, i never got to see the error you are seeing, so unsure what that's all about still.

Bad news - from the little testing i did with it just now, i am not sure their mythomax instance is on par with what is required to handle the more demanding parts of talemate (specifically world state stuff, which requires a lot of JSON accuracy). Seems like the experience will be suboptimal, alas.

[screenshot]

@thephilluk

@vegu-ai-tools Got it, thank you!
What service do you recommend for people with weaker PCs?

@vegu-ai-tools
Contributor

@thephilluk good question :)

Personally i run local / rented gpu / official openai

I have not had the time to test with any other remote apis so i can't make any direct recommendations, but anything hosting a 7B mistral or upwards should be capable of handling talemate - keeping in mind that, until #76 is fixed, the issues we just ran into with chub.ai during setup may still exist on those apis as well.

Here is a list of models i currently test with (ranging from 7B to 50B) that are all good:

Kunoichi-7B
sparsetral-16x7B
Fimbulvetr-10.7B
dolphin-2.7-mixtral-8x7b
Mixtral-8x7B-instruct

Talemate does come with direct runpod support if you want to try the gpu rental route, but there is some setup involved, instructions here if you're interested: https://github.com/vegu-ai/talemate/blob/main/docs/runpod.md

@maxcurrent420

maxcurrent420 commented Apr 5, 2024

Getting an error from the install script on Linux. It fails to install triton.

source install.sh
Command 'python' not found, did you mean:
command 'python3' from deb python3
command 'python' from deb python-is-python3
bash: talemate_env/bin/activate: No such file or directory

[installs other dependencies]


  • Installing triton (2.2.0): Failed

RuntimeError

Hash for triton (2.2.0) from archive triton-2.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl not found in known hashes (was: sha256:63f9fbd31ba01fab81a80334d963e28da8e2f5d4ba532b71c62246d7ba5ddf12)

at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/poetry/installation/executor.py:808 in _validate_archive_hash
804│
805│         archive_hash = f"{hash_type}:{get_file_hash(archive, hash_type)}"
806│
807│         if archive_hash not in known_hashes:
→ 808│             raise RuntimeError(
809│                 f"Hash for {package} from archive {archive.name} not found in"
810│                 f" known hashes (was: {archive_hash})"
811│             )
812│

Cannot install triton.
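For reference, the failing check in the poetry excerpt above amounts to roughly the following (simplified sketch; poetry hashes a wheel file on disk, bytes are used here for brevity):

```python
import hashlib

def validate_archive_hash(archive_bytes, known_hashes):
    # sha256 the downloaded archive and require it to appear in the
    # lock file's known hashes, as in the poetry excerpt above
    archive_hash = "sha256:" + hashlib.sha256(archive_bytes).hexdigest()
    if archive_hash not in known_hashes:
        raise RuntimeError(
            f"Hash from archive not found in known hashes (was: {archive_hash})"
        )
    return archive_hash
```

A stale cache can hold an archive whose hash no longer matches the lock file, which is why clearing the poetry cache is the usual first remedy.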

I tried removing the lock file, deleting the poetry cache, and re-installing poetry, all to no avail so far.
I tried just running the backend anyway and got the structlog error someone posted above.

Also, thanks for your hard work!

@vegu-ai-tools
Contributor

vegu-ai-tools commented Apr 5, 2024

@maxcurrent420 looks like there are multiple things going on here

Command 'python' not found, did you mean:
command 'python3' from deb python3
command 'python' from deb python-is-python3
bash: talemate_env/bin/activate: No such file or directory

first issue appears to be that it's never actually creating the virtual env.

can you try editing the file, changing python to python3, and then running it again?

i think in your case poetry just rolls a venv on its own anyway, so i'm unsure this would do anything for your error, but it's still worth trying to see what happens once the first error is resolved, since it actually does a fresh poetry install into that venv.

I am not able to reproduce - this does scream poetry cache issue to me, but it sounds like you already went down that path, so not sure.

one thing you could try is switching to the prep-0.23.0 branch and trying to run it through docker.

Edit: did a bunch of edits for clarity

@maxcurrent420

maxcurrent420 commented Apr 7, 2024


I did change it to python3, but what I ended up doing was installing structlog manually, then each other dependency in turn after trying and getting a dependency error. Wasn't too bad (assuming it works ok now - backend is running).

@Pixelycia

Hello. Having trouble running the frontend.

Followed the guide: cloned, created a config file, ran docker compose up - the frontend does not run, failing with the error "sh: 1: vue-cli-service: not found".

Running on a Mac M1, but technically that should not be an issue.

@vegu-ai-tools
Contributor

vegu-ai-tools commented May 18, 2024

Hi @Pixelycia - thanks for the report, some users have reported issues with the current docker setup, i believe this is a fault on our side and i will look at it shortly. Issue is tracked at #114.

@vegu-ai-tools
Contributor

vegu-ai-tools commented May 18, 2024

@Pixelycia please check out the latest release (0.25.4) and see if that fixes the issue for you - you will need to run docker compose build to rebuild

@MiyeonLin

When using automatic1111 with a weak pc, i ran into timeout issues because my pc is slow generating images, so i had to adjust the timeout. Is there a failsafe to keep retrying until we get an image? Also, can you add options to change the sampler index for some models?

@vegu-ai-tools
Contributor


Thanks for the feedback - added two new tickets to track these requests.

I personally use ComfyUI so i haven't looked at A1111 in a while - if A1111 users can list some of the most vital things to expose in the ticket, that'd be helpful - with the caveat that i want to avoid overloading the agent config UX in talemate with options, so i'd prefer to be picky about what to add.

@MiyeonLin

When I get back home I'll add some suggestions. I've been trying out comfyui too and having fun learning the underlying mechanism of adetailer, but I think most people who will be using this would prefer auto1111 due to the gui.


8 participants