
Error when using the agent feature: KeyError: <xinference.core.scheduler.InferenceRequest object at 0x7fbd5d699ae0> #5100

Open
lizeyu3344 opened this issue Nov 22, 2024 · 1 comment
Labels
bug Something isn't working


@lizeyu3344

Problem Description
An error occurs when using the agent feature: KeyError: <xinference.core.scheduler.InferenceRequest object at 0x7fbd5d699ae0>

Steps to Reproduce

  1. Run chatchat start -a
  2. Click "Enable agent" to show the agent, and select tools
  3. Ask: 37+48=?
  4. Problem occurs
    The question cannot be answered, and an error is raised

Expected Result
A normal streaming response, e.g.:
data: {"id": "chatea86a400-5e5e-4213-bdd9-801f4bcd3e68", "object": "chat.completion.chunk", "model": "qwen2.5-instruct", "created": 1732265043, "status": 1, "message_type": 1, "message_id": null, "is_ref": false, "choices": [{"delta": {"content": "", "tool_calls": []}, "role": "assistant"}]}
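For reference, the expected chunk above is an OpenAI-style streaming delta delivered over Server-Sent Events. A minimal sketch of decoding such a line (assuming the standard "data: " SSE prefix; not Langchain-Chatchat's actual client code):

```python
import json

# The expected streaming line from the issue, verbatim.
line = (
    'data: {"id": "chatea86a400-5e5e-4213-bdd9-801f4bcd3e68", '
    '"object": "chat.completion.chunk", "model": "qwen2.5-instruct", '
    '"created": 1732265043, "status": 1, "message_type": 1, '
    '"message_id": null, "is_ref": false, '
    '"choices": [{"delta": {"content": "", "tool_calls": []}, '
    '"role": "assistant"}]}'
)

# Strip the SSE "data: " prefix and decode the JSON payload.
chunk = json.loads(line[len("data: "):])

print(chunk["model"])                           # qwen2.5-instruct
print(chunk["choices"][0]["delta"]["content"])  # empty string in this chunk
```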

Actual Result
2024-11-22 16:44:05,099 httpx 65430 INFO HTTP Request: POST http://10.132.66.183:9997/v1/chat/completions "HTTP/1.1 500 Internal Server Error"
2024-11-22 16:44:05,100 openai._base_client 65430 INFO Retrying request to /chat/completions in 0.491127 seconds
2024-11-22 16:44:07,568 httpx 65430 INFO HTTP Request: POST http://10.132.66.183:9997/v1/chat/completions "HTTP/1.1 500 Internal Server Error"
2024-11-22 16:44:07,568 openai._base_client 65430 INFO Retrying request to /chat/completions in 0.923493 seconds
2024-11-22 16:44:10,474 httpx 65430 INFO HTTP Request: POST http://10.132.66.183:9997/v1/chat/completions "HTTP/1.1 500 Internal Server Error"
2024-11-22 16:44:10.476 | ERROR | chatchat.server.api_server.openai_routes:get_model_client:61 - failed when request to ('qwen2.5-instruct', 'xinference')
INFO: 127.0.0.1:52808 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2024-11-22 16:44:10,481 httpx 65430 INFO HTTP Request: POST http://127.0.0.1:7861/v1/chat/completions "HTTP/1.1 200 OK"
2024-11-22 16:44:10.483 | ERROR | chatchat.server.utils:wrap_done:46 - AttributeError: Caught exception: 'NoneType' object has no attribute 'dict'
Environment Information

  • Langchain-Chatchat version / commit: 0.3.1
  • Deployment method (pypi installation / source deployment / docker deployment): pypi installation
  • Model inference framework (Xinference / Ollama / OpenAI API, etc.): Xinference
  • LLM model (GLM-4-9B / Qwen2-7B-Instruct, etc.): Qwen2.5-7B-Instruct
  • Embedding model (bge-large-zh-v1.5 / m3e-base, etc.): BGE-M3
  • Vector store type (faiss / milvus / pg_vector, etc.): faiss
  • Operating system and version: Linux
  • Python version: 3.10
  • Inference hardware (GPU / CPU / MPS / NPU, etc.): GPU
  • Other relevant environment information:

At the same time, Xinference itself also reports an error:
Traceback (most recent call last):
  File "/data/app/chatchat_3/.conda/envs/chatchat3/xinference/lib/python3.10/site-packages/xinference/model/llm/transformers/utils.py", line 483, in batch_inference_one_step
    _batch_inference_one_step_internal(
  File "/data/app/chatchat_3/.conda/envs/chatchat3/xinference/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data/app/chatchat_3/.conda/envs/chatchat3/xinference/lib/python3.10/site-packages/xinference/model/llm/transformers/utils.py", line 449, in _batch_inference_one_step_internal
    invalid_token_num = decode_round - stop_token_mapping[r]
KeyError: <xinference.core.scheduler.InferenceRequest object at 0x7fbd5d699ae0>
2024-11-22 16:44:05,094 xinference.core.model 11678 ERROR [request ed026bd0-a8ad-11ef-bbb6-525400588705] Leave chat, error: <xinference.core.scheduler.InferenceRequest object at 0x7fbd5d699ae0>, elapsed time: 1 s
Traceback (most recent call last):
  File "/data/app/chatchat_3/.conda/envs/chatchat3/xinference/lib/python3.10/site-packages/xinference/core/utils.py", line 78, in wrapped
    ret = await func(*args, **kwargs)
  File "/data/app/chatchat_3/.conda/envs/chatchat3/xinference/lib/python3.10/site-packages/xinference/core/model.py", line 723, in chat
    return await self.handle_batching_request(
  File "/data/app/chatchat_3/.conda/envs/chatchat3/xinference/lib/python3.10/site-packages/xinference/core/model.py", line 706, in handle_batching_request
    result = await fut
ValueError: <xinference.core.scheduler.InferenceRequest object at 0x7fbd5d699ae0>
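The innermost frame shows the failure pattern: a per-request dict (stop_token_mapping) is indexed with an InferenceRequest object that was never recorded in it, raising KeyError. A minimal sketch of that pattern (hypothetical names; not Xinference's actual implementation), including a defensive variant that treats a missing request as having zero invalid tokens:

```python
class InferenceRequest:
    """Stand-in for xinference.core.scheduler.InferenceRequest.

    Plain object instances hash by identity, so a request only appears
    in the mapping if this exact object was inserted earlier.
    """


def invalid_tokens(stop_token_mapping, r, decode_round):
    # Direct indexing reproduces the reported crash when r is missing:
    #     invalid_token_num = decode_round - stop_token_mapping[r]
    # A defensive variant falls back to the current round, yielding 0:
    return decode_round - stop_token_mapping.get(r, decode_round)


r_known = InferenceRequest()
r_unknown = InferenceRequest()
mapping = {r_known: 3}  # request stopped at round 3

print(invalid_tokens(mapping, r_known, 5))    # 2
print(invalid_tokens(mapping, r_unknown, 5))  # 0 instead of a KeyError
```

Whether the real fix belongs here or in the scheduler bookkeeping that should have registered the request is for the Xinference maintainers to decide; this only illustrates the mechanism behind the traceback.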
This has been troubling me for a long time; I hope for a prompt reply.

@lizeyu3344 lizeyu3344 added the bug Something isn't working label Nov 22, 2024
@lizeyu3344
Author

Are there any solutions? I would really appreciate any help.
