Releases: modelscope/modelscope-agent
v0.5.2 release
Features
- add new multi role chat by @lcl6679292 in #445
Bug Fixes
- fix: install default nltk data by @suluyana in #446
- hotfix transformers version 4.41 by @zzhangpurdue in #447
- allow_fork_pass_ci by @zzhangpurdue in #449
Full Changelog: v0.5.1...v0.5.2
v0.5.1 release
Features
- Support GPT-4o and OpenAI multi-modal compatibility by @Zhikaiiii in #437
Bug Fixes
- fix tool import bugs by @zzhangpurdue in #438
- fix modelscope local deployment by @zhangsibo1129 in #433
New Contributors
- @zhangsibo1129 made their first contribution in #433
Full Changelog: v0.5.0...v0.5.1
v0.5.0 release
What's Changed
Features
- Add an Assistant API server with `v1/chat/completion` for tool calling and `v1/assistant/lite` for agent running.
- Add a Tool Manager API server that lets users execute utilities in isolated, secure containers.
- Add a RAG workflow based on llama-index.
- Add automatic stop-word detection for different LLMs' special tokens.
- Support Llama 3.
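The OpenAI-compatible `v1/chat/completion` endpoint above can be exercised with a request shaped like the following sketch. The helper `build_chat_request`, the `model` name, and the exact payload fields are assumptions modeled on the OpenAI chat-completion schema; consult the Assistant API server documentation for the schema it actually accepts.

```python
import json

# Hypothetical helper (not part of modelscope-agent): build an OpenAI-style
# chat-completion payload suitable for POSTing to the Assistant API server's
# v1/chat/completion endpoint.
def build_chat_request(messages, tools=None, model="qwen-max"):
    payload = {"model": model, "messages": messages}
    if tools:
        # Passing tool schemas lets the server respond with tool calls.
        payload["tools"] = tools
    return payload

request = build_chat_request(
    [{"role": "user", "content": "What is the weather in Beijing?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name for illustration
            "description": "Query current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        },
    }],
)
# Serialized body, ready to POST to http://<host>:<port>/v1/chat/completion
body = json.dumps(request)
```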
Full Changelog: v0.4.1...v0.5.0
v0.4.1 release
What's Changed
Features
- 🔥 The Ray-based multi-agent solution is now available in modelscope-agent; see the documentation for details.
- Update the distributed instantiation method for multi-agent so that multiple users can run in the same Ray cluster.
- Fix bugs introduced by the multi-agent changes.
Demos & Apps
- The multi-role chatroom use case now supports multiple users, and the prompts for multi-agent have been updated.
Full Changelog: https://github.com/modelscope/modelscope-agent/commits/v0.4.1
v0.4.0 release
What's Changed
Features
- 🔥 The Ray-based multi-agent solution is now available in modelscope-agent; see the documentation for details.
- Add a simple server API in `apps/agentfabric`, which is also the API running on ModelScope Studio.
- Add tool lazy loading, so that not all tools are loaded at startup.
- Add token counting in the memory module.
- Update the agent loop logic to make results more robust.
Demos & Apps
- Multi-role chatroom and video generation by multiple agents, in apps built on the multi-agent framework.
- Use a local model as the LLM via vLLM in the demos.
- Multi-round conversation example with memory in the demos.
Full Changelog: https://github.com/modelscope/modelscope-agent/commits/v0.4.0