diff --git a/website/docs/llms/ollama.md b/website/docs/llms/ollama.md
index 10a09bd5..3aa4e0df 100644
--- a/website/docs/llms/ollama.md
+++ b/website/docs/llms/ollama.md
@@ -14,4 +14,5 @@ We recommend deploying the LLM with a parameter scale exceeding 13 billion for e
 }
 ```
 NOTE: `llm.api_base` is the URL started in the Ollama LLM server and `llm.model` is the model name of Ollama LLM.
-3. Start TaskWeaver and chat with TaskWeaver.
\ No newline at end of file
+3. Start TaskWeaver and chat with TaskWeaver.
+You can refer to the [Quick Start](../quickstart.md) for more details.
diff --git a/website/docs/llms/openai.mdx b/website/docs/llms/openai.mdx
index 057cdff3..f4722b94 100644
--- a/website/docs/llms/openai.mdx
+++ b/website/docs/llms/openai.mdx
@@ -13,14 +13,50 @@ import TabItem from '@theme/TabItem';
 ```mdx-code-block
+1. Create an account on [OpenAI](https://beta.openai.com/) and get your API key.
+2. Add the following to your `taskweaver_config.json` file:
 ```json
-{"api_key": xxx}
+{
+  "llm.api_type": "openai",
+  "llm.api_base": "https://api.openai.com/v1",
+  "llm.api_key": "YOUR_API_KEY",
+  "llm.model": "gpt-4-1106-preview",
+  "llm.response_format": "json_object"
+}
 ```
+💡`llm.model` is the model name you want to use.
+You can find the list of models [here](https://platform.openai.com/docs/models).
+
+💡For `gpt-4-1106-preview` and `gpt-3.5-turbo-1106`, `llm.response_format` can be set to `json_object`.
+However, for the earlier models, which do not support a JSON response explicitly, `llm.response_format` should be set to `null`.
+
+3. Start TaskWeaver and chat with TaskWeaver.
+You can refer to the [Quick Start](../quickstart.md) for more details.
+
-This is AOAI
+1. Create an account on [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) and get your API key.
+2. Add the following to your `taskweaver_config.json` file:
+```json
+{
+  "llm.api_base": "YOUR_AOAI_ENDPOINT",
+  "llm.api_key": "YOUR_API_KEY",
+  "llm.api_type": "azure",
+  "llm.auth_mode": "api-key",
+  "llm.model": "gpt-4-1106-preview",
+  "llm.response_format": "json_object"
+}
+```
+💡`llm.model` is the model name you want to use.
+You can find the list of models [here](https://platform.openai.com/docs/models).
+
+💡For `gpt-4-1106-preview` and `gpt-3.5-turbo-1106`, `llm.response_format` can be set to `json_object`.
+However, for the earlier models, which do not support a JSON response explicitly, `llm.response_format` should be set to `null`.
+
+3. Start TaskWeaver and chat with TaskWeaver.
+You can refer to the [Quick Start](../quickstart.md) for more details.
 ```
\ No newline at end of file
diff --git a/website/docs/llms/qwen.md b/website/docs/llms/qwen.md
index 0436f2bc..f98032a5 100644
--- a/website/docs/llms/qwen.md
+++ b/website/docs/llms/qwen.md
@@ -15,4 +15,5 @@ NOTE: `llm.model` is the model name of QWen LLM API.
 You can find the model name in the [QWen LLM model list](https://help.aliyun.com/zh/dashscope/developer-reference/model-square/?spm=a2c4g.11186623.0.0.35a36ffdt97ljI).
-4. Start TaskWeaver and chat with TaskWeaver.
\ No newline at end of file
+4. Start TaskWeaver and chat with TaskWeaver.
+You can refer to the [Quick Start](../quickstart.md) for more details.
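Review note: the added docs tell readers to set `llm.response_format` to `null` for pre-1106 models but only show the `json_object` variant. For reference, a sketch of what that earlier-model configuration would look like, per the note above (`gpt-4` here is just a stand-in for any model without explicit JSON-mode support; the other values mirror the example in the patch):

```json
{
  "llm.api_type": "openai",
  "llm.api_base": "https://api.openai.com/v1",
  "llm.api_key": "YOUR_API_KEY",
  "llm.model": "gpt-4",
  "llm.response_format": null
}
```

Note that `null` is the JSON literal, not the string `"null"`.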