diff --git a/website/docs/administration/model.md b/website/docs/administration/model.md
index 6faea4f552dd..8811c07ee0f5 100644
--- a/website/docs/administration/model.md
+++ b/website/docs/administration/model.md
@@ -33,6 +33,7 @@ For setting up the `ollama` model, apply the configuration below:
 ```toml
 [model.completion.http]
 kind = "ollama/completion"
+model_name = "codellama:7b"
 api_endpoint = "http://localhost:8888"
 prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>"  # Example prompt template for the CodeLlama model series.
 ```
@@ -62,9 +63,9 @@ For `local` configuration, use:
 model_id = "StarCoder2-3B"
 ```
 
-#### http
+#### openai/chat
 
-For `HTTP` configuration, the settings are as follows:
+To configure Tabby's chat functionality with an OpenAI-compatible chat model (`/v1/chat/completions`), apply the settings below. This example uses DeepSeek's API platform; a similar configuration applies to other LLM vendors such as Mistral and OpenAI.
 
 ```toml
 [model.chat.http]