docs: add model_name field to ollama completion configuration (#2542)
* docs: add model_name field to ollama completion configuration

* fix document for openai/chat
wsxiaoys authored Jun 28, 2024
1 parent 894a4c6 commit 91b9536
Showing 1 changed file with 3 additions and 2 deletions.
website/docs/administration/model.md: 5 changes (3 additions, 2 deletions)
@@ -33,6 +33,7 @@ For setting up the `ollama` model, apply the configuration below:
 ```toml
 [model.completion.http]
 kind = "ollama/completion"
+model_name = "codellama:7b"
 api_endpoint = "http://localhost:8888"
 prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>" # Example prompt template for CodeLlama model series.
 ```
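A note on the hunk above (not part of this diff): Ollama's default server port is 11434, while the documented example points at 8888, and the newly added `model_name` must match a model that has already been pulled into the local Ollama instance. A minimal variant under those assumptions:

```toml
# Hedged variant of the documented block: assumes a default Ollama install on
# port 11434 and that `codellama:7b` has already been pulled locally.
[model.completion.http]
kind = "ollama/completion"
model_name = "codellama:7b"
api_endpoint = "http://localhost:11434"
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>" # CodeLlama FIM template, as in the doc example.
```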
@@ -62,9 +63,9 @@ For `local` configuration, use:
 model_id = "StarCoder2-3B"
 ```
 
-#### http
+#### openai/chat
 
-For `HTTP` configuration, the settings are as follows:
+To configure Tabby's chat functionality with an OpenAI-compatible chat model (`/v1/chat/completions`), apply the settings below. This example uses the API platform of DeepSeek. Similar configurations can be applied for other LLM vendors such as Mistral, OpenAI, etc.
 
 ```toml
 [model.chat.http]
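The hunk above ends inside the `toml` block because the remaining lines of the chat configuration were unchanged by this commit. For orientation only, here is a hedged sketch of what a complete OpenAI-compatible chat block of this shape might look like; the `model_name`, `api_endpoint`, and `api_key` values are assumptions modeled on DeepSeek's OpenAI-compatible API, not lines taken from this diff:

```toml
# Illustrative sketch only; the values below are assumptions, not part of this commit.
[model.chat.http]
kind = "openai/chat"                          # matches the new "openai/chat" section heading above
model_name = "deepseek-chat"                  # assumed DeepSeek chat model identifier
api_endpoint = "https://api.deepseek.com/v1"  # assumed OpenAI-compatible endpoint for DeepSeek's API platform
api_key = "your-api-key"                      # placeholder; supply a real key from the vendor
```

Swapping in another vendor such as Mistral or OpenAI would amount to changing `model_name`, `api_endpoint`, and `api_key` to that vendor's OpenAI-compatible values.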
