
Commit a6cba64
correct vllm.md
Andrew Lapp committed Feb 21, 2024
1 parent 0925513 commit a6cba64
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion docs/reference/vllm.md
@@ -12,7 +12,7 @@ You can then start the server with:
 python -m outlines.serve.serve --model="mistralai/Mistral-7B-Instruct-v0.2"
 ```
 
-This will by default start a server at `http://127.0.0.1:8000` (check what the console says, though) with the OPT-125M model. If you want to specify another model (e.g. Mistral-7B-Instruct-v0.2), you can do so with the `--model` parameter.
+This will by default start a server at `http://127.0.0.1:8000` (check what the console says, though). Without the `--model` argument set, the OPT-125M model is used. The `--model` argument allows you to specify any model of your choosing.
 
 ### Alternative Method: Via Docker

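The paragraph corrected by this commit describes the server that `python -m outlines.serve.serve` starts. A minimal sketch of building a request for it, assuming the server exposes a vLLM-style `/generate` endpoint that accepts a JSON body with `prompt` and `schema` fields (the endpoint name and field names are assumptions; check the Outlines docs for the exact request format):

```python
import json

# Hypothetical request payload for the server at http://127.0.0.1:8000.
# The /generate endpoint and the "prompt"/"schema" fields are assumptions
# modeled on vLLM's API server, not confirmed by this commit.
payload = {
    "prompt": "What is the capital of France?",
    "schema": {"type": "string", "maxLength": 50},
}

body = json.dumps(payload)
print(body)

# To actually send it (requires the server from the command above to be
# running locally):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://127.0.0.1:8000/generate",
#       data=body.encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode())
```

The network call is left commented out so the sketch stands on its own without a running server.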
