Commit
docs: fix p2p commands (#2472)
Also change icons on GPT vision page

Signed-off-by: Ettore Di Giacinto <[email protected]>
mudler authored Jun 3, 2024
1 parent bae2a64 commit 148adeb
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions docs/content/docs/features/distributed_inferencing.md
@@ -20,7 +20,7 @@ This functionality enables LocalAI to distribute inference requests across multi
To start workers for distributing the computational load, run:

```bash
-local-ai llamacpp-worker <listening_address> <listening_port>
+local-ai worker llama-cpp-rpc <listening_address> <listening_port>
```

Alternatively, you can build the RPC server following the llama.cpp [README](https://github.com/ggerganov/llama.cpp/blob/master/examples/rpc/README.md), which is compatible with LocalAI.
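Building the standalone RPC server from llama.cpp might look roughly like the following. This is a hedged sketch, not taken from the commit itself: the CMake flag name and the `rpc-server` binary path/options have changed between llama.cpp versions, so check the linked README for the current invocation.

```shell
# Sketch: build llama.cpp with RPC support enabled (flag name may vary by version).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build && cd build
cmake .. -DGGML_RPC=ON
cmake --build . --config Release

# Start the RPC server on the address and port that LocalAI should connect to
# (host/port flags are assumptions; verify against the llama.cpp RPC README).
bin/rpc-server --host 0.0.0.0 --port 50052
```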
@@ -71,7 +71,7 @@ To reuse the same token later, restart the server with `--p2ptoken` or `P2P_TOKE
2. Start the workers. Copy the `local-ai` binary to other hosts and run as many workers as needed using the token:

```bash
-TOKEN=XXX ./local-ai p2p-llama-cpp-rpc
+TOKEN=XXX ./local-ai worker p2p-llama-cpp-rpc
# 1:06AM INF loading environment variables from file envFile=.env
# 1:06AM INF Setting logging to info
# {"level":"INFO","time":"2024-05-19T01:06:01.794+0200","caller":"config/config.go:288","message":"connmanager disabled\n"}
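Putting the p2p commands from this hunk together, a full deployment might be sketched as below. The worker command is the corrected form from this commit; the `local-ai run` invocation and the `P2P_TOKEN` variable on the server side are assumptions based on the surrounding docs (which mention restarting with `--p2ptoken` or `P2P_TOKEN`), and `XXX` is a placeholder token, not a real value.

```shell
# On each worker host (corrected command from this commit);
# XXX stands in for the token generated by the main server:
TOKEN=XXX ./local-ai worker p2p-llama-cpp-rpc

# On the main host, restart the server with the same token so it can
# discover the workers (flag/env name per the docs; sketch, not verbatim):
P2P_TOKEN=XXX ./local-ai run
```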
2 changes: 1 addition & 1 deletion docs/content/docs/features/gpt-vision.md
@@ -1,7 +1,7 @@

+++
disableToc = false
-title = "🆕 GPT Vision"
+title = "🥽 GPT Vision"
weight = 14
url = "/features/gpt-vision/"
+++
