Update README.md
fractalego authored Aug 26, 2023
1 parent ebab532 commit 92f30c4
Showing 1 changed file with 6 additions and 3 deletions: README.md
@@ -47,9 +47,12 @@ Please see the examples in the following chapters.

## LLM side (needs a GPU)

-The second part is a server that runs on a mmachine accessible from the interface side.
-This last machine will need to have a GPU to run the Large Language Model at a convenient speed.
-This part can be run using a docker image by running the script
+The second part is a server that runs on a machine accessible from the interface side.
+The initial configuration is for a local deployment of language models.
+No action is needed to run WAFL if you want to run it as a local instance.
+
+However, a multi-user setup will benefit from a dedicated server.
+In this case, a docker image can be used:

```bash
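# Publishes the model server on port 8080, gives the container access to all GPUs,
# and sets NVIDIA_DISABLE_REQUIRE=1 to relax the NVIDIA container runtime's CUDA version requirement check.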
$ docker run -p8080:8080 --env NVIDIA_DISABLE_REQUIRE=1 --gpus all fractalego/wafl-llm:latest
```
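
Once the container is running, the interface side only needs network access to the published port on the GPU machine. As a small convenience sketch (the `-d` and `--name` flags and the `wafl-llm` container name are illustrative additions, not part of the documented command above), the same image can be started in the background and inspected with standard docker commands:

```bash
# Start the same image detached, under a fixed name, so it can be inspected later.
$ docker run -d --name wafl-llm -p8080:8080 --env NVIDIA_DISABLE_REQUIRE=1 --gpus all fractalego/wafl-llm:latest

# Check that the container is up and follow the model-loading logs.
$ docker ps --filter name=wafl-llm
$ docker logs -f wafl-llm
```

The address of this machine and port 8080 are then what the interface side should be configured to reach.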
