Commit

Merge pull request #33 from defenseunicorns/better-readme-instructions
better README instructions and descriptions
justinthelaw authored Dec 13, 2023
2 parents 60afdde + df9476e commit b50424c
Showing 1 changed file with 22 additions and 14 deletions: README.md

## Description

A Python API that exposes LLM backends via FastAPI and gRPC, following the OpenAI API specification.

## Usage

See [instructions](#instructions) to get the API up and running. Then, go to http://localhost:8080/docs for the Swagger documentation on API usage.
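
Once the API and a backend are running, you can also exercise it from the command line. The routes below are assumptions based on the OpenAI-style specification mentioned above, and `my-model` is a placeholder; check the Swagger docs for the exact paths and model names.

```bash
# List available models (assumed OpenAI-style route; verify in the Swagger docs)
curl -s http://localhost:8080/openai/v1/models

# Request a completion ("my-model" is a placeholder for a model defined in config.yaml)
curl -s http://localhost:8080/openai/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "my-model", "prompt": "Hello, LeapfrogAI!"}'
```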

## Instructions

The instructions in this section assume the following:

1. Python 3.11.x is properly installed and configured, including its development tools
2. A `config.yaml` has been created based on `config-example.yaml` (a quick way to do this is shown after this list)
3. You have chosen a LeapfrogAI model backend and have it running. Some examples of existing backends:

- https://github.com/defenseunicorns/leapfrogai-backend-ctransformers
- https://github.com/defenseunicorns/leapfrogai-backend-whisper
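
For assumption 2, one simple approach (a sketch, not the only way) is to copy the provided example file and edit it for whichever backend you started; the actual keys and values are defined by `config-example.yaml` in this repository.

```bash
# Confirm the expected Python toolchain is available (assumption 1)
python3.11 --version

# Create a working config from the provided example (assumption 2),
# then edit it to point at the backend you are running
cp config-example.yaml config.yaml
```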

### Local Development

For cloning a model locally and running the development backend.

```bash
# Setup Python Virtual Environment
make create-venv
make activate-venv
make requirements-dev

# Start Model Backend
make dev
```
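
Once `make dev` is running, a quick sanity check is to hit the Swagger docs endpoint; port 8080 is assumed here based on the Docker instructions below, so adjust it if your dev target serves elsewhere.

```bash
# Expect an HTTP 200 once the API is up and serving its docs page
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/docs
```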

### Docker Container

#### Image Build and Run

For local image building and running.

```bash
# Build the docker image
docker build -t ghcr.io/defenseunicorns/leapfrogai/leapfrogai-api:latest .

# Run the docker container
docker run -p 8080:8080 -v ./config.yaml:/leapfrogai/config.yaml ghcr.io/defenseunicorns/leapfrogai/leapfrogai-api:latest
```
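
If you prefer not to keep a terminal attached, the same image can be run detached and inspected with standard Docker commands; the container name below is arbitrary, and the absolute config path simply mirrors the mount shown above.

```bash
# Run detached with a readable name, then follow the logs
docker run -d --name leapfrogai-api -p 8080:8080 \
  -v "$(pwd)/config.yaml":/leapfrogai/config.yaml \
  ghcr.io/defenseunicorns/leapfrogai/leapfrogai-api:latest
docker logs -f leapfrogai-api
```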

