Commit 71d556b: Update README

anticorrelator committed Jan 25, 2024 (1 parent: 575a3fd)

Showing 1 changed file with 9 additions and 11 deletions: python/examples/llama-index/README.md

This is a [LlamaIndex](https://www.llamaindex.ai/) project bootstrapped with [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama) and instrumented using OpenInference.

## Getting Started with Local Development

First, start up the backend as described in the [backend README](./backend/README.md).
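
If you just want the shape of that step, here is a minimal sketch; it assumes the standard create-llama Python backend (Poetry-managed, FastAPI serving on port 8000) and an OpenAI API key, so defer to the backend README where it differs.

```shell
# Sketch only; assumes a Poetry-managed create-llama backend.
cd ./backend
poetry install                     # install Python dependencies
export OPENAI_API_KEY=<your-key>   # the backend needs an LLM provider key
poetry run python main.py          # serve the API, typically on http://localhost:8000
cd ..
```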

Second, run the frontend development server as described in the [frontend README](./frontend/README.md).
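
Again as a sketch only, assuming the standard create-llama Next.js frontend managed with npm:

```shell
cd ./frontend
npm install   # install JavaScript dependencies
npm run dev   # serve the UI on http://localhost:3000
cd ..
```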

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

## Getting Started with Docker Compose

Generate a LlamaIndex vector store index:

```shell
cd ./backend
python app/engine/generate.py  # build the vector store index used by the backend
cd ..
```
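
This persists the index locally so the backend service started below can load it. If the script fails with missing dependencies, install the backend requirements first (for example, `poetry install` inside `./backend`, as sketched above).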

Ensure that Docker is installed and running, then run `docker compose up` to spin up services for the frontend, backend, and Phoenix. Once the services are up, open [http://localhost:3000](http://localhost:3000) to use the chat interface. When you're finished, run `docker compose down` to spin the services back down.
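
For reference, the full lifecycle looks roughly like this, assuming the Compose file at the project root defines the frontend, backend, and Phoenix services:

```shell
docker compose up      # build and start the frontend, backend, and Phoenix services
# ... use the chat interface at http://localhost:3000 ...
docker compose down    # stop and remove the services when you're finished
```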

Traces can be viewed using the [Phoenix UI](http://localhost:6006).

## Learn More

To learn more about LlamaIndex, take a look at the [LlamaIndex documentation](https://docs.llamaindex.ai).
