This is a [LlamaIndex](https://www.llamaindex.ai/) project using [FastAPI](https://fastapi.tiangolo.com/) bootstrapped with `create-llama`.
First, set up the environment:

```shell
poetry install
poetry shell
```
By default, we use the OpenAI LLM (though you can customize it; see `app/api/routers/chat.py`). As a result, you need to specify an `OPENAI_API_KEY` in an `.env` file in this directory.

Example `backend/.env` file:

```
OPENAI_API_KEY=<openai_api_key>
```
Second, run the development server:

```shell
python main.py
```
Then call the API endpoint `/api/chat` to see the result:

```shell
curl --location 'localhost:8000/api/chat' \
--header 'Content-Type: application/json' \
--data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
```
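If you prefer Python over `curl`, the same request can be made with the standard library. This is a minimal sketch that assumes the development server from the step above is running on `localhost:8000` (the helper names here are illustrative, not part of the project):

```python
import json
import urllib.request


def build_chat_payload(content: str, role: str = "user") -> bytes:
    """Build the JSON body /api/chat expects: a list of chat messages."""
    return json.dumps({"messages": [{"role": role, "content": content}]}).encode()


def chat(content: str, url: str = "http://localhost:8000/api/chat") -> str:
    """POST a single user message to the chat endpoint and return the raw response body."""
    request = urllib.request.Request(
        url,
        data=build_chat_payload(content),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode()
```

For example, `chat("Hello")` sends the same request as the `curl` command above.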
You can start editing the API by modifying `app/api/routers/chat.py`. The endpoint auto-updates as you save the file.
Open [http://localhost:8000/docs](http://localhost:8000/docs) with your browser to see the Swagger UI of the API.
The API allows CORS for all origins to simplify development. You can change this behavior by setting the `ENVIRONMENT` environment variable to `prod`:

```shell
ENVIRONMENT=prod uvicorn main:app
```
To learn more about LlamaIndex, take a look at the following resources:

- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex.

You can check out the [LlamaIndex GitHub repository](https://github.com/run-llama/llama_index) - your feedback and contributions are welcome!