A microservice for executing first-order logic commands generated by LLMs.
If you use any of the components in your research, please refer to (and cite) the following paper:
Creating Local World Models using LLMs. M. D. Longar, E. Novak, M. Grobelnik. Slovenian KDD Conference, Ljubljana, 2024.
Before starting the project, make sure these requirements are available:
- python. For running the app. Alternatively, one can use conda (see next point).
- prolog. For executing the logic commands.
- conda. For setting up your research environment and python dependencies.
- git. For versioning your code.
This app was developed and tested using Python 3.10 and Prolog 9.2.5.
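Availability of these tools can also be checked programmatically before setup. The sketch below is an illustration only (it is not part of the service); it assumes SWI-Prolog's binary is named `swipl`:

```python
import shutil
import sys

# Map each required tool to the path it resolves to on this machine.
# `sys.executable` is the running interpreter itself; the rest are
# looked up on PATH. "swipl" is an assumption about the Prolog binary.
REQUIRED = {
    "python": sys.executable,
    "swipl": shutil.which("swipl"),
    "git": shutil.which("git"),
}

def missing_tools(tools=REQUIRED):
    """Return the names of tools that were not found."""
    return [name for name, path in tools.items() if not path]

if __name__ == "__main__":
    missing = missing_tools()
    if missing:
        print("missing requirements:", ", ".join(missing))
    else:
        print("all requirements found")
```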
First, create the virtual environment where the service will store all the modules. Using the `venv` module, run the following commands:

```bash
# create a new virtual environment
python -m venv venv

# activate the environment (UNIX)
. ./venv/bin/activate

# activate the environment (WINDOWS)
./venv/Scripts/activate

# deactivate the environment (UNIX & WINDOWS)
deactivate
```
Install conda, a program for creating python virtual environments. Then run the following commands:

```bash
# create a new virtual environment
conda create --name hai-learning python=3.10 pip

# activate the environment
conda activate hai-learning

# deactivate the environment
deactivate
```
To install the project, run:

```bash
pip install -e .
```
To start the app in development mode, run the following command in the terminal:

```bash
uvicorn app.main:app --port 4000 --reload
```

This will start the app and listen on port 4000.
To see the API documentation, visit either:
| URL | Description |
| --- | --- |
| http://127.0.0.1:4000/docs | Automatic interactive API documentation (Swagger UI) |
| http://127.0.0.1:4000/redoc | Alternative automatic documentation (ReDoc) |
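To confirm from code that the service is up, you can probe one of these endpoints. A minimal sketch using only the standard library (the URL assumes the default port above):

```python
import urllib.request
import urllib.error

def service_is_up(url="http://127.0.0.1:4000/docs", timeout=2.0):
    """Return True if the service responds with HTTP 200 at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # connection refused, timeout, DNS failure, etc.
        return False
```

This returns `False` rather than raising when the service is unreachable, which makes it convenient for polling during startup.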
To dockerize the REST API, run the following commands:

```bash
# build the docker image
docker build -t hai-learning .

# run the docker container
docker run -d --name hai-learning -p 4000:4000 hai-learning
```
To change the port number, modify the last line of the Dockerfile.
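For reference, the last line of such a Dockerfile typically invokes uvicorn; a hedged sketch of what it might look like after moving the container port to 8080 (the actual line in this repository's Dockerfile may differ):

```dockerfile
# hypothetical final line: serve on 8080 instead of 4000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8080"]
```

The `docker run` port mapping then changes accordingly, e.g. `-p 4000:8080` to keep the host side on port 4000.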