Docker
Running Insights Explorer with Docker lets you easily spin up an instance without dealing with dependencies or building the project. It isolates the application in a container without exposing it to the rest of your system.
This is the recommended way to install and use Insights Explorer. Official Docker images are automatically built and hosted on GitHub.
Make sure Docker is installed and set up on your machine before following these instructions. If you don't already have Docker, follow the official install instructions for Linux, macOS, or Windows here: https://docs.docker.com/install/#supported-platforms.
The following Docker images are available:
| Image | Purpose |
| --- | --- |
| `insights-explorer` | Main service; serves the frontend and GraphQL backend |
| `insights-explorer-convertbot` | Convertbot: provides file format conversions |
| `insights-explorer-slackbot` | Slackbot: connects to Slack to provide unfurling |
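For reference, the images can also be pulled directly. The registry path below is an assumption (GitHub Container Registry under the `expediagroup` organization); adjust it and the tag to match the images actually published by the project:

```bash
# Assumed registry path and tag -- verify against the project's published packages
docker pull ghcr.io/expediagroup/insights-explorer:latest
docker pull ghcr.io/expediagroup/insights-explorer-convertbot:latest
docker pull ghcr.io/expediagroup/insights-explorer-slackbot:latest
```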
An example `docker-compose.yml` is included in the project root. It includes Insights Explorer and all dependencies, and is pre-configured to be as easy to run as possible.
The `docker-compose.yml` is intended for dev/test usage and is not recommended for production use as-is. It doesn't provide data redundancy, doesn't use HTTPS, and has hard-coded credentials.
While mostly pre-configured, there are a few environment variables that must be set before running the `docker-compose.yml` (a rough sketch of this configuration follows the list below):
- Provide a GitHub Service Account
- Provide a GitHub Organization
- Provide an OAuth provider; GitHub OAuth is free and easy
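As a rough illustration only, such configuration might look like the snippet below. The variable names here are hypothetical placeholders, not the project's actual settings; check the comments in `docker-compose.yml` for the names it actually expects.

```bash
# Hypothetical variable names for illustration only -- the real names are
# defined by the project (see docker-compose.yml).
export GITHUB_SERVICE_ACCOUNT=my-service-account    # GitHub service account login
export GITHUB_ACCESS_TOKEN=ghp_xxxxxxxxxxxxxxxx     # access token for that account
export GITHUB_DEFAULT_ORG=my-github-org             # GitHub organization for storing Insights
export OAUTH_CLIENT_ID=xxxxxxxxxxxx                 # OAuth client ID (e.g. from a GitHub OAuth app)
export OAUTH_CLIENT_SECRET=xxxxxxxxxxxx             # OAuth client secret
```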
The `docker-compose.yml` provides several services, grouped into different profiles to make it easy to customize.
Launch all services using the `all` profile:
docker compose --profile all up
Containers can be started in detached mode by adding the `-d` flag:
docker compose --profile all up -d
Once the containers have started, you can access the following URLs:
- Insights Explorer: http://localhost:3001
- Kibana: http://localhost:5601
- Minio Console: http://localhost:9001 (user `minio` / password `minio123`)
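To quickly verify that the containers are responding, you can request each of the URLs above from the command line. This is just a plain HTTP check against the published ports; no specific health endpoint is assumed:

```bash
# Print the HTTP status line returned by each service
curl -sI http://localhost:3001 | head -n 1   # Insights Explorer
curl -sI http://localhost:5601 | head -n 1   # Kibana
curl -sI http://localhost:9001 | head -n 1   # Minio Console
```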
All running containers can be stopped with `Ctrl-C` or, when running detached, with:
docker compose --profile all down
This doesn't remove the data volumes (see below).
🎯 Tip: Use the same `--profile` flag when running `docker compose down`, or you may get an error.
Services in the `docker-compose.yml` file are grouped into a few profiles. This makes it easy to run a subset of services without modifying the file (see the sketch after the following list for how profiles are declared):
- `all`: Runs all services
- `iex`: Runs just the Insights Explorer services, without any dependencies
- `infra`: Runs the infrastructure dependencies (PostgreSQL, Elasticsearch, Minio)
- `extras`: Runs optional services (e.g. Kibana)
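For context, Compose assigns services to profiles with the `profiles` key. The fragment below is a simplified sketch to show the idea, with abbreviated service and image names; it is not the project's actual `docker-compose.yml`:

```yaml
# Simplified sketch -- not the actual docker-compose.yml from the project
services:
  insights-explorer:
    image: insights-explorer
    profiles: ["all", "iex"]
  postgres:
    image: postgres
    profiles: ["all", "infra", "db"]
  kibana:
    image: kibana
    profiles: ["all", "extras", "kibana"]
```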
One scenario where this is useful is for local development on IEX itself:
docker compose --profile infra up -d
npm start
Configure the `.env.development.local` file to connect to the running services on `localhost`.
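A minimal sketch of what that file might contain is shown below, assuming the default ports for PostgreSQL, Elasticsearch, and the Minio S3 API; the variable names are placeholders, so match them to the settings the project actually reads:

```bash
# Hypothetical variable names -- align these with the project's configuration
DB_HOST=localhost
DB_PORT=5432
ELASTICSEARCH_NODE=http://localhost:9200
S3_ENDPOINT=http://localhost:9000     # Minio S3 API (the console runs on 9001)
S3_ACCESS_KEY=minio
S3_SECRET_KEY=minio123
```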
Multiple profiles can be run by specifying `--profile` twice:
docker compose --profile infra --profile extras up
In addition, there are single-service profiles: `es`, `db`, `minio`, and `kibana`.
Four persistent volumes are created:
- `iex_db_data`: Contains the PostgreSQL database
- `iex_es_data`: Contains Elasticsearch data
- `iex_kibana_data`: Contains Kibana data
- `iex_minio_data`: Contains the S3-compatible object storage
These volumes persist by default even after the containers are stopped. This maintains state if the containers are restarted later.
Use `docker compose down --volumes` to remove them for a clean start.
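If you want to see which volumes exist (or where their data lives on disk) before removing them, the standard Docker volume commands work; note that the exact names depend on how the Compose file names its volumes:

```bash
# List the Insights Explorer volumes
docker volume ls | grep iex

# Show driver and mountpoint details for a single volume
docker volume inspect iex_db_data
```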