Deployment tool for grout.
The tool pulls grout data (tile databases) from the packit API according to its configuration and runs the grout server in docker. Data is bind-mounted into the server container.
A proxy is not included in the tool - the proxy must be configured separately.
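For illustration only, this is roughly the shape of the container the tool runs - not the literal command; the image, container name, port and local data directory all come from the configuration described below:

```sh
# Illustrative only: image reference, names, port and data path are placeholders
docker run -d --name grout -p 5000:5000 \
  -v "$PWD/data:/data" <grout-image>
```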
This project is built with hatch. It is not published to PyPI, so it must be run with hatch from a local source checkout.
Clone this repo, then run `hatch shell` before using the deploy tool as below. (You can exit the hatch shell with `exit`.)
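For example (the repository URL is left as a placeholder):

```sh
git clone <repo-url>   # this repository's URL
cd <repo-directory>
hatch shell
# ... use the deploy tool ...
exit
```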
Usage:

```
grout start [--pull] [--refresh] [<configname>]
grout stop [--delete]
```
Options:

```
--pull     Pull the docker image before starting
--refresh  Refresh all data even if a dataset/level is already downloaded (the source location may have changed)
--delete   Delete all local data when bringing the container down
```
Config files are in the `config` folder - there is currently only one configuration, called "grout", so you would do a first-time start with `grout start --pull grout`.
Once a configuration is set during start, it will be reused by subsequent commands. The last-used configuration is recorded in `config/.last_deploy`.
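A typical session after that first start might look like:

```sh
grout start             # restart, reusing the stored "grout" configuration
grout start --refresh   # restart and re-download data even if already present
grout stop              # stop the server, keeping downloaded data
grout stop --delete     # stop the server and delete all local data
```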
During deployment, you'll be prompted to authenticate with packit via GitHub. After deployment, the server will be available at http://localhost:5000.
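To sanity-check that the server is up (what grout serves at each route depends on the configured datasets, described below):

```sh
# If the container is running, this should return an HTTP response
curl -i http://localhost:5000/
```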
The config file has these sections:
* `docker` - properties of the docker image, the running container name and the port to use for the grout server
* `packit_servers` - to avoid repeating packit server urls for every database location, this section defines packit server names and associates them with urls
* `datasets` - defines a dictionary of datasets by name e.g. "gadm41", and within each dataset, admin levels for which tile databases are available e.g. "admin0". For each level, the packit server name, packit id and download file name are configured. The configured dataset and level keys determine the local file paths, and therefore the grout urls at which the tile data will be available, e.g. level "admin0" in dataset "gadm41" will be downloaded to `/data/gadm41/admin0.mbtiles`.
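As a sketch, a config with these sections might look like the following - the section names are as above, but the individual field names and values are illustrative guesses, not the actual schema:

```yaml
docker:
  image: example.org/grout:latest          # illustrative image reference
  container_name: grout
  port: 5000
packit_servers:
  reside: https://packit.dide.ic.ac.uk/reside/
datasets:
  gadm41:
    admin0:
      packit_server: reside                # a name defined under packit_servers
      packit_id: 20240101-000000-abcdef12  # illustrative packet id
      file: gadm41_admin0.mbtiles          # download file name within the packet
```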
The project uses constellation as a dependency, but only for the config utils it provides - we do not run the docker image as a constellation.
pyorderly is used to authenticate with packit, but not for pulling orderly data - for individual files it's simpler to just access the packit endpoint with the access token received on authentication.
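A minimal sketch of that pattern, assuming a `requests`-style download with a bearer token - the function name and the exact endpoint URL are illustrative, not the tool's actual code:

```python
import requests


def download_packit_file(url: str, token: str, dest: str) -> None:
    """Download a single file from a packit endpoint using an access token.

    `url` is the full packit download endpoint for the file; the exact
    endpoint layout is defined by the packit API, not by this sketch.
    """
    response = requests.get(
        url, headers={"Authorization": f"Bearer {token}"}, stream=True, timeout=60
    )
    response.raise_for_status()
    with open(dest, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
```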
Run tests with `hatch test`.
To run integration tests, which need to authenticate non-interactively with packit, set the `GITHUB_ACCESS_TOKEN` environment variable to a PAT with access to https://packit.dide.ic.ac.uk/reside/.
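For example:

```sh
export GITHUB_ACCESS_TOKEN=<your PAT>
hatch test
```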
Run linting with automatic fixes with `hatch fmt`. To check linting only, with no file changes, use `hatch fmt --check`.