# Setting up a production deployment
- Define a name
- Set up Auth providers
- Set up your DNS names
- Get the required SSL certificates from letsencrypt
- Generate dhparams.pem for nginx
- Setup the docker configuration
- First time setup
- Starting the environment
- Stopping the environment
- Restarting the environment
- Scaling out the service with kubernetes
Example configuration referenced in this document is @ https://github.com/CoEDL/nyingarn-workspace/tree/master/production.
Architecture Schematic:
## Define a name

You need to determine the URL where this app is to be hosted before you do anything else. The config files and documentation will refer to workspace.nyingarn.net - adjust as required. {SERVICE NAME} will be used to refer to this name in the documentation.
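Later sections also use {SERVICE DOMAIN NAME}; based on the DNS section below, that reads as the parent domain the supporting endpoints hang off. A small sketch of how the two relate (the values shown are examples, not requirements):

```shell
# Example values only - substitute your own names.
SERVICE_NAME="workspace.nyingarn.net"   # {SERVICE NAME}: where the app itself is hosted
SERVICE_DOMAIN_NAME="nyingarn.net"      # {SERVICE DOMAIN NAME}: parent domain for the other endpoints

echo "app:          https://$SERVICE_NAME"
echo "tus uploads:  https://tus.$SERVICE_DOMAIN_NAME"
echo "describo:     https://describo.$SERVICE_DOMAIN_NAME"
```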
## Set up Auth providers

### AAF

If you want to use AAF authentication you will need to email the following to AAF support. Be sure to change the placeholders as required.

More information: https://support.aaf.edu.au/support/solutions/articles/19000096640-openid-connect-

* redirect url: https://{SERVICE NAME}/callback-aaf-login
* descriptive name: {DESCRIPTIVE NAME FOR THIS SERVICE - e.g. nyingarn platform}
* org name: {ORGANISATION - the AAF org this service will be associated to}
* purpose: production
* keybase account: {YOUR KEYBASE USERNAME - this is how they pass you secrets}
### Google

Set up an OIDC client via the Google API Console. See https://developers.google.com/identity/protocols/oauth2/openid-connect

- Go to the credentials tab and create a new OAuth 2.0 Client
- Name it as you wish: {SERVICE NAME} makes sense
- Authorised JavaScript Origins: https://{SERVICE NAME}
- Authorised redirect URIs: https://{SERVICE NAME}/callback-google-login
## Set up your DNS names

This guide assumes you are running all of the containers on a single host or in a cluster behind a single floating IP. Adjust accordingly if this is not the case.
Set up DNS records for:
- {SERVICE NAME}
- s3.{SERVICE DOMAIN NAME}
- s3-console.{SERVICE DOMAIN NAME}
- tus.{SERVICE DOMAIN NAME}
- describo.{SERVICE DOMAIN NAME}
## Get the required SSL certificates from letsencrypt

Be sure to replace:

- {your email} with your email address
- {folder} with a path to a folder you can mount into a docker container. You can use the default letsencrypt location, but I find it useful to keep the letsencrypt config near the rest of my docker config, which is usually somewhere in /srv.
> sudo certbot certonly --standalone --agree-tos -m {your email} --config-dir {folder} -n -d {SERVICE NAME}
> sudo certbot certonly --standalone --agree-tos -m {your email} --config-dir {folder} -n -d s3.{SERVICE DOMAIN NAME}
> sudo certbot certonly --standalone --agree-tos -m {your email} --config-dir {folder} -n -d s3-console.{SERVICE DOMAIN NAME}
> sudo certbot certonly --standalone --agree-tos -m {your email} --config-dir {folder} -n -d tus.{SERVICE DOMAIN NAME}
> sudo certbot certonly --standalone --agree-tos -m {your email} --config-dir {folder} -n -d describo.{SERVICE DOMAIN NAME}
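The five commands above differ only in the domain, so they can be generated from a list. A sketch (with example values) that only echoes the commands so you can review them before running anything:

```shell
# Prints the certbot command for each required name.
# Example values - replace EMAIL, CONFIG_DIR and the two name variables with your own.
EMAIL="you@example.org"
CONFIG_DIR="/srv/letsencrypt"
SERVICE_NAME="workspace.nyingarn.net"
SERVICE_DOMAIN_NAME="nyingarn.net"

certbot_commands() {
    for host in "$SERVICE_NAME" \
                "s3.$SERVICE_DOMAIN_NAME" \
                "s3-console.$SERVICE_DOMAIN_NAME" \
                "tus.$SERVICE_DOMAIN_NAME" \
                "describo.$SERVICE_DOMAIN_NAME"; do
        echo "sudo certbot certonly --standalone --agree-tos -m $EMAIL --config-dir $CONFIG_DIR -n -d $host"
    done
}

certbot_commands        # review the output, then pipe it to `sh` to execute
```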
## Generate dhparams.pem for nginx

Generate Diffie-Hellman parameters for nginx in the docker folder with the rest of the configuration.
> openssl dhparam -out dhparam.pem 2048
## Setup the docker configuration

- Copy the production folder to your server
- Adjust the endpoint names in the *nginx* files to match the names you defined above
- Set up the required variables: create the .env file and update the service configuration files

Copy env to .env and edit the variables. Whilst you're at it you can edit the configuration files described in the next section.
Describo specific configuration is in configuration-describo.json
Workspace specific configuration is in configuration-nyingarn.json
In order to run the environment, some configuration is needed as environment variables in the docker compose files. This configuration is kept in the file .env in this directory. Docker compose automatically looks for this file when starting up services and merges in its content; in particular, the database credentials are defined there. So some configuration lives in .env and the rest in the service specific configuration files identified above.

(Think of it like this: the describo and workspace api applications read a JSON configuration file at startup to configure themselves, while other services, like the tus upload endpoint, are third party services configured via environment variables. Hence some configuration is in .env and some is in the service specific configuration files.)
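As an illustration only, the .env file looks something like the following; the variable names beyond those mentioned in this document are assumptions, so treat the env file shipped in the production folder as authoritative:

```
# Sketch of a .env - placeholder values, and only the variables this document
# mentions; the real env file in the production folder is authoritative.
POSTGRES_USER=root
POSTGRES_PASSWORD={choose a strong password}

# minio service account credentials, created during first time setup (below)
AWS_ACCESS_KEY_ID={access key}
AWS_SECRET_ACCESS_KEY={secret key}
```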
## First time setup

You'll notice that there are 3 docker compose files. The file docker-compose-01-base-services.yml starts up the database and minio containers. When setting up the environment for the first time, run this compose file manually to start those services, then do the following:
Exec into the db container and connect to the postgres service to set up the required databases.

> docker exec -it ${container id} bash
> psql postgres
$ create database nyingarn;
$ create database describo;

The postgres user is root and the password will be whatever you set in the .env file.
In order for tus to talk to minio you need to create service accounts. To do this:

- Log in to minio @ http://{fqdn}:10001
- Create a Service Account on the Service Accounts tab
- Note the Access Key and Secret Key
- The docker compose file passes these to the tus container as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY respectively. Set them in the .env file.
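For orientation, the wiring looks roughly like the compose fragment below. This is a sketch, not the actual file, and the service name `tus` is inferred from this document - check the compose file in the production folder for the real definition:

```yaml
# Sketch only - the real compose file is authoritative.
services:
  tus:
    environment:
      # docker compose substitutes these from the .env file in this directory
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
```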
## Starting the environment

> ./workspace-init.sh start

This starts the base services, describo and then the workspace. The workspace provides the edge nginx server with all of the domain proxy endpoints.
## Stopping the environment

> ./workspace-init.sh stop
## Restarting the environment

> ./workspace-init.sh restart
## Scaling out the service with kubernetes

Whilst not actually tested, it should be possible to scale out each of the pieces independently. Just pull the setup apart into separate compose sets.