Commands and notes for deploying MET to an OpenShift environment.
GitHub Actions is used to build images, but if it becomes necessary to build in OpenShift instead, follow the steps below:
In the tools namespace use the following to create the build configurations:
oc process -f ./web.bc.yml | oc create -f -
oc process -f ./api.bc.yml | oc create -f -
oc process -f ./notify-api.bc.yml | oc create -f -
oc process -f ./cron.bc.yml | oc create -f -
oc process -f ./met-analytics.bc.yml | oc create -f -
oc process -f ./analytics-api.bc.yml | oc create -f -
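The six build configurations can also be created in one loop. The sketch below (the `gen_bc_cmds` helper name is my own) prints each command; review the output, then pipe it to `sh` while logged in to the tools namespace to execute it:

```shell
# Print the oc command for each MET build configuration.
gen_bc_cmds() {
  for bc in web api notify-api cron met-analytics analytics-api; do
    echo "oc process -f ./${bc}.bc.yml | oc create -f -"
  done
}
gen_bc_cmds
```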
Allow the other namespaces to pull images from the tools namespace by creating the following RoleBinding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: 'system:image-puller'
  namespace: e903c2-tools
subjects:
  - kind: ServiceAccount
    name: default
    namespace: e903c2-dev
  - kind: ServiceAccount
    name: default
    namespace: e903c2-test
  - kind: ServiceAccount
    name: default
    namespace: e903c2-prod
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: 'system:image-puller'
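The same grants can be made with the `oc policy` command instead of a YAML RoleBinding. This sketch (the `gen_puller_cmds` helper name is my own; it assumes the `default` service account in each environment namespace is the image puller, as in the RoleBinding above) prints one command per namespace; pipe to `sh` to execute:

```shell
# Print an `oc policy` command granting system:image-puller on the tools
# namespace to the default service account of each environment namespace.
gen_puller_cmds() {
  for ns in e903c2-dev e903c2-test e903c2-prod; do
    echo "oc policy add-role-to-user system:image-puller system:serviceaccount:${ns}:default -n e903c2-tools"
  done
}
gen_puller_cmds
```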
Install an instance of Patroni using the Helm chart:
helm repo add patroni-chart https://bcgov.github.io/nr-patroni-chart
helm install -n <namespace> met-patroni patroni-chart/patroni
If HA is not necessary, create an instance of a PostgreSQL database instead:
oc new-app --template=postgresql-persistent -p POSTGRESQL_DATABASE=app -p DATABASE_SERVICE_NAME=met-postgresql
- Users setup script is located at /tools/postgres/init/00_postgresql-user-setup.sql
- Initial database setup script is located at /tools/postgres/init/01_postgresql-schema-setup.sql
- OpenShift secret YAML is located at ./database-users.secret.yml
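The two init scripts are meant to be applied in numeric order. A sketch (the `gen_init_cmds` helper name is my own; it assumes a port-forward to the database on localhost:5432) that prints the psql commands to run them:

```shell
# Print the psql commands that run the user setup and schema setup scripts in order.
gen_init_cmds() {
  for f in 00_postgresql-user-setup.sql 01_postgresql-schema-setup.sql; do
    echo "psql -h localhost -p 5432 -U postgres -a -f /tools/postgres/init/${f}"
  done
}
gen_init_cmds
```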
Backups are generated daily by the "backup" DeploymentConfig in the test and production namespaces; each backup is a SQL script containing the database structure and data.
To restore a backup, follow these steps:
- Connect to OpenShift using the terminal and set the project (test/prod).
- Transfer the backup file to your local machine using the command below, which copies the folder and its contents from the pod to the local folder:
oc rsync <backup-pod-name>:/backups/daily/<date> <local-folder>
- Extract the backup script using gzip:
gzip -d <file-name>
- Connect to the Patroni database pod using port-forward:
oc port-forward met-patroni-<master_pod> 5432:5432
- Manually create the database (drop it first if necessary):
psql -h localhost -p 5432 -U postgres -c 'create database app;'
- Update the users setup script with the real passwords and run it (new server only):
psql -h localhost -U postgres -p 5432 -a -q -f ./postgresql-user-setup.sql
- Execute the script to restore the database:
psql -h localhost -d app -U postgres -p 5432 -a -q -f <path-to-file>
Note: should the restore fail because roles are not found, the following psql commands can be run from within the database pod to alter the roles:
alter role met WITH LOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOREPLICATION PASSWORD 'met';
alter role analytics WITH LOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOREPLICATION PASSWORD 'analytics';
alter role keycloak WITH LOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOREPLICATION PASSWORD 'keycloak';
alter role redash WITH LOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOREPLICATION PASSWORD 'redash';
alter role dagster WITH LOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOREPLICATION PASSWORD 'dagster';
Once the roles are altered, the restore script can be run again.
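Since each role's password here matches its role name, the five ALTER ROLE statements can be generated in a loop. A sketch (the `gen_alter_roles` helper name is my own) that prints the SQL to run inside the database pod:

```shell
# Print one ALTER ROLE statement per expected role; the password equals the
# role name, matching the statements above.
gen_alter_roles() {
  for role in met analytics keycloak redash dagster; do
    echo "alter role ${role} WITH LOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOREPLICATION PASSWORD '${role}';"
  done
}
gen_alter_roles
```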
Deploy Keycloak:
In each environment namespace (dev, test, prod) use the following:
oc process -f ./keycloak.dc.yml -p ENV=<dev/test/prod> | oc create -f -
To create the initial credentials, use port forwarding to access the application at localhost:8080:
oc port-forward keycloak-<PODNAME> 8080:8080
In the Keycloak app:
- Create a new realm, click "import JSON", and select the file "keycloak-realm-export.json".
- Request a new client configuration in sso-requests (https://bcgov.github.io/sso-requests/).
- Update the identity provider client secret and URL domains.
In each environment namespace (dev, test, prod) use the following commands; the IMAGE_TAG value of each command should be changed to match the environment it is installed to.
Deploy the web application:
oc process -f ./web.dc.yml \
-p ENV=<dev/test/prod> \
-p IMAGE_TAG=<dev/test/prod> \
| oc create -f -
Deploy the api application:
oc process -f ./api.dc.yml \
-p ENV=<dev/test/prod> \
-p IMAGE_TAG=<dev/test/prod> \
-p KC_DOMAIN=met-oidc-test.apps.gold.devops.gov.bc.ca \
-p S3_BUCKET=met-test \
-p SITE_URL=https://met-web-test.apps.gold.devops.gov.bc.ca \
-p MET_ADMIN_CLIENT_SECRET=<SERVICE_ACCOUNT_SECRET> \
-p NOTIFICATIONS_EMAIL_ENDPOINT=https://met-notify-api-test.apps.gold.devops.gov.bc.ca/api/v1/notifications/email \
| oc create -f -
Deploy the notify api application:
oc process -f ./notify-api.dc.yml \
-p ENV=<dev/test/prod> \
-p IMAGE_TAG=<dev/test/prod> \
-p KC_DOMAIN=met-oidc-test.apps.gold.devops.gov.bc.ca \
-p GC_NOTIFY_API_KEY=<GC_NOTIFY_API_KEY> \
| oc create -f -
Deploy the cron job application:
oc process -f ./cron.dc.yml \
-p ENV=<dev/test/prod> \
-p IMAGE_TAG=<dev/test/prod> \
-p KC_DOMAIN=met-oidc-test.apps.gold.devops.gov.bc.ca \
-p SITE_URL=https://met-web-test.apps.gold.devops.gov.bc.ca \
-p MET_ADMIN_CLIENT_SECRET=<SERVICE_ACCOUNT_SECRET> \
-p NOTIFICATIONS_EMAIL_ENDPOINT=https://met-notify-api-test.apps.gold.devops.gov.bc.ca/api/v1/notifications/email \
| oc create -f -
Deploy the analytics api:
oc process -f ./analytics-api.dc.yml \
-p ENV=<dev/test/prod> \
-p IMAGE_TAG=<dev/test/prod> \
| oc create -f -
Deploy the redash analytics helm chart:
cd redash
helm dependency build
helm install met-analytics ./ -f ./values.yaml --set redash.image.tag=test
Allow connections between pods within the namespace (so, for example, API pods can reach the database pods) by creating the network policy below (the metadata block was missing from these notes; the name shown is illustrative):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
  namespace: e903c2-<ENV>
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              environment: test
              name: e903c2
  policyTypes:
    - Ingress
Allow public access to the created routes by creating the network policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-openshift-ingress
  namespace: e903c2-<ENV>
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
  policyTypes:
    - Ingress