CI/CD tooling for the Lookit project.
We use Google Cloud Platform heavily for Lookit: storing Ember apps as web archives in GCS, keeping user data in a Google-managed Postgres database with Cloud SQL, deploying the application components to Google-managed Kubernetes clusters with GKE, using GCR as a Docker image registry, and so on. We want to automate the management of all these cloud properties and provide developers of the Lookit platform a "One Stop Shop" for deployment needs.
We are using a Kustomize-based workflow and adhering to the suggested file hierarchy. The execution pathway is standard containerized GitOps: the "orchestrator" image is built and pushed to GCR, where it can be fetched by Cloud Build at a later time.
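For orientation, the suggested hierarchy looks something like the sketch below; the overlay names are illustrative assumptions, not confirmed by this repo.

```bash
# Conventional Kustomize layout (illustrative -- actual names may differ):
#
# kubernetes/lookit/
# ├── base/
# │   ├── kustomization.yaml
# │   └── kustomizeconfig.yaml
# └── overlays/
#     ├── staging/
#     │   └── kustomization.yaml
#     └── production/
#         └── kustomization.yaml
#
# Render the fully-resolved manifests for one environment:
kustomize build kubernetes/lookit/overlays/staging
```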
When commits are published to the master branch of this repo, Google Cloud Build creates a Cloud SDK-based image that copies in all the deployment manifests from `kubernetes/lookit`, as well as the `deploy.sh` script located at the root of this project. This image is pushed to Google Container Registry with the tag `latest`.
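Conceptually, that trigger amounts to something like the following; the image path is an assumption for illustration.

```bash
# Hypothetical manual equivalent of the orchestrator-image trigger.
# The gcr.io path is illustrative; the real registry path may differ.
docker build -t gcr.io/mit-lookit/lookit-orchestrator:latest .
docker push gcr.io/mit-lookit/lookit-orchestrator:latest
```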
When commits are published to the develop or master branch of lookit-api, Google Cloud Build will run tests before pushing a valid `lookit-api` image to Google Container Registry. It will then pull the orchestrator image from GCR and run it, leveraging build variable substitution to parameterize the build with GitHub-supplied metadata (namely, the commit SHA, tag, and branch name) in the form of environment variables.
The deploy script will use these environment variables to template the Kustomize manifests (using `envsubst`) and to choose the target cluster.
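A minimal sketch of that templating-and-targeting step, with hypothetical template, cluster, and zone names (none of these are confirmed by this repo):

```bash
#!/usr/bin/env bash
# Sketch only -- assumes COMMIT_SHA, SHORT_SHA, REPO_NAME, BRANCH_NAME,
# and TAG_NAME are already set in the environment. Other names are illustrative.
set -euo pipefail

# Substitute build variables into a manifest template.
envsubst < deployment.yaml.tmpl > deployment.yaml

# Choose the target cluster from the branch name.
case "$BRANCH_NAME" in
  master)  cluster="lookit-production" ;;
  develop) cluster="lookit-staging" ;;
  *) echo "no cluster mapped to branch: $BRANCH_NAME" >&2; exit 1 ;;
esac

gcloud container clusters get-credentials "$cluster" --zone us-east1-b
kubectl apply -f deployment.yaml
```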
In true GitOps spirit, we check our secrets into source control. How is this done securely? By checking in encrypted secrets, with the encryption keys managed separately from the rest of the cloud resources.
Our deploy script invokes gcloud's KMS module to decrypt secrets that are encrypted beforehand and checked in by one person only: the key project owner. The cryptographic keys for lookit-orchestrator are stored in a separate project that is locked down to everyone except the key project owner, ensuring that high-level access to cloud resources in the mit-lookit project does not accidentally grant developers permission to manipulate crypto keys.
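The decryption step presumably resembles the following; the key ring, key, project, and file names are all illustrative assumptions.

```bash
# Hypothetical decryption of a checked-in secret via Cloud KMS.
# Keyring, key, project, and file names are placeholders.
gcloud kms decrypt \
  --project="lookit-keys-project" \
  --location="global" \
  --keyring="lookit-keyring" \
  --key="lookit-secrets-key" \
  --ciphertext-file="secrets/lookit-secrets.yaml.enc" \
  --plaintext-file="secrets/lookit-secrets.yaml"
```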
- The variable replacement system in Kustomize is somewhat convoluted, but the important thing to know is that ConfigMap values must be defined in the base `kustomizeconfig.yaml` as a `varReference` before they are referenced in `kustomization.yaml`.
- `add-lookit-env-vars.yaml` has a fragile patching system right now - because we can't target container elements by name, we have to have injections like this:

  ```yaml
  - op: add
    path: /spec/template/spec/containers/0/env/-
    value:
      name: ENVIRONMENT
      valueFrom:
        configMapKeyRef:
          key: ENVIRONMENT
          name: lookit-configmap
  ```

  Here, the path has a hardcoded zero to indicate the first container in the array. This is obviously not optimal; I'm open to better solutions.
- Why the custom transformer for labels, rather than using `commonLabels` out of the box? In short, we can't apply labels globally the way `commonLabels` does if we are to implement version-specific labels (like `app.kubernetes.io/version`, recommended by the Kubernetes Working Group): `commonLabels` is also applied to selectors, and `kubectl` prevents you from changing the `matchLabels` clause of an existing Deployment or StatefulSet, since the selector must stay stable so that pods can be accurately selected for rolling deployments.
In short: long-term support and underlying philosophy.
Regarding the first point, Kustomize is built into `kubectl` as of 1.14 and is officially supported by the Cloud Native Computing Foundation as such.
Even though Helm is also supported by the CNCF, its goals as a project straddle templating, configuration management, and pseudo-package management for Kubernetes. It's an extremely powerful tool in that regard, but sometimes too much power is a bad thing. With Lookit, deployments to a given cluster should be entirely stable in terms of configuration; the only things that should change are lookit-api's image version and the associated labels. Helm's templating system seems to encourage the conflation of static per-environment configuration with dynamic per-deployment build variables. Kustomize, on the other hand, lends itself neatly to the configuration-as-code discipline of DevOps with per-environment overrides. All that remains is to substitute build variables, which is a simple task accomplished with `envsubst` and simple file templating.
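Under that discipline, and assuming a conventional overlays layout (directory names again illustrative), a deployment reduces to picking the right overlay:

```bash
# Illustrative only -- overlay paths are assumptions, not confirmed here.
kubectl apply -k kubernetes/lookit/overlays/staging     # develop builds
kubectl apply -k kubernetes/lookit/overlays/production  # master builds
```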
- Make sure you have git installed (instructions here).
  - If you're on a Mac and have `brew`, you can install it with `brew install git`.
- Make sure you have the gcloud SDK installed (instructions here). This will allow you to authenticate with Lookit's Google Cloud Platform project.
  - If you're on a Mac and have `brew`, you can install it with `brew install --cask google-cloud-sdk`.
- Make sure you have kubectl.
  - Run `gcloud components install kubectl` if you don't have it already.
- Ask Rico Rodriguez ([email protected]) for access to the MIT Lookit project.
- Clone this project.
- Run `gcloud init` or `gcloud config` per the instructions in the gcloud kubectl setup documentation.
- Run the `deploy.sh` script with a bash version greater than 4.0 (the script uses associative arrays, which are a relatively recent feature of bash).
  - Make sure to set the `COMMIT_SHA`, `SHORT_SHA`, `REPO_NAME`, `BRANCH_NAME`, and `TAG_NAME` variables prior to executing the script, with `COMMIT_SHA` and `SHORT_SHA` referring to the commit you wish to deploy. For example:
bash -c "COMMIT_SHA=f5c855f8e64fa878e62f63a6297a24f6dfc07033 \
SHORT_SHA=f5c855f \
REPO_NAME=lookit-api \
BRANCH_NAME=develop \
TAG_NAME=latest \
. deploy.sh"
Through the Google Cloud Build/GitHub integration, any commit to the lookit-api codebase on the master or develop branches will automatically trigger a build with the correct tag, commit SHA, and branch.
- Rico Rodriguez (Datamance)
See also the list of contributors who participated in this project.
This project is licensed under the MIT License - see the LICENSE.md file for details.