# Example of creating and deploying an API with Docker and Terraform on AWS
- Install Terraform
- Install the AWS CLI
- Log into your `dev` account (with `aws sso login`)
- Ensure your account has a Terraform State S3 Backend deployed
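Before creating anything, it can be worth confirming that your CLI session really points at the `dev` account. A small sketch, assuming an SSO profile named `dev` (the profile name is a placeholder):

```
aws sso login --profile dev
aws sts get-caller-identity --profile dev   # the Account field should be your dev account number
```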
- Create a new repo using this template. You need your own repo so that you can push changes and have GitHub Actions deploy them. Keep your repo name relatively short. Since we're creating AWS resources based off the name, we've seen issues with repo names longer than about 24 characters.
- Clone your new repo
```
git clone https://github.com/byu-oit/my-new-repo
```
- Check out the `dev` branch
```
cd my-new-repo
git checkout -b dev
```
- Find and replace across the repo (a scripted sketch follows this list):
  - replace `977306314792` with your `dev` AWS account number
  - replace `539738229445` with your `prd` AWS account number
  - replace `hw-fargate-api` with the name of your repo
  - replace `byu-oit-terraform-dev` with the name of your `dev` AWS account
  - replace `byu-oit-terraform-prd` with the name of your `prd` AWS account
  - replace `Codepipeline-Standard-Change` with your Standard Change Template ID
    - If you need to create a new template, ask in the ServiceNow channel for help getting it into the sandbox ServiceNow environment
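If you'd rather script these replacements than click through an editor, a minimal sketch using `git grep` and `sed`; the values on the right-hand side (`111111111111`, `222222222222`, `my-new-repo`) are placeholders for your own account numbers and repo name:

```
# On macOS, use `sed -i ''` in place of `sed -i`
git grep -l '977306314792' | xargs sed -i 's/977306314792/111111111111/g'
git grep -l '539738229445' | xargs sed -i 's/539738229445/222222222222/g'
git grep -l 'hw-fargate-api' | xargs sed -i 's/hw-fargate-api/my-new-repo/g'
```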
- Rename `.postman/hw-fargate-api.postman_collection.json`, replacing `hw-fargate-api` in the filename with the name of your repo
- Add yourself (or your team) as a Dependabot reviewer in `dependabot.yml`
- Enable Dependabot Security updates if you're outside the `byu-oit` GitHub organization
- Commit your changes
```
git commit -am "Update template with repo specific details"
```
- Deploy the `setup` resources
```
cd terraform-iac/dev/setup/
terraform init
terraform apply
```
In the AWS Console, see if you can find the resources from `setup.tf` (ECR, SSM Param, IAM Role).
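If you'd rather check from a terminal, a hedged sketch with the AWS CLI; the final `grep` assumes the resource names contain your repo name, which is how the template tends to name things but is worth verifying:

```
aws ecr describe-repositories --query 'repositories[].repositoryName'
aws ssm describe-parameters --query 'Parameters[].Name'
aws iam list-roles --query 'Roles[].RoleName' --output text | tr '\t' '\n' | grep -i my-new-repo
```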
- Create an Incoming Webhook in Teams, following these instructions, and copy the URL
- Create a GitHub secret `MS_TEAMS_WEBHOOK_URL` using the copied URL (e.g. at https://github.com/byu-oit/hw-fargate-api/settings/secrets/actions/new)
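If you have the GitHub CLI installed, you can set the secret from the terminal instead of the web UI; the URL below is a placeholder for your copied webhook URL:

```
# Run from inside your repo clone
gh secret set MS_TEAMS_WEBHOOK_URL --body "https://example.webhook.office.com/webhookb2/..."
```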
- Push your changes
```
git push -u origin dev
```
If you look at `.github/workflows/deploy.yml`, you'll see that it is set up to run on pushes to the `dev` branch. Because you pushed to the `dev` branch, this workflow should be running now.
- In GitHub, click on the workflow run (it has the same name as the last commit message you pushed)
- Click on the `Build and Deploy` job
- Expand any of the steps to see what they are doing
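The GitHub CLI can follow the same run from a terminal, if you prefer; a small sketch:

```
gh run list --branch dev   # find the run your push triggered
gh run watch               # pick the run and stream its progress
```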
Anytime after the `Terraform Apply` step succeeds:
```
cd ../app/
terraform init
terraform output
```
This will output a DNS Name. Enter this in a browser. It will probably return `503 Service Unavailable`. It takes some time for the ECS Tasks to spin up and for the ALB to recognize that they are healthy.
In the AWS Console, see if you can find the ECS Service and see the state of its ECS Tasks. Also see if you can find the ALB Target Group, and notice when Tasks are added to it.
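The same checks are possible from the CLI. A sketch, assuming the cluster and service share your repo's name (pull the real names from the console or `terraform output` if they differ):

```
aws ecs describe-services --cluster my-new-repo --services my-new-repo \
  --query 'services[].{desired:desiredCount,running:runningCount}'
aws elbv2 describe-target-groups --query 'TargetGroups[].[TargetGroupName,TargetGroupArn]'
aws elbv2 describe-target-health --target-group-arn <target-group-arn-from-above>
```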
**Note**

While Terraform creates the ECS Service, it doesn't actually spin up any ECS Tasks. This isn't Terraform's job. The ECS Service is responsible for ensuring that ECS Tasks are running.

Because of this, if the ECS Tasks fail to launch (due to bugs in the code causing the docker container to crash, for example), Terraform won't know anything about that. From Terraform's perspective, the deployment was successful.

These types of issues can often be tracked down by finding the Stopped ECS Tasks in the ECS Console and looking at their logs or their container status.
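A sketch of finding those stopped Tasks from the CLI; the cluster name is again a placeholder:

```
aws ecs list-tasks --cluster my-new-repo --desired-status STOPPED
aws ecs describe-tasks --cluster my-new-repo --tasks <task-arn-from-above> \
  --query 'tasks[].[stoppedReason,containers[].reason]'
```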
Once the Tasks are running, you should be able to hit the app's URL and get a JSON response. Between `index.js` and `main.tf`, can you find what pieces are necessary to make this data available to the app?
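For example, assuming the Terraform output holding the DNS name is called `dns_name` (run `terraform output` to see the real name), you can hit the endpoint from a terminal:

```
# Run from terraform-iac/dev/app/; "dns_name" is an assumed output name
curl -s "https://$(terraform output -raw dns_name)"
```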
In the AWS Console, see if you can find the other resources from `main.tf`.
Make a small change to `index.js` (try adding a `console.log`, a simple key/value pair to the JSON response, or a new path). Commit and push this change to the `dev` branch.
```
git commit -am "try deploying a change"
git push
```
In GitHub Actions, watch the deploy steps run (you have a new push, so you'll have to go back and select the new workflow run instance and the job again). Once it gets to the CodeDeploy step, you can watch the deploy happen in the CodeDeploy console in AWS. Once CodeDeploy says that production traffic has been switched over, hit your application in the browser and see if your change worked. If the service is broken, look at the stopped ECS Tasks in the ECS Console to see if you can figure out why.
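You can also poll the deployment from the CLI while it runs; a sketch (the deployment ID comes from the first command's output):

```
aws deploy list-deployments --query 'deployments[:3]'
aws deploy get-deployment --deployment-id <deployment-id-from-above> \
  --query 'deploymentInfo.status'
```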
**Note**

It's always best to test your changes locally before pushing to GitHub and AWS. Testing locally will significantly increase your productivity, as you won't be constantly waiting for GitHub Actions and CodeDeploy to deploy, just to discover bugs.

You can either test locally inside Docker, or with Node directly on your computer. Whichever method you choose, you'll have to set up the environment variables that ECS makes available to your code when it runs in AWS. You can find these environment variables in `index.js` and `main.tf`.
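A minimal sketch of the Docker route, inventing an env var name and port for illustration; the real names and values are the ones you find in `index.js` and `main.tf`:

```
docker build -t my-new-repo .
# SOME_ENV_VAR and port 8080 are placeholders -- copy the real ones from main.tf
docker run --rm -p 8080:8080 -e SOME_ENV_VAR=some-value my-new-repo
curl localhost:8080
```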
By digging through the `.tf` files, you'll see what resources are being created. You should spend some time searching through the AWS Console for each of these resources. The goal is to start making connections between the Terraform syntax and the actual AWS resources that are created.

Several OIT-created Terraform modules are used. You can look these modules up in our GitHub Organization; there you can see what resources each of these modules creates. You can look those up in the AWS Console too.
By default, we build and deploy on ARM-based processors to save ~20% on our compute costs.
There are a lot of moving parts in the CI/CD pipeline for this project. This diagram shows the interaction between various services during a deployment.