This directory holds the terraform modules for maintaining your complete persistent infrastructure.
Prerequisite: install the jq JSON processor: `brew install jq`
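jq is used by the helper scripts in this directory to pull individual values out of JSON output. As a rough illustration of the kind of call involved (assuming the bootstrap module exposes its credentials through the `bucket_credentials` output described under [Use bootstrap credentials](#use-bootstrap-credentials); the field names are taken from that section):

```sh
# Illustrative only: read fields from the bucket_credentials output.
# The exact output and field names come from the bootstrap module.
terraform output -json bucket_credentials | jq -r '.bucket'
terraform output -json bucket_credentials | jq -r '.access_key_id'
```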
Setup steps:

- Verify any hardcoded references to cloud.gov spaces in this directory are correct
- Manually deploy the Git gateway app into your cloud.gov space
- Manually run the bootstrap module following the instructions under [Terraform State Credentials](#terraform-state-credentials)
- Set up the CI/CD pipeline to run Terraform:
  - Copy the bootstrap credentials to your CI/CD secrets using the instructions in the base README
  - Create a cloud.gov SpaceDeployer by following the instructions under [SpaceDeployers](#spacedeployers)
  - Copy the SpaceDeployer credentials to your CI/CD secrets using the instructions in the base README
- Manually run Terraform:
  - Follow the instructions under [Set up a new environment](#set-up-a-new-environment) to create your infrastructure
## Terraform State Credentials

The bootstrap module is used to create an s3 bucket for later Terraform runs to store their state in.

From the `bootstrap` directory:
- Run `terraform init`
- Run `./run.sh plan` to verify that the changes are what you expect
- Run `./run.sh apply` to set up the bucket and retrieve credentials
- Follow the instructions under [Use bootstrap credentials](#use-bootstrap-credentials)
- Ensure that `import.sh` includes a line with the correct IDs for any resources created
- Run `./teardown_creds.sh` to remove the space deployer account used to create the s3 bucket
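Put together, the happy path is a short sequence like the sketch below (review the plan output before applying):

```sh
cd bootstrap
terraform init
./run.sh plan        # review the proposed changes
./run.sh apply       # create the state bucket and retrieve its credentials
# ...copy the credentials as described under "Use bootstrap credentials"...
./teardown_creds.sh  # remove the temporary space deployer account
```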
If you need to make changes to the bootstrap module (this should not be necessary in most cases):
- Run `terraform init`
- If you don't have terraform state locally:
  - run `./import.sh`
  - optionally run `./run.sh apply` to include the existing outputs in the state file
- Make your changes
- Continue from step 2 of the bootstrapping instructions
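As a sketch, with no local state that flow looks roughly like:

```sh
cd bootstrap
terraform init
./import.sh      # only needed when there is no local state
./run.sh apply   # optional: pull the existing outputs into the state file
# ...make your changes, then continue from step 2 above...
./run.sh plan
```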
To retrieve existing bucket credentials:

- Run `./run.sh show`
- Follow the instructions under [Use bootstrap credentials](#use-bootstrap-credentials)
### Use bootstrap credentials

- Add the following to `~/.aws/credentials`:

  ```
  [nih_oite_experiments-terraform-backend]
  aws_access_key_id = <access_key_id from bucket_credentials>
  aws_secret_access_key = <secret_access_key from bucket_credentials>
  ```

- Copy `bucket` from the `bucket_credentials` output to the backend block of `staging/providers.tf` and `production/providers.tf`
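The backend block you end up editing looks something like the sketch below; the bucket value comes from `bucket_credentials`, while the key, region, and profile shown here are illustrative assumptions rather than the file's actual contents.

```hcl
# Sketch only: substitute the real bucket name from bucket_credentials.
# key, region, and profile are assumptions for illustration.
terraform {
  backend "s3" {
    bucket  = "<bucket from bucket_credentials>"
    key     = "terraform.tfstate.staging"
    region  = "us-gov-west-1"
    profile = "nih_oite_experiments-terraform-backend"
  }
}
```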
## SpaceDeployers

A SpaceDeployer account is required to run Terraform or deploy the application from the CI/CD pipeline. Create a new account by running:

```sh
./create_space_deployer.sh <SPACE_NAME> <ACCOUNT_NAME>
```
## Set up a new environment

The steps below rely on you first configuring access to the Terraform state in s3, as described in [Terraform State Credentials](#terraform-state-credentials).
- `cd` to the environment you are working in

- Set up a SpaceDeployer:

  ```sh
  # create a space deployer service instance that can log in with just a username and password
  # the value of <SPACE_NAME> should be `staging` or `prod` depending on where you are working
  # the value for <ACCOUNT_NAME> can be anything, although we recommend
  # something that communicates the purpose of the deployer
  # for example: circleci-deployer for the credentials CircleCI uses to
  # deploy the application, or <your_name>-terraform for credentials to run terraform manually
  ../create_space_deployer.sh <SPACE_NAME> <ACCOUNT_NAME> > secrets.auto.tfvars
  ```
  The script will output the username (as `cf_user`) and password (as `cf_password`) for your `<ACCOUNT_NAME>`. Read more in the cloud.gov service account documentation. The easiest way to use this script is to redirect the output directly to the `secrets.auto.tfvars` file it needs to be used in.
- Run Terraform from your new environment directory with:

  ```sh
  terraform init
  terraform plan
  ```

- Apply changes with `terraform apply`.

- Remove the space deployer service instance if it doesn't need to be used again, such as when manually running Terraform once:

  ```sh
  # <SPACE_NAME> and <ACCOUNT_NAME> have the same values as used above
  ../destroy_space_deployer.sh <SPACE_NAME> <ACCOUNT_NAME>
  ```
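End to end, a one-off manual run against staging might look like the following sketch (the account name is a placeholder):

```sh
cd staging
../create_space_deployer.sh staging alice-terraform > secrets.auto.tfvars  # placeholder account name
terraform init
terraform plan
terraform apply
../destroy_space_deployer.sh staging alice-terraform
```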
## Structure

Each environment has its own module, which relies on a shared module for everything except the providers code and environment-specific variables and settings:
```
- bootstrap/
  |- main.tf
  |- providers.tf
  |- variables.tf
  |- run.sh
  |- teardown_creds.sh
  |- import.sh
- <env>/
  |- main.tf
  |- providers.tf
  |- secrets.auto.tfvars
  |- .force-action-apply
  |- variables.tf
- shared/
  |- s3/
     |- main.tf
     |- providers.tf
     |- variables.tf
  |- database/
     |- main.tf
     |- providers.tf
     |- variables.tf
  |- domain/
     |- main.tf
     |- providers.tf
     |- variables.tf
```
In the shared modules:
- `providers.tf` contains setup instructions for Terraform about Cloud Foundry and AWS
- `main.tf` sets up the data and resources the application relies on
- `variables.tf` lists the required variables and applicable default values
In the environment-specific modules:
- `providers.tf` lists the required providers
- `main.tf` calls the shared Terraform code, but this is also a place where you can add any other services, resources, etc., which you would like to set up for that environment
- `variables.tf` lists the variables that will be needed, either to pass through to the child module or for use in this module
- `secrets.auto.tfvars` is a file which contains the information about the service key and other secrets that should not be shared
- `.force-action-apply` is a file that can be updated to force GitHub Actions to run `terraform apply` during the deploy phase
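As a hedged sketch of that pattern, an environment's `main.tf` wires up a shared module along these lines (the variable names here are placeholders, not the project's actual inputs):

```hcl
# Hypothetical wiring of an environment to the shared database module;
# variable names are placeholders for illustration only.
module "database" {
  source        = "../shared/database"
  cf_org_name   = var.cf_org_name
  cf_space_name = var.cf_space_name
  rds_plan_name = var.rds_plan_name
}
```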
In the bootstrap module:
- `providers.tf` lists the required providers
- `main.tf` sets up the s3 bucket to be shared across all environments. It lives in `prod` to communicate that it should not be deleted
- `variables.tf` lists the variables that will be needed. Most values are hard-coded in this module
- `run.sh`: helper script to set up a space deployer and run terraform. The terraform action (`show`/`plan`/`apply`/`destroy`) is passed as an argument
- `teardown_creds.sh`: helper script to remove the space deployer set up as part of `run.sh`
- `import.sh`: helper script to create a new local state file in case terraform changes are needed
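For example, assuming each action is passed straight through to the corresponding terraform command (per the description above):

```sh
# Run from the bootstrap directory; run.sh sets up temporary deployer
# credentials and forwards the action to terraform.
./run.sh show     # display current state and outputs
./run.sh plan     # preview changes
./run.sh apply    # apply changes
./run.sh destroy  # tear down the bootstrapped resources
```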