This Terraform module builds the base AWS infrastructure needed to support ECS Services, implementing the architecture described below.
By default the module builds an Auto Scaling Group in private subnets. VPC Endpoints can create links to the AWS services that the ECS Services need in order to run, as well as providing access to ECR repositories.
By default the `ec2_keypair` input is null, and no Security Group rules allow for SSH access. EC2 instances can be accessed using AWS Session Manager.
In order to provide a running ECS Service, a child module will need to build its own ECS Service and ECS Task Definition objects.
A child module will also need to create its own Load Balancer Listener and Load Balancer Target Group. The child module will need an `aws_autoscaling_attachment` resource to connect the target group to the Autoscaling Group created by this module. The attachment allows the Autoscaling Group to automatically register EC2 Instances with the target group.
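A minimal sketch of such an attachment is shown below; the names `module.ecs_base` (an instance of this module) and `aws_lb_target_group.service` (a target group defined by the child module) are illustrative assumptions:

```hcl
# Hypothetical names: "ecs_base" is an instance of this module,
# aws_lb_target_group.service is created by the child module
resource "aws_autoscaling_attachment" "service" {
  autoscaling_group_name = module.ecs_base.asg_name
  lb_target_group_arn    = aws_lb_target_group.service.arn
}
```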
To allow connectivity from the Internet, this module has been designed to operate behind AWS CloudFront. This has not been implemented in this module as several CloudFront distributions may exist for the same ECS base architecture. The output `alb_dns_name` from this module can be used in the `domain_name` and `origin_id` arguments of a custom origin block in an `aws_cloudfront_distribution` resource in Terraform: this will direct requests from the CloudFront distribution to the public Application Load Balancer created by this module. This module also outputs a value `waf_acl_arn` that may be passed into the `web_acl_id` argument of an `aws_cloudfront_distribution` resource to protect the CloudFront distribution.
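The sketch below illustrates this wiring, assuming this module is instantiated as `module "ecs_base"`; the managed cache policy, viewer certificate and other settings are illustrative defaults rather than recommendations:

```hcl
# AWS managed cache policy, looked up by name; chosen here only for illustration
data "aws_cloudfront_cache_policy" "caching_optimized" {
  name = "Managed-CachingOptimized"
}

resource "aws_cloudfront_distribution" "this" {
  enabled    = true
  web_acl_id = module.ecs_base.waf_acl_arn # WAF Web ACL created by this module

  origin {
    domain_name = module.ecs_base.alb_dns_name
    origin_id   = module.ecs_base.alb_dns_name

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = module.ecs_base.alb_dns_name
    viewer_protocol_policy = "redirect-to-https"
    cache_policy_id        = data.aws_cloudfront_cache_policy.caching_optimized.id
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true # a custom domain would use an ACM certificate instead
  }
}
```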
No requirements.
Name | Version |
---|---|
aws | 5.50.0 |
aws.us-east-1 | 5.50.0 |
No modules.
Name | Description | Type | Default | Required |
---|---|---|---|---|
acm_certificate_arn | ARN of an existing certificate in Amazon Certificate Manager | string | null | no |
acm_create_certificate | Whether to create a certificate in Amazon Certificate Manager | bool | true | no |
alb_access_logs_bucket | Name of the S3 Bucket for ALB access logs | string | "" | no |
alb_access_logs_enabled | Whether to enable access logging for the ALB | bool | false | no |
alb_access_logs_prefix | Prefix for objects in S3 bucket for ALB access logs | string | "" | no |
alb_enable_deletion_protection | Whether to enable deletion protection for the ALB | bool | true | no |
alb_idle_timeout | Idle timeout for load balancer | string | "60" | no |
alb_internal | Whether the ALB should be internal (not public facing) | bool | false | no |
alb_listener_fixed_response_content_type | Default content type for the fixed response of the default ALB Listener | string | "text/html" | no |
alb_listener_fixed_response_message_body | Default message body for the fixed response of the default ALB Listener | string | "<!DOCTYPE html><body><h1>Hello World!</h1></body>" | no |
alb_listener_fixed_response_status_code | Default status code for the fixed response of the default ALB Listener | string | "200" | no |
alb_listener_ssl_policy | TLS security policy used by the default ALB Listener | string | "ELBSecurityPolicy-TLS13-1-2-2021-06" | no |
ami_architecture | Name of the OS architecture. Note: must be compatible with the selected EC2 Instance Type | string | "x86_64" | no |
ami_name_prefix | Prefix used to find an AMI for use in the Launch Template | string | "amzn2-ami-ecs-hvm-2.0*" | no |
asg_default_cooldown | Number of seconds between scaling activities | number | 300 | no |
asg_desired_capacity | Desired number of instances in the Autoscaling Group | number | 1 | no |
asg_enabled_metrics | List of metrics enabled for the Autoscaling Group | list(string) | [ | no |
asg_health_check_grace_period | Grace period before health checks are enabled. ECS Services can take 10 minutes to stabilise | number | 600 | no |
asg_health_check_type | Type of health check for the Autoscaling Group. Can be EC2 or ELB | string | "EC2" | no |
asg_max_size | Maximum number of instances in the Autoscaling Group | number | 1 | no |
asg_metrics_granularity | Granularity of metrics collected by the Autoscaling Group | string | "1Minute" | no |
asg_min_size | Minimum number of instances in the Autoscaling Group | number | 1 | no |
asg_protect_from_scale_in | Whether newly launched instances are automatically protected from termination | bool | true | no |
asg_termination_policies | Termination Policies used by the Autoscaling Group | list(string) | [ | no |
cloudwatch_log_group | Name of the CloudWatch log group | string | n/a | yes |
ec2_additional_userdata | Additional userdata to append to the launch template configuration | string | "" | no |
ec2_ebs_volume_type | Volume type used in EBS volumes | string | "gp3" | no |
ec2_instance_type | EC2 Instance type used by EC2 Instances | string | "t3.small" | no |
ec2_keypair | Name of EC2 Keypair for SSH access to EC2 instances | string | null | no |
ecs_capacity_provider_managed_termination_protection | Enables or disables container-aware termination of instances in the ASG when scale-in happens | string | "ENABLED" | no |
ecs_capacity_provider_status | Enables or disables managed scaling on the ASG | string | "ENABLED" | no |
ecs_capacity_provider_target_capacity_percent | Percentage target capacity utilization for the Autoscaling Group instances | number | 100 | no |
name_prefix | Name prefix of the ECS Cluster and associated resources | string | n/a | yes |
route53_delegation_set_id | The ID of the reusable delegation set whose NS records should be assigned to the hosted zone | string | null | no |
route53_zone_domain_name | Name of the Domain Name used by the Route 53 Zone. Trailing dots are ignored | string | null | no |
route53_zone_force_destroy | Whether to destroy the Route 53 Zone although records may still exist | bool | false | no |
route53_zone_id_existing | ID of an existing Route 53 Hosted Zone as an alternative to creating a hosted zone | string | null | no |
s3_bucket_force_destroy | Whether to allow a non-empty bucket to be destroyed | bool | false | no |
s3_bucket_versioning_enabled | Whether to enable S3 bucket versioning | bool | true | no |
tags | Map of tags for adding to resources | map(string) | {} | no |
vpc_cidr_block | CIDR block for the VPC | string | "10.0.0.0/16" | no |
vpc_endpoint_dns_record_ip_type | The DNS records created for the endpoint | string | "ipv4" | no |
vpc_endpoint_services | List of services to create VPC Endpoints for | list(string) | [ | no |
vpc_endpoints_create | Whether to use VPC Endpoints to access AWS services inside the VPC. Note this can have a cost impact | bool | false | no |
vpc_peering_vpc_ids | List of VPC IDs for peering with the VPC | list(string) | [] | no |
vpc_public_subnet_public_ip | Whether to automatically assign public IP addresses in the public subnets | bool | false | no |
waf_ip_set_addresses | List of IPs for WAF IP Set Safelist | list(string) | [ | no |
waf_use_ip_restrictions | Whether to use IP range restrictions on the default WAF | bool | false | no |
Name | Description |
---|---|
alb_arn | ARN of the Application Load Balancer |
alb_dns_name | DNS Name of the Application Load Balancer |
alb_https_listener_arn | ARN of the default Application Load Balancer Listener on port 443 |
alb_security_group_id | ID of the Security Group for the Application Load Balancer |
asg_name | Name of the Auto Scaling Group |
asg_security_group_id | ID of the Security Group for the Auto Scaling Group |
cloudwatch_log_group_arn | ARN of the CloudWatch Log Group |
cloudwatch_log_group_name | Name of the CloudWatch Log Group |
ecs_capacity_provider_name | Name of the ECS Capacity Provider associated with the Autoscaling Group |
ecs_cluster_arn | ARN of the ECS Cluster |
ecs_cluster_name | Name of the ECS Cluster |
route53_public_hosted_zone | Zone ID of the Route 53 Public Hosted Zone |
s3_bucket | Name of the S3 Bucket |
s3_bucket_arn | ARN of the S3 Bucket |
vpc_egress_security_group_id | ID of the Security Group for general egress |
vpc_endpoint_security_group_id | ID of the Security Group for VPC Endpoints |
vpc_id | VPC ID |
vpc_private_subnet_ids | Private Subnet IDs |
vpc_public_subnet_ids | Public Subnet IDs |
waf_acl_arn | ARN of the WAF Web ACL |
The name of the Route 53 Hosted Zone needs to match the value of a registered domain name. There is an option to manage an AWS registered domain in Terraform, but we feel it is best to avoid unintended changes to the domain, so this has been intentionally omitted from the configuration.
This module is able to optionally build a Route 53 hosted zone, or to look up an existing hosted zone using the input `route53_zone_id_existing`.
In AWS a Registered Domain specifies Name Servers which are used to resolve DNS queries for addresses in the domain. These can be set to the name servers created by a Route 53 Hosted Zone. However, given the hosted zone is managed by Terraform and may be replaced or destroyed, particularly in a sandbox environment, this could lead to frequent changes to the registered domain name servers. These would need to be made manually.
To avoid this, it is possible to create a Delegation Set to act as a bridge between the registered domain and the Route 53 Hosted Zone. So that the delegation set itself does not also change frequently, it should be created outside Terraform. This can be done with the AWS CLI:
```sh
aws route53 create-reusable-delegation-set --caller-reference "$(date +"%s")"
```
This will output an Id attribute in the format "/delegationset/N10230772EN8U28YG7Z00". The second part of this ID (the unique reference) can be passed to the `route53_delegation_set_id` input for this module. If a Route 53 Hosted Zone is created by this module it will use the name servers specified in the delegation set. Once the registered domain has been updated to use the name servers listed in the output of the command above, the hosted zone can be replaced without needing to update the name server details again.
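For example, a minimal sketch of a module call using the delegation set (the module source path, domain name and other values are placeholders):

```hcl
module "ecs_base" {
  source = "path/to/this/module" # placeholder source

  name_prefix          = "example"
  cloudwatch_log_group = "/ecs/example"

  route53_zone_domain_name  = "example.com"
  route53_delegation_set_id = "N10230772EN8U28YG7Z00" # unique reference from the delegation set Id
}
```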
Note that it is not possible to have two Route 53 hosted zones using the same domain name and the same name servers. A delegation set can only be used with a set of unique domain names.
The Auto Scaling Group (ASG) can have a health check type of either `EC2` or `ELB`. In this module the input `asg_health_check_type` is used to control this setting. The EC2 health check type checks the health of the EC2 instances connected to the ASG. This is a simple check determining whether the instances are in a `running` state. The ELB health check uses health checks configured on the Elastic Load Balancer (ELB) attached to the ASG. It is not possible, however, to attach an Application Load Balancer (ALB) directly to the ASG. Instead, Target Groups are attached to the ASG, and this is done at the service level. The Target Group health checks are then used by the ELB health checks configured on the ASG.
Using the ELB health check type has some implications:
- Initially, if this module is built without any dependent service there will be no way of fulfilling the ASG health check, as there will be no target groups. This will lead to continuous instance cycling.
- Where multiple dependent services have been built on top of the same implementation of this module, the failure of one of these services will fail the ASG health check, impacting otherwise healthy services.
The default value for the `asg_health_check_type` input has therefore been set to `EC2`. For implementations with a small number of stable services the value `ELB` may be preferred, as this provides a truer reflection of service health.
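A minimal sketch of opting into ELB health checks when calling the module (the module source path and other values are placeholders):

```hcl
module "ecs_base" {
  source = "path/to/this/module" # placeholder source

  name_prefix          = "example"
  cloudwatch_log_group = "/ecs/example"

  # Use the Target Group health checks attached by dependent services
  asg_health_check_type = "ELB"
}
```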
AWS does not permit multiple listeners on the same Load Balancer using the same port. This could mean that only one target was available on the default HTTPS port 443. A solution to this is to generate a "default" load balancer listener assigned to port 443. The output `alb_https_listener_arn` allows dependent modules to build their own `aws_lb_listener_rule` resources referring to the ARN of this listener. This allows a single port to front several targets, using rules to determine the correct target.
An implementation of this could use the `host_header` condition to route requests using the value of the `Host` header sent by the client (note this header will be automatically inserted by most HTTP user agents such as curl). For example:
resource "aws_lb_listener_rule" "this" {
listener_arn = var.alb_listener_arn
priority = var.alb_listener_rule_priority
action {
type = "forward"
target_group_arn = aws_lb_target_group.this.arn
}
condition {
host_header {
values = [aws_route53_record.this.name]
}
}
}
The certificate `aws_acm_certificate.default` in this module has no corresponding Route 53 A record, so requests to the domain name of the certificate will fail. This is by design. It is not possible to create a Load Balancer listener using the HTTPS protocol without referring to a certificate, and this resource exists to satisfy that requirement. The domain name of the default certificate is not output by this module as it is not intended for re-use.
The Autoscaling Group used by ECS services is deployed to private subnets. By default a Security Group rule will be created allowing outbound access to the Internet via the NAT Gateway.
It is possible to deploy services with the input variable `vpc_endpoints_create` set to `true`. This will allow services to access AWS API endpoints using local interfaces inside the VPC. The Route 53 Resolver will resolve AWS API addresses to local IPv4 addresses. This approach should improve the security of deployments by limiting the external resources that container services have access to. The list of VPC endpoints can be changed by overriding the input `vpc_endpoint_services`.

There is a cost impact of using VPC endpoints as there is a standing charge for each endpoint, whether it is used or not. For this reason the input `vpc_endpoints_create` defaults to `false`. When the default value is unchanged, outbound access will be allowed via the NAT Gateway.
In versions 1.x.x of this module VPC endpoints were built by default; this changes from version 2.0.0 onwards.
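To opt in, set the flag when calling the module, as in the sketch below (the module source path and other values are placeholders); the default list of services can also be overridden via `vpc_endpoint_services` if required:

```hcl
module "ecs_base" {
  source = "path/to/this/module" # placeholder source

  name_prefix          = "example"
  cloudwatch_log_group = "/ecs/example"

  # Build VPC Endpoints inside the VPC rather than relying solely on the NAT Gateway
  vpc_endpoints_create = true
}
```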
When pushing a commit to GitHub, or raising a Pull Request, a GitHub workflow will automatically run `commitlint`. This makes use of the Node.js module https://commitlint.js.org. The workflow has been configured to use the Conventional Commits specification https://www.conventionalcommits.org/en/v1.0.0/.
When commits are formatted using a canonical format such as Conventional Commits, they can be used in the release process to determine the version number. The commit history can also be used to generate a `CHANGELOG.md` file.
For local development it is recommended to use a `commit-msg` Git hook. The following code should be placed in a file `.git/hooks/commit-msg` and made executable:
```sh
#!/bin/sh
# Lint the commit message (the hook receives the path to the message file as $1)
if command -v commitlint > /dev/null 2>&1
then
  commitlint --edit "$1"
fi
```
This is dependent on the `commitlint` tool, which can be installed using `npm install -g @commitlint/{cli,config-conventional}` (for a global installation). When working correctly the hook should fire whenever a commit is attempted, e.g.
```text
git commit -m "silly message"
⧗ input: silly message
✖ subject may not be empty [subject-empty]
✖ type may not be empty [type-empty]
✖ found 2 problems, 0 warnings
ⓘ Get help: https://github.com/conventional-changelog/commitlint/#what-is-commitlint
```
Configuration for the commitlint tool is located in the `.commitlintrc.mjs` file in the root of the project. This is also used by the `commit-msg` hook.
On a push to the `main` branch, i.e. after a Pull Request has been approved and merged, a GitHub workflow will run Semantic Release. This will initiate a chain of actions that will automatically handle versioning, GitHub releases and Changelog generation.
The commit analyzer bundled with Semantic Release follows the same `conventionalcommits` schema as is used in the commit linting.
The semantic release tooling is configured in a file `.releaserc.json` in the root of the project.
When pushing a commit to GitHub, or raising a Pull Request, a GitHub workflow will automatically run `terraform fmt -check -recursive` in the root of the project. If this produces a non-zero exit code, the job will fail.
Terraform fmt is an "intentionally opinionated" command to rewrite configuration files to a recommended format. Any errors detected by the check can easily be remedied by running the command `terraform fmt -recursive`, which will automatically change all Terraform code in the project.
For local development it is recommended to use a `pre-commit` hook to detect formatting issues before they are committed. Place the text below in a file `.git/hooks/pre-commit` and make this executable:
```sh
#!/bin/sh
# Short-circuit if terraform not found
if ! command -v terraform > /dev/null 2>&1
then
  echo "Terraform executable was not found in $PATH"
  exit 1
fi

# Capture the output and exit code of the formatting check
FORMAT_CHECK=$(terraform fmt -check -recursive 2>&1)
FORMAT_RC=$?

if echo "$FORMAT_CHECK" | grep -q "Error"
then
  # Print the error output from terraform fmt line by line
  printf '%s\n' "$FORMAT_CHECK" | while IFS= read -r f; do
    echo "$f"
  done
  exit "$FORMAT_RC"
elif [ "$FORMAT_RC" -gt 0 ]
then
  printf "\033[1;31mThe following files need to be formatted:\033[m\n"
  for f in $FORMAT_CHECK; do
    echo "$f"
  done
  printf "Run \033[1;32mterraform fmt -recursive\033[m to fix\n"
  exit "$FORMAT_RC"
fi
```
When working correctly, the git hook will produce output when staged files are committed, e.g.:
```text
$ git commit
The following files need to be formatted:
main.tf
modules/grault/variables.tf
Run terraform fmt -recursive to fix
```