It's the usage of `terraform workspace` commands.
- One backend for all environments, so no isolation between environments.
- One version for all environments, so no immutable infrastructure.
- You cannot know the number of workspaces (or environments) just by reading the code.
- It brings confusion about which workspace the `terraform apply` command runs in, which can lead to mistaking prod and non-prod environments.
- You're adding options to your command line.
A solution can be to have some code duplication for each environment.
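For instance, a repository with one folder per environment (hypothetical layout) makes every environment explicit and isolated, at the cost of some duplication:

```
.
├── dev
│   ├── backend.tf
│   └── main.tf
└── production
    ├── backend.tf
    └── main.tf
```

Each folder has its own backend and state, so a `terraform apply` run in `dev` can never touch `production`.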
It consists of wrapping functions in functions in order to parse inputs.
Example:

```hcl
locals {
  read_replica_private_ip = var.enable_ha ? lookup(zipmap([for addr in module.postgres.replicas_instance_first_ip_addresses.0 : values(addr).2],
    [for addr in module.postgres.replicas_instance_first_ip_addresses.0 : values(addr).0]), "PRIVATE", null) : ""
  read_replica_public_ip = var.enable_ha ? lookup(zipmap([for addr in module.postgres.replicas_instance_first_ip_addresses.0 : values(addr).2],
    [for addr in module.postgres.replicas_instance_first_ip_addresses.0 : values(addr).0]), "PRIMARY", null) : ""
}
```
It is very tempting to write one-liners that perfectly parse maps or lists into the structures you want. But wrapping functions in functions brings complexity that will be painful to understand while maintaining the code. It quickly becomes technical debt.
A solution can be to save intermediate values of the parsed input, so that we split the complexity. Another one would be to rely on outputs from subsequent modules or resources.
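As a sketch (reusing the hypothetical module outputs from the example above), the same lookup can be split into named intermediate locals, so each step is readable on its own:

```hcl
locals {
  # Intermediate value: the first IP address entries of the replicas
  replica_addresses = module.postgres.replicas_instance_first_ip_addresses.0

  # Intermediate value: map of address type ("PRIVATE", "PRIMARY", ...) to IP
  replica_ips_by_type = zipmap(
    [for addr in local.replica_addresses : values(addr).2],
    [for addr in local.replica_addresses : values(addr).0],
  )

  read_replica_private_ip = var.enable_ha ? lookup(local.replica_ips_by_type, "PRIVATE", null) : ""
  read_replica_public_ip  = var.enable_ha ? lookup(local.replica_ips_by_type, "PRIMARY", null) : ""
}
```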
Shell scripts or other files stored within layers or modules (i.e. folders with `.tf` files) are just trash.
Please store them in another dedicated folder or, even better, another repo.
- You may rely on scripts to apply and forget how `init` and `apply` are done.
- They'll be duplicated when you duplicate the folder with copy-paste, creating more junk.
This is about drawing the line between providers and provisioners.
Providers, through Terraform, are used to (but not limited to):
- create infrastructure parts
- define network rules
- handle IAM
Provisioners like Ansible are used to (but not limited to):
- deploy applications
- change configurations
So, in Terraform, don't use the following providers: Kubernetes, Helm.
- You lose idempotence.
We prefer to use locals, which do not require variable declarations to be used.
So instead of this:

```hcl
# dev.tfvars
env     = "dev"
project = "my_project"
```

```hcl
# main.tf
variable "env" {}
variable "project" {}

resource "null_resource" "this" {
  triggers = {
    project = var.project
    env     = var.env
  }
}
```
We have this:

```hcl
# main.tf
locals {
  env     = "dev"
  project = "my_project"
}

resource "null_resource" "this" {
  triggers = {
    project = local.project
    env     = local.env
  }
}
```
These are projects that look like this (all Terraform files at the root of the repository).
```
.
├── README.md
├── backend.tf
├── network.tf
├── iam.tf
├── app1_rds.tf
├── app1_S3.tf
└── provider.tf
```
- Collaboration is deeply impacted since we have a single state file.
- `terraform plan` will take longer and longer each time we add a resource.
- The blast radius is large since all resources are close to each other.
These are projects that look like this (files are stored in a directory tree with a depth of 1).
```
.
├── README.md
└── cloudrun
    ├── README.md
    ├── backend.tf
    ├── main.tf
    ├── provider.tf
    ├── tfvars
    │   ├── integration.tfvars
    │   ├── production.tfvars
    │   ├── staging.tfvars
    │   └── testing.tfvars
    └── variables.tf
```
This type of repo brings flexibility alongside collaboration improvements.
- Too many layers can cause a higher maintenance cost.
- We cannot easily use the outputs of one layer in another.
- Some non-selective CI pipelines may run `terraform plan` over all layers.
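To work around the second drawback, a common sketch is to read another layer's outputs through a `terraform_remote_state` data source. This assumes a GCS backend, a bucket name, and an output called `network_id` in a hypothetical `network` layer:

```hcl
# Read the state of the "network" layer (bucket name and output name are assumptions).
data "terraform_remote_state" "network" {
  backend = "gcs"

  config = {
    bucket = "my-terraform-states"
    prefix = "network"
  }
}

locals {
  # Reuse an output exposed by the network layer.
  network_id = data.terraform_remote_state.network.outputs.network_id
}
```

This couples the two layers loosely: the consuming layer only depends on the producing layer's published outputs, not on its resources.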
When you create a Terraform module, you may be tempted to add a `provider` block in your code. Using this pattern, you will have a structure looking like this.
```
.
├── environment
│   ├── core.tf
│   └── _settings.tf
└── modules
    └── core
        ├── resource_group.tf
        └── _settings.tf
```
`modules/core`

```hcl
# _settings.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.89"
    }
  }
}

provider "azurerm" {
  features {}
}
```

```hcl
# resource_group.tf
resource "azurerm_resource_group" "example" {
  name     = "example"
  location = "West Europe"
}
```
`environment`

```hcl
# _settings.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "= 2.89"
    }
  }
}

provider "azurerm" {
  features {}
}
```

```hcl
# core.tf
module "core" {
  source = "../modules/core"
}
```
Your module will be easier to test as a standalone and you won't need any other configuration.
Terraform manages its code, structure, and state very specifically. Each resource will attempt to match its nearest `provider` block to instantiate itself, and the resource will keep a reference to that provider.
If you use a module containing such a block, when you want to move or destroy it, Terraform will run into a conflict: it will no longer find this reference and won't be able to reconstruct its state.
Prefer to declare your `provider` blocks in your layers.
More information in the providers within modules documentation.
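A minimal sketch of the preferred layout (reusing the structure from the example above): the module only declares `required_providers`, and the layer owns the `provider` block:

```hcl
# modules/core/_settings.tf: the module only states which provider it needs,
# without configuring it.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.89"
    }
  }
}
```

```hcl
# environment/_settings.tf: the layer owns the provider configuration.
provider "azurerm" {
  features {}
}

module "core" {
  source = "../modules/core"
}
```

Resources in the module inherit the layer's default `azurerm` provider, so moving or destroying the module no longer orphans a provider reference.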
The type `any` is a special type that can be used in variables. The value of a variable with type `any` can be of any type supported by Terraform.
```hcl
variable "my_variable" {
  type = any
}
```

or

```hcl
variable "my_other_variable" {
  type = object({
    special_property = any
  })
}
```
Using type `any` in variables saves time during development because you don't have to be specific about the type of value you want to use.
Using type `any` outside of development is a bad practice for two reasons:
- The users of your module will not know what type of value they should use. They will have to guess it by looking at your code.
- Your code will be subject to errors at runtime if the consumers of your `any`-typed variable use it in a way you didn't expect.
Using type `any` during development is fine, but you should always replace it with the correct expected type before using your module in production.
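For instance (hypothetical variable, assuming the property is meant to hold a string), replacing `any` with an explicit type documents the expected shape and lets Terraform reject wrong values at plan time instead of at runtime:

```hcl
variable "my_other_variable" {
  type = object({
    # Explicit type instead of `special_property = any`
    special_property = string
  })
}
```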