Automated deployment and management of Docker containers in a homelab environment using Ansible. This repository contains scripts, playbooks, and templates for maintaining a consistent and reproducible infrastructure.
```bash
# Clone the repository
git clone https://github.com/username/ansible-homelab.git
cd ansible-homelab

# Install dependencies
ansible-galaxy install -r requirements.yml

# Deploy a service (example: postgres)
make deploy-postgres
```
- Deployment Options
  - Makefile Usage
  - Direct Playbook Invocation
  - Vault Password Management
- Utility Scripts
  - Makefile Utilities
  - Container Creation Script
  - Container Removal Script
- System Overview
  - Directory Structure Overview
  - File Organization
- Service Configuration
  - Directory Configuration
  - Template Configuration
- Deployment Control
  - Lifecycle Hooks
- Either use the `Makefile`; the convention is `make deploy-{name}`.
- Or invoke the playbook directly:

  ```bash
  ansible-playbook playbooks/containers/gitea.yml
  ```
For some playbooks, you may need to decrypt the vault to access secrets or ensure the remote user is root. To avoid being prompted for your passwords, there are defaults configured to load secrets inside `ansible.cfg`:

```ini
vault_password_file = credentials/.vault_pass
become_password_file = credentials/.become_pass
```

These will load automatically when Ansible asks for your passwords (don't expose these files). Without them, you can deploy manually like:

```bash
ansible-playbook -i inventory/production playbooks/deploy-postgres.yml --ask-vault-pass -K
```
- Utility scripts I've written to manage deployments (e.g. `make deploy-postgres`).
- The container creation script handles creating all the files & directories for templating out the deployment of a new Docker image.
- The container removal script handles easy reversal of the above command.
To avoid affecting your actual nodes, I've enabled local development using Multipass Linux VMs (see the Makefile for more info).

```bash
make multipass-create
```

- Once created, grab the IPs with `multipass list` and update the hosts file for `dev`.

The following are not needed for development:

```ini
# vault_password_file = credentials/.vault_pass
# become_password_file = credentials/.become_pass
```
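The `dev` hosts file can then point at those VM addresses. A minimal YAML inventory sketch — the file path, group layout, and IP are assumptions, not the repo's actual contents:

```yaml
# inventory/dev.yml (illustrative; substitute the IPs from `multipass list`)
dev:
  hosts:
    multipass-node-1:
      ansible_host: 192.168.64.2   # replace with your VM's IP
      ansible_user: ubuntu         # Multipass's default user
```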
This template provides a standardized way to deploy Docker-based services with health checking, directory management, and configuration templating. It includes pre-deploy, post-deploy, and post-healthy hooks for customization.
For a given deployment like Grafana, we can simplify the files needed by running `./scripts/create_container.sh`. From here, a series of files will get generated:
- **Playbooks** define variables and roles to run a set of tasks
- **Roles** define reusable sets of tasks
- **Templates** define Jinja-templated text that is interpolated through vars
- **Vars** contain service-specific variables & config to inject into templates. Examples could include things like docker-compose.yml templates. Jinja will process expressions and vars before copying the output file.
- **Files** contain static assets that can be easily copied into different nodes or containers. These files are unchanged and unprocessed when copied.
- **Tools** are just playbooks for software installed on the node, such as Docker.
The group_vars dir allows us to define global variables to share across all playbooks/roles/variables.

Example: the env variable on the host will be assigned to `home_dir`, falling back to the default if undefined.

```yaml
home_dir: "{{ lookup('env', 'REMOTE_HOME_DIR', default='/home/schachte') }}"
```
The playbook can be thought of as the executable for a particular deployment. It's intentionally minimal and delegates to other files to promote better reusability within the codebase.

For Docker, the important thing is `role: docker`. This will automatically locate the `roles/docker` dir and interpolate any variables into the `deploy.yml`. You can override all sorts of variables within this block and define lifecycle hooks as well.
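A sketch of what such a playbook might contain — the host group, service name, and vars path are assumptions for illustration, not the repo's actual contents:

```yaml
# playbooks/containers/grafana.yml (illustrative)
- hosts: dev
  become: true
  # Service-specific vars to inject into the generic docker role
  vars_files:
    - ../../vars/grafana/main.yml
  roles:
    - role: docker
```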
The roles directory is for defining reusable tasks that can be shared between playbooks. For example, we have a Docker role for handling Docker deployments very generically.
A typical Ansible role may look like:
```
rolename/
├── defaults/        # Default variables (lowest precedence)
│   └── main.yml
├── files/           # Static files to be transferred
│   └── file.conf
├── handlers/        # Handler definitions
│   └── main.yml
├── meta/            # Role metadata and dependencies
│   └── main.yml
├── tasks/           # Core logic/tasks
│   └── main.yml
├── templates/       # Jinja2 templates
│   └── template.j2
├── tests/           # Role tests
│   ├── inventory
│   └── test.yml
└── vars/            # Role variables (high precedence)
    └── main.yml
```
The templates directory holds really anything you'd like to parameterize for the role or playbook. These are written using the Jinja templating language.

We can add variables into regular files and Ansible will interpolate our variables into the file. This is useful for things like specifying a version for a Docker image using a variable, or setting different paths for your volumes.
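For instance, a compose template might pin the image version and mount paths via vars — the filename and `postgres_version` variable here are illustrative, not the repo's actual contents:

```yaml
# templates/docker-compose.yml.j2 (illustrative)
services:
  postgres:
    # Version comes from vars, with a fallback if undefined
    image: "postgres:{{ postgres_version | default('16') }}"
    volumes:
      - "{{ service_data_dir }}/data:/var/lib/postgresql/data"
```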
The vars dir contains all the values and defaults we define for a particular role. These can be defined in multiple places, and the precedence evaluation (highest first) looks like:

1. Extra vars (`ansible-playbook -e "var=value"`)
2. `set_fact` / registered vars
3. Task vars (only for the task)
4. Block vars (only for tasks in the block)
5. Role and include vars
Check out `vars/postgres/main.yml` to see how we define the docker config block and various templates.
⚙️ Defined in `./vars`:

- `service_config`: (dictionary) Core service configuration.
  - Will be output to the CLI if `debug` is true (good for debugging).
  - Contains general variable definitions that can be used within Docker templates.
  - See `./vars/postgres/main.yml` for an example.
- `service_data_dir`: (string) Base directory for the service data and configuration (i.e. a shared mount for the app you're deploying).
- `service_directories`: (list, optional) List of directories to create on the remote host machine(s) you're deploying to.
- `config_templates`: (list, optional) List of Jinja templates that will be interpolated before being created on the remote host(s).

  ```yaml
  config_templates:
    - src: "template.conf.j2"
      dest: "/path/to/output.conf"
      mode: "0644"  # optional, defaults to "0644" (TODO: mode not supported)
  ```

- `remove_volumes`: (boolean, default: false) Whether to remove volumes when redeploying.
- `debug`: (boolean, default: false) Enable verbose debugging output.
- `service_timeout`: (integer, default: 30) Timeout for service operations.
- `service_host`: (string, default: "localhost") Host where the service will run.
Lifecycle hooks exist as part of a role. For example:

```
roles/gitea
└── tasks
    ├── post_deploy.yml
    ├── post_healthy.yml
    └── pre_deploy.yml

1 directory, 3 files
```
The Docker deployment will invoke these at different stages of the deploy. This allows users/different services to inject custom code before and after the deployment happens.
- `pre_deploy_tasks`: (string, optional) Path to tasks file to run before deployment
- `post_deploy_tasks`: (string, optional) Path to tasks file to run after deployment
- `post_healthy_tasks`: (string, optional) Path to tasks file to run after the service is healthy
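A hook is just an ordinary Ansible tasks file. A minimal pre-deploy sketch — the task content is illustrative, not pulled from the repo:

```yaml
# roles/gitea/tasks/pre_deploy.yml (illustrative)
- name: Ensure a backup directory exists before deploying
  ansible.builtin.file:
    path: "{{ service_data_dir }}/backups"  # reuses the service's data dir var
    state: directory
    mode: "0755"
```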
```yaml
# vars/postgres/main.yml
service_config:
  name: postgres

service_directories:
  - /var/lib/postgresql/data
  - /etc/postgresql/conf.d

config_templates:
  - src: postgresql.conf.j2
    dest: /etc/postgresql/postgresql.conf
    mode: "0644"

directory_mode: "0755"
service_host: "localhost"
service_timeout: 30
remove_volumes: false
```
You can set `rollback: true` on a playbook to stop and remove a container. Set `remove_volumes: true` to delete the associated volume(s).
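For example, a teardown could be configured by overriding these vars — a sketch, assuming the vars are read at the playbook level:

```yaml
# Illustrative teardown configuration
rollback: true
remove_volumes: true   # destructive: also deletes the service's volumes
```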
When `debug: true` is set, the template provides:
- Service configuration details
- Volume removal settings
- Container state information
- Health check status
- Container logs (if health checks fail)
- Always set `remove_volumes: false` for stateful services like databases
- Use `directory_mode` to ensure proper permissions
- Leverage lifecycle hooks for custom setup/teardown
- Enable debug mode during initial deployment and troubleshooting