# terraform.sample.yml (sample playbook from mantl/mantl)
---
# CHECK SECURITY - when customizing this playbook, leave this check in. If you
# take it out and forget to specify security.yml, security could be turned off
# on components in your cluster!
- hosts: localhost
gather_facts: no
tasks:
- name: check for security
when: security_enabled is not defined or not security_enabled
fail:
msg: |
Security is not enabled. Please run `security-setup` in the root
directory and re-run this playbook with the `--extra-vars`/`-e` option
pointing to your `security.yml` (e.g., `-e @security.yml`)
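# For example (the inventory path is illustrative; use whatever matches your
# setup), the security setup and re-run would look roughly like:
#
#   ./security-setup
#   ansible-playbook -i <your-inventory> terraform.sample.yml -e @security.yml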
# BASICS - we need every node in the cluster to have common software running to
# increase stability and enable service discovery. You can look at the
# documentation for each of these components in their README file in the
# `roles/` directory, or by checking the online documentation at
# microservices-infrastructure.readthedocs.org.
- hosts: all
vars:
    # consul_acl_datacenter should be set to the datacenter that is
    # authoritative for Consul ACLs. If you only have one datacenter, set it to
    # that datacenter or remove this variable.
# consul_acl_datacenter: your_primary_datacenter
    # consul_dns_domain is repeated across these plays so that every component
    # knows which DNS domain to use for service discovery.
consul_dns_domain: consul
consul_servers_group: role=control
roles:
- common
- collectd
- logrotate
- consul-template
- docker
- logstash
- nginx
- consul
- etcd
- dnsmasq
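# Once these roles are applied, service discovery can be sanity-checked from any
# node by querying Consul's DNS interface directly (8600 is Consul's default DNS
# port) or, assuming the dnsmasq role forwards the `consul` domain configured
# above, by plain hostname:
#
#   dig @127.0.0.1 -p 8600 consul.service.consul
#   dig consul.service.consul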
# ROLES - after provisioning the software that's common to all hosts, we
# provision the specialized hosts. This configuration has two major groups:
# control nodes and worker nodes. We provision the worker nodes first so that
# we don't create any race conditions. Such a race could occur in the Mesos
# components: if there were no worker nodes when the control software was
# scheduled, the deployment process would hang.
#
# The worker role itself has a minimal configuration, as it's designed mainly to
# run software that the Mesos leader schedules. It also forwards traffic to
# globally known ports configured through Marathon (illustrated after this play).
- hosts: role=worker
# role=worker hosts are a subset of "all". Since we already gathered facts on
# all servers, we can skip it here to speed up the deployment.
gather_facts: no
vars:
consul_dns_domain: consul
mesos_mode: follower
roles:
- calico
- mesos
- haproxy
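# As an illustration of the port forwarding mentioned above (the app id and
# service port are hypothetical and depend on your Marathon configuration): an
# application submitted to Marathon with a definition along the lines of
#
#   { "id": "/my-app", "ports": [10001], ... }
#
# would be reachable on port 10001 of any worker node, with haproxy routing each
# request to wherever Mesos placed the task.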
# The control nodes are necessarily more complex than the worker nodes: they run
# the ZooKeeper, Mesos, and Marathon leaders. In addition, they run Vault to
# manage secrets in the cluster. These servers do not run applications
# themselves; they only schedule work. That said, there should be at least 3 of
# them (and always an odd number) so that ZooKeeper can get and keep a quorum.
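# (For reference: a ZooKeeper ensemble of n servers keeps its quorum as long as
# a majority, floor(n/2) + 1, of them is up, so 3 servers tolerate 1 failure and
# 5 tolerate 2, while a 4th server adds no fault tolerance beyond what 3 give.)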
- hosts: role=control
gather_facts: no
vars:
consul_dns_domain: consul
consul_servers_group: role=control
mesos_leaders_group: role=control
mesos_mode: leader
zookeeper_server_group: role=control
roles:
- vault
- zookeeper
- mesos
- marathon
- chronos
# GlusterFS has to be provisioned in reverse order from Mesos: servers then
# clients.
- hosts: role=control
gather_facts: no
vars:
consul_dns_domain: consul
glusterfs_mode: server
roles:
- glusterfs
- hosts: role=worker
vars:
consul_dns_domain: consul
glusterfs_mode: client
roles:
- glusterfs
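# With the server and client roles applied, a Gluster volume exported by the
# control nodes can be mounted on the workers. The volume name and mount point
# below are purely illustrative; the actual values come from the glusterfs
# role's defaults or your overrides:
#
#   mount -t glusterfs <control-node>:/<volume-name> /mnt/<volume-name>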