Task/dnsmaq setup #54

Merged: 4 commits, Aug 21, 2015
4 changes: 4 additions & 0 deletions roles/dnsmasq/handlers/main.yml
@@ -0,0 +1,4 @@
---
- name: restart dnsmasq
  sudo: yes
  command: systemctl restart dnsmasq
62 changes: 62 additions & 0 deletions roles/dnsmasq/tasks/main.yml
@@ -0,0 +1,62 @@
---
- name: install dnsmasq and bind-utils
  sudo: yes
  yum:
    name: "{{ item }}"
    state: latest
  with_items:
    - dnsmasq
    - bind-utils
  when: inventory_hostname in groups[master_group_name]
Contributor:
Only installing on master; any reason for not putting it on the workers?

Contributor Author:
Once we have multi-master, it will be highly available.

Contributor:
The question was simpler: why not all hosts in the cluster?

Contributor Author:
Is there really a reason to put dnsmasq on every node? HA? Local availability?

Contributor:
I don't have a specific use case; I'm looking to understand whether there is a negative impact or a reason why we would not run a component across the entire cluster vs. a subset.

Contributor Author:
Here are my thoughts: since we separate workload from control, we can guarantee that the master nodes won't be overloaded, destroyed, and so on, so DNS will run on the masters without any pressure from the real workload.
The other thing is that all hosts are on a flat network, so in theory there should be no network connectivity issues preventing a node from reaching the DNS server on a master node.

Contributor:
Sounds reasonable to me.

  tags:
    - dnsmasq

- name: ensure dnsmasq.d directory exists
  sudo: yes
  file:
    path: /etc/dnsmasq.d
    state: directory
  when: inventory_hostname in groups[master_group_name]
  tags:
    - dnsmasq

- name: configure dnsmasq
  sudo: yes
  template:
    src: 01-kube-dns.conf.j2
    dest: /etc/dnsmasq.d/01-kube-dns.conf
    mode: 755
  notify:
    - restart dnsmasq
  when: inventory_hostname in groups[master_group_name]
  tags:
    - dnsmasq

- name: enable dnsmasq
  sudo: yes
  service:
    name: dnsmasq
    state: started
    enabled: yes
  when: inventory_hostname in groups[master_group_name]
  tags:
    - dnsmasq

- name: update resolv.conf with new DNS setup
  sudo: yes
  template:
    src: resolv.conf.j2
    dest: /etc/resolv.conf
    mode: 644
  tags:
    - dnsmasq

- name: disable resolv.conf modification by dhclient
  sudo: yes
  lineinfile:
    dest: "/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4.interface }}"
    state: present
    regexp: '^PEERDNS'
    line: 'PEERDNS="no"'
  tags:
    - dnsmasq
13 changes: 13 additions & 0 deletions roles/dnsmasq/templates/01-kube-dns.conf.j2
@@ -0,0 +1,13 @@
#Listen on all interfaces
interface=*

addn-hosts=/etc/hosts

bogus-priv

#Set upstream dns servers
server=8.8.8.8
server=8.8.4.4

Contributor Author:
It would be reasonable to add the default resolver provided by the cloud here.
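For illustration only, a sketch of how that might look in this template, assuming a hypothetical cloud_dns_server variable that a play would populate from the cloud's metadata (nothing in this PR defines it):

{% if cloud_dns_server is defined %}
# Forward unmatched queries to the cloud-provided resolver (hypothetical variable)
server={{ cloud_dns_server }}
{% endif %}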

# Forward k8s domain to kube-dns
server=/{{ dns_domain }}/{{ dns_server }}
5 changes: 5 additions & 0 deletions roles/dnsmasq/templates/resolv.conf.j2
@@ -0,0 +1,5 @@
; generated by ansible
search {{ [ 'default.svc.' + dns_domain, 'svc.' + dns_domain, dns_domain ] | join(' ') }}
{% for host in groups[master_group_name] %}
nameserver {{ hostvars[host]['ansible_default_ipv4']['address'] }}
{% endfor %}
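For context, with illustrative values (dns_domain set to cluster.local and a single master at 10.1.0.10), this template renders roughly to:

; generated by ansible
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.1.0.10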
Contributor Author:
This duplicates the dnsmasq addresses on a second run. We need to union both lists and take unique values; I haven't found how to do that yet.
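One possible sketch using Ansible's union/unique set filters inside the template; existing_nameservers is a hypothetical variable holding the previously configured resolvers, not something defined in this PR:

{% set master_ips = [] %}
{% for host in groups[master_group_name] %}
{% set _ = master_ips.append(hostvars[host]['ansible_default_ipv4']['address']) %}
{% endfor %}
{% for ns in master_ips | union(existing_nameservers | default([])) | unique %}
nameserver {{ ns }}
{% endfor %}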

2 changes: 1 addition & 1 deletion roles/kubernetes/tasks/gen_tokens.yml
@@ -12,7 +12,7 @@
  environment:
    TOKEN_DIR: "{{ kube_token_dir }}"
  with_nested:
-   - [ "system:controller_manager", "system:scheduler", "system:kubectl" ]
+   - [ "system:controller_manager", "system:scheduler", "system:kubectl", 'system:proxy' ]
    - "{{ groups[master_group_name] }}"
  register: gentoken
  changed_when: "'Added' in gentoken.stdout"
7 changes: 7 additions & 0 deletions roles/master/handlers/main.yml
@@ -6,6 +6,7 @@
    - restart apiserver
    - restart controller-manager
    - restart scheduler
    - restart proxy

- name: restart apiserver
  sudo: yes
@@ -24,3 +25,9 @@
  service:
    name: kube-scheduler
    state: restarted

- name: restart proxy
  sudo: yes
  service:
    name: kube-proxy
    state: restarted
27 changes: 26 additions & 1 deletion roles/master/tasks/main.yml
@@ -2,7 +2,7 @@
- name: install kubernetes master
  sudo: yes
  yum:
-   pkg=kubernetes-master
+   pkg=kubernetes
    state=latest
    enablerepo=virt7-docker-common-candidate
  notify:
@@ -18,6 +18,7 @@
- "system:controller_manager"
- "system:scheduler"
- "system:kubectl"
- "system:proxy"
register: tokens
delegate_to: "{{ groups[master_group_name][0] }}"
tags:
@@ -28,6 +29,7 @@
    controller_manager_token: "{{ tokens.results[0].content|b64decode }}"
    scheduler_token: "{{ tokens.results[1].content|b64decode }}"
    kubectl_token: "{{ tokens.results[2].content|b64decode }}"
    proxy_token: "{{ tokens.results[3].content|b64decode }}"
  tags:
    - master

@@ -77,6 +79,20 @@
  tags:
    - master

- name: write the config files for proxy
  sudo: yes
  template: src=proxy.j2 dest={{ kube_config_dir }}/proxy
  notify:
    - restart daemons
  tags:
    - master

- name: write the kubecfg (auth) file for proxy
  sudo: yes
  template: src=proxy.kubeconfig.j2 dest={{ kube_config_dir }}/proxy.kubeconfig
  tags:
    - master

- name: populate users for basic auth in API
  sudo: yes
  lineinfile:
@@ -113,5 +129,14 @@
    name: kube-scheduler
    enabled: yes
    state: started
  tags:
    - master

- name: Enable kube-proxy
  sudo: yes
  service:
    name: kube-proxy
    enabled: yes
    state: started
  tags:
    - master
7 changes: 7 additions & 0 deletions roles/master/templates/proxy.j2
@@ -0,0 +1,7 @@
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--kubeconfig={{ kube_config_dir }}/proxy.kubeconfig"
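For context (not part of this diff): the kube-proxy systemd unit shipped with the CentOS kubernetes packaging typically sources this sysconfig file and expands $KUBE_PROXY_ARGS on its ExecStart line, roughly like the following excerpt (paths and exact option list are from the upstream packaging and may differ):

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_MASTER \
    $KUBE_PROXY_ARGS
Restart=on-failure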
18 changes: 18 additions & 0 deletions roles/master/templates/proxy.kubeconfig.j2
@@ -0,0 +1,18 @@
apiVersion: v1
kind: Config
current-context: proxy-to-{{ cluster_name }}
preferences: {}
contexts:
- context:
    cluster: {{ cluster_name }}
    user: proxy
  name: proxy-to-{{ cluster_name }}
clusters:
- cluster:
    certificate-authority: {{ kube_cert_dir }}/ca.crt
    server: https://{{ groups[master_group_name][0] }}:{{ kube_master_port }}
  name: {{ cluster_name }}
users:
- name: proxy
  user:
    token: {{ proxy_token }}
2 changes: 2 additions & 0 deletions setup.yml
@@ -13,6 +13,7 @@
    - flannel
    - master
    - addons
    - dnsmasq
Contributor:
Does the sequencing of when to install and configure dnsmasq need to be last, or could it just be a simple dependency of the kubernetes role, since both master and minion require it?

Contributor Author:
Sequence is important, because until we have kube-dns, the dnsmasq setup is only partially functional. If it's acceptable, we can make dnsmasq a dependency of kubernetes or even addons.


Contributor:
Leave it as-is for now. I see a role dependency tree coming.
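For reference, a role dependency of the kind discussed above is declared in the role's meta file; the following is a minimal sketch only (the file path and the direction of the dependency are illustrative, not part of this PR):

# roles/dnsmasq/meta/main.yml (illustrative)
---
dependencies:
  - { role: kubernetes }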


# provide the execution plane
- hosts: role=node
@@ -21,3 +22,4 @@
    - docker
    - flannel
    - minion
    - dnsmasq