Task/dnsmaq setup #54
@@ -0,0 +1,4 @@
---
- name: restart dnsmasq
  sudo: yes
  command: systemctl restart dnsmasq
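The handler shells out to systemctl; a more idiomatic variant would use Ansible's service module instead of a raw command. A minimal sketch of that alternative, not what this PR ships:

```yaml
# Sketch: restart dnsmasq through the service module instead of
# calling systemctl directly; behaviour is equivalent on systemd hosts.
- name: restart dnsmasq
  sudo: yes
  service:
    name: dnsmasq
    state: restarted
```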
@@ -0,0 +1,62 @@
---
- name: install dnsmasq and bind-utils
  sudo: yes
  yum:
    name: "{{ item }}"
    state: latest
  with_items:
    - dnsmasq
    - bind-utils
  when: inventory_hostname in groups[master_group_name]
  tags:
    - dnsmasq

- name: ensure dnsmasq.d directory exists
  sudo: yes
  file:
    path: /etc/dnsmasq.d
    state: directory
  when: inventory_hostname in groups[master_group_name]
  tags:
    - dnsmasq

- name: configure dnsmasq
  sudo: yes
  template:
    src: 01-kube-dns.conf.j2
    dest: /etc/dnsmasq.d/01-kube-dns.conf
    mode: 0755
  notify:
    - restart dnsmasq
  when: inventory_hostname in groups[master_group_name]
  tags:
    - dnsmasq

- name: enable dnsmasq
  sudo: yes
  service:
    name: dnsmasq
    state: started
    enabled: yes
  when: inventory_hostname in groups[master_group_name]
  tags:
    - dnsmasq

- name: update resolv.conf with new DNS setup
  sudo: yes
  template:
    src: resolv.conf.j2
    dest: /etc/resolv.conf
    mode: 0644
  tags:
    - dnsmasq

- name: disable resolv.conf modification by dhclient
  sudo: yes
  lineinfile:
    dest: "/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4.interface }}"
    state: present
    regexp: '^PEERDNS'
    line: 'PEERDNS="no"'
  tags:
    - dnsmasq
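Because every task above carries the dnsmasq tag, this part of the play can be re-run in isolation (for example with --tags dnsmasq). A hedged sketch of an optional follow-up check, not part of this PR, that would confirm the masters actually answer queries once kube-dns is up; the record name assumes the standard kubernetes.default service exists under the cluster domain:

```yaml
# Sketch only: verify that the local dnsmasq forwards the cluster domain.
# Relies on dig from bind-utils (installed above) and on kube-dns
# already serving records for {{ dns_domain }}.
- name: verify dnsmasq resolves the cluster domain
  command: "dig +short @127.0.0.1 kubernetes.default.svc.{{ dns_domain }}"
  register: kube_dns_lookup
  changed_when: false
  failed_when: kube_dns_lookup.stdout == ""
  when: inventory_hostname in groups[master_group_name]
  tags:
    - dnsmasq
```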
@@ -0,0 +1,13 @@
# Listen on all interfaces
interface=*

addn-hosts=/etc/hosts

bogus-priv

# Set upstream dns servers
server=8.8.8.8
server=8.8.4.4

# Forward k8s domain to kube-dns
server=/{{ dns_domain }}/{{ dns_server }}

Review comment (on the upstream servers): it would be reasonable to also add the default resolver provided by the cloud here.
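One hedged way to act on that suggestion would be to template an extra upstream entry ahead of the public servers, gated on a variable; cloud_resolver below is a hypothetical variable name, not something defined in this PR:

```
# Hypothetical addition to the dnsmasq template: prefer a resolver
# supplied by the cloud provider when the operator defines one.
{% if cloud_resolver is defined %}
server={{ cloud_resolver }}
{% endif %}
```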
@@ -0,0 +1,5 @@
; generated by ansible
search {{ [ 'default.svc.' + dns_domain, 'svc.' + dns_domain, dns_domain ] | join(' ') }}
{% for host in groups[master_group_name] %}
nameserver {{ hostvars[host]['ansible_default_ipv4']['address'] }}
{% endfor %}

Review comments: This duplicates the dnsmasq addresses on a second run. We need to union both lists and take only the unique values; I haven't found how to do that yet.
Would the approach taken within MI work?
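A hedged sketch of that union-and-dedup idea, assuming the second list of addresses is exposed in a variable (upstream_nameservers below is hypothetical), uses Ansible's union filter, which already drops repeated values:

```
; Sketch only: collect the master addresses, merge them with any other
; known resolvers, and emit each nameserver once even on repeated runs.
{% set master_ips = [] %}
{% for host in groups[master_group_name] %}
{% set _ = master_ips.append(hostvars[host]['ansible_default_ipv4']['address']) %}
{% endfor %}
{% for ns in master_ips | union(upstream_nameservers | default([])) %}
nameserver {{ ns }}
{% endfor %}
```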
@@ -0,0 +1,7 @@
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--kubeconfig={{ kube_config_dir }}/proxy.kubeconfig"
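Any extra kube-proxy flags would simply be appended to the same variable; an illustration only, since this PR sets nothing beyond the kubeconfig path (--proxy-mode is shown purely as an example of an existing kube-proxy flag):

```
# Illustrative only: additional flags are appended to KUBE_PROXY_ARGS.
KUBE_PROXY_ARGS="--kubeconfig={{ kube_config_dir }}/proxy.kubeconfig --proxy-mode=iptables"
```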
@@ -0,0 +1,18 @@
apiVersion: v1
kind: Config
current-context: proxy-to-{{ cluster_name }}
preferences: {}
contexts:
- context:
    cluster: {{ cluster_name }}
    user: proxy
  name: proxy-to-{{ cluster_name }}
clusters:
- cluster:
    certificate-authority: {{ kube_cert_dir }}/ca.crt
    server: https://{{ groups[master_group_name][0] }}:{{ kube_master_port }}
  name: {{ cluster_name }}
users:
- name: proxy
  user:
    token: {{ proxy_token }}
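The task that renders this kubeconfig template is not among the hunks shown here; a minimal sketch of what it could look like, where the template file name, destination path, and mode are assumptions rather than something this diff shows:

```yaml
# Sketch only: render the proxy kubeconfig; src, dest and mode are assumed.
- name: configure proxy kubeconfig
  sudo: yes
  template:
    src: proxy.kubeconfig.j2
    dest: "{{ kube_config_dir }}/proxy.kubeconfig"
    mode: 0600
```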
@@ -13,6 +13,7 @@
    - flannel
    - master
    - addons
    - dnsmasq

# provide the execution plane
- hosts: role=node
@@ -21,3 +22,4 @@
    - docker
    - flannel
    - minion
    - dnsmasq

Review thread (on adding the dnsmasq role to the plays):
Does the sequencing matter, i.e. does dnsmasq need to be installed and configured last, or could it simply be a dependency of the kubernetes role, since both master and minion require it?
The sequence is important: until kube-dns exists, the dnsmasq setup is only partially functional. If that is acceptable, we can make dnsmasq a dependency of kubernetes, or even of addons.
Leave it as-is for now. I see a role dependency tree coming.
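Picking up the role-dependency suggestion from the thread above, a hedged sketch of what that could look like in a meta/main.yml of the depending role; the placement and role names are assumptions drawn from the discussion, not part of this change:

```yaml
# Sketch only: declare dnsmasq as a role dependency so it is pulled in
# automatically wherever the kubernetes master/minion roles run,
# instead of being ordered explicitly in the playbook.
dependencies:
  - role: dnsmasq
```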
Review thread (dnsmasq on masters only vs. the whole cluster):
Only installing on the master: any reason for not putting it on the workers?
Once we have multi-master, it will be highly available.
The question was simpler: why not all hosts in the cluster?
Is there really a reason to put dnsmasq on every node? HA? Local availability?
I don't have a specific use case; I'm looking to understand whether there is a negative impact, or a reason why we would not run a component across the entire cluster versus a subset.
Here are my thoughts: since we separate workload from control, we can guarantee that the master nodes will not be overloaded, destroyed, and so on, so DNS runs on the masters without any pressure from the real workload. The other point is that all hosts are on a flat network, so in theory there should be no connectivity issue that prevents a node from reaching the DNS server on a master.
Sounds reasonable to me.