
Task/dnsmaq setup #54

Merged
merged 4 commits into from
Aug 21, 2015

Conversation

@altvnk (Contributor) commented Aug 19, 2015

Adds the dnsmasq role, as required by #43 and #31

# Set upstream DNS servers
server=8.8.8.8
server=8.8.4.4

Contributor (Author):

It would be reasonable here to also add the default resolver provided by the cloud.
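A sketch of what that might look like in the dnsmasq config; the `169.254.169.254` address and the `no-resolv` line are assumptions for illustration (many clouds expose their resolver on a link-local metadata address), not something taken from this PR:

```ini
# dnsmasq.conf sketch (illustrative; the cloud resolver address is an assumption)

# Cloud-provided default resolver, queried alongside the public upstreams
server=169.254.169.254

# Public upstream DNS servers (as in the PR)
server=8.8.8.8
server=8.8.4.4

# Ignore /etc/resolv.conf for upstreams; use only the servers listed above
no-resolv
```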

@kenjones-cisco (Contributor):

Any reason not to put the dnsmasq role within setup.yml, after common, instead of doing it separately in the other two task groupings?

with_items:
- dnsmasq
- bind-utils
when: inventory_hostname in groups[master_group_name]
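For context, the `with_items` snippet above would sit inside an install task roughly like the following sketch; the task name and the choice of the `yum` module are assumptions (not shown in the diff), on the guess that the hosts are RPM-based since `bind-utils` is a yum package name:

```yaml
# Sketch of the surrounding task (task name and module are assumptions)
- name: Install dnsmasq and bind-utils on master nodes
  yum:
    name: "{{ item }}"
    state: present
  with_items:
    - dnsmasq
    - bind-utils
  when: inventory_hostname in groups[master_group_name]
```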
Contributor:

This is only installing on the masters; any reason for not putting it on the workers?

Contributor (Author):

Once we have multi-master, it will be highly available.

Contributor:

The question was simpler: why not all hosts in the cluster?

Contributor (Author):

Is there really a reason to put dnsmasq on every node? HA? Local availability?

Contributor:

I don't have a specific use case; I'm looking to understand whether there is a negative impact, or a reason why we would not run a component across the entire cluster rather than on a subset.

Contributor (Author):

Here are my thoughts: since we separate workload from control, we can guarantee that the master nodes will not be overloaded, destroyed, and so on, so DNS will run on the masters without any pressure from the real workload.
The other thing is that all hosts are on a flat network, so in theory there should be no network connectivity issues preventing a node from reaching the DNS server on a master.

Contributor:

Sounds reasonable to me.

- Forget about DNS configuration provided by cloud
- Disable /etc/resolv.conf modification by dhclient
- Fix issue when kube-proxy is not started after reboot
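The dhclient change described in the commit above could be achieved, for example, with a `supersede` statement; this is only an illustrative sketch of one common mechanism, and the address shown is a hypothetical master IP, not taken from the PR:

```ini
# /etc/dhcp/dhclient.conf sketch (illustrative; the PR's actual change is not shown here)
# Stop DHCP leases from rewriting /etc/resolv.conf with the cloud defaults by
# superseding the offered DNS servers with the cluster's dnsmasq instance.
supersede domain-name-servers 10.0.0.10;  # hypothetical master node address
```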
@altvnk (Contributor, Author) commented Aug 21, 2015

Test deployment completed:

  • Clean deploy
  • Restart master, node
  • Stop/start master, node

It seems all components are running fine, and DNS resolution works in both directions: nodes => services, pods => nodes.

@kenjones-cisco (Contributor):

Either in this PR or in a new one, we need to look at refactoring kube-proxy within the kubernetes role, as the same logic now lives in both master and minion, including exact copies of the templates.

I had started it as part of the HA work, since kubelet and kube-proxy seemed to be required on the master nodes as well.

@altvnk (Contributor, Author) commented Aug 21, 2015

If this looks good, let's merge it. Then we can start the refactoring in a separate branch. I propose creating the branch in this repo so we can collaborate and push into it.

Kind regards,
Alexander


@@ -13,6 +13,7 @@
 - flannel
 - master
 - addons
+- dnsmasq
Contributor:

Does the sequencing of when to install and configure dnsmasq need to be last, or could dnsmasq simply be a dependency of the kubernetes role, since both master and minion require it?

Contributor (Author):

Sequence is important, because until we have kube-dns, the dnsmasq setup is only partially functional. If it's acceptable, we can make dnsmasq a dependency of kubernetes, or even of addons.
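If the role-dependency alternative discussed here were taken, it would amount to a one-line entry in the role's meta file; this is a sketch assuming the standard Ansible role layout, not something this PR actually added:

```yaml
# roles/kubernetes/meta/main.yml — sketch of the dependency alternative discussed
dependencies:
  - role: dnsmasq
```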


Contributor:

Leave it as-is for now. I see a role dependency tree coming.

@kenjones-cisco (Contributor):

Working through a separate branch sounds good.

If you can respond to the question on sequencing within the setup.yml, we are good to merge as-is.

@kenjones-cisco (Contributor):

Feel free to merge as I'm remote for a while this morning.

altvnk added a commit that referenced this pull request Aug 21, 2015
@altvnk altvnk merged commit 5eaba06 into master Aug 21, 2015
@altvnk altvnk deleted the task/dnsmaq_setup branch August 21, 2015 12:42