Automated IP Assignment - Design #12
Comments
Did you mean CloudProvider (OpenStack) integration? |
Sure, updated |
You are referring to providing a public IP address to the kubernetes services themselves? |
Indeed, let me know if you have other ideas |
Maybe we should try to adopt OpenStack Magnum. It's going to become the native approach in OpenStack for providing containers to cloud users. Here is a video from the latest summit: https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/magnum-containers-as-a-service-for-openstack |
If the skydns add-on is enabled, then we would have to give skydns a real, addressable network block. Without skydns, flannel (or whatever networking layer docker uses) would need a real, addressable network block. Services as defined by Kubernetes are good for creating a "virtual name" for a pod, but even using LoadBalancer mode for the service you still get a random port assignment. As such, I had to leverage the proxy-to-service approach (https://github.com/GoogleCloudPlatform/kubernetes/tree/v1.0.1/contrib/for-demos/proxy-to-service) to get a more constant port that I could then provide to the OpenStack LoadBalancer, so that I then had a public IP address.
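A minimal sketch of that pattern (the pod name, image, port, and target IP below are illustrative, not the actual contrib manifest): a socat pod binds a fixed hostPort on the node, and the OpenStack LoadBalancer pool then points at that node:port.

```yaml
# Sketch only: pin a constant node port in front of a service
apiVersion: v1
kind: Pod
metadata:
  name: service-proxy
spec:
  containers:
    - name: proxy
      image: alpine/socat              # illustrative socat image; contrib uses a gcr proxy image
      args:
        - TCP-LISTEN:3000,fork         # constant port the LoadBalancer member can target
        - TCP:10.254.0.50:3000         # illustrative service cluster IP:port
      ports:
        - containerPort: 3000
          hostPort: 3000               # exposed on the node's own address
```
|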
We have several engineers at Cisco developing Magnum whom we should sync with; this was actually suggested to me on Friday. Solving this will likely guide us to the networking solution we would like to use. We did Calico for MI, but let's be open to others if there is a good reason. |
Sounds good! |
With OpenStack we will use Neutron in any case. But we can choose plugins for Neutron such as OVS, Calico, OpenDaylight, etc. |
This keeps coming back up. Are there any creative solutions that do not integrate with OpenStack? We can keep our OpenStack integration efforts going, but they will take a while regardless. @ldejager suggested providing a sort of dynamic (DNS) registration service ourselves. For example, user X spins up our k8s solution; upon completion, terraform.py posts the information that would usually go into /etc/hosts to the registration service and gets back (and prints out) a unique resolvable DNS name for the instance, e.g. k8s-master.X.cs.co, that points to the IP address of the master, which they can then reach.
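Purely as an illustration of that idea (the field names, address, and domain are hypothetical; no such service exists yet), the exchange might look something like:

```yaml
# Hypothetical registration exchange posted by terraform.py (illustrative only)
request:
  tenant: X
  role: k8s-master
  address: 203.0.113.10       # public IP of the master reported after provisioning
response:
  fqdn: k8s-master.X.cs.co    # unique DNS name that resolves to 203.0.113.10
```
|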
I'm not sure, but this sounds like a list of goals:
1. DNS / service discovery, which may be satisfied via the k8s DNS add-on plus adding the cluster's DNS resolver into the client-side application. This one can be tested with the implementations already available.
2. Load balancing for services. Currently there's an effort to add transparent proxying ("use iptables for proxying instead of userspace", #3760), and there's a contrib option that enables a bare-metal load-balancing solution without a provider-specific integration (recently moved: service-loadbalancer). That approach claims it will support cross-cluster load balancing. Let's track the iptables work and test the load-balancing solution in our environment. An alternative implementation would be to write our own monitor that injects port forwarding to services, effectively using the default load balancing a normal service provides but forwarding the port from the public machine to the internal k8s service's ip:port. Either etcd or kube event discovery could be watched to manage the injection and creation of the port forwarding, and the default round-robin load balancing of the k8s service would still apply.
3. A flat address block. Flannel traffic is generally NATed, but we can invert the bridge to flatten the visibility of the containers. For this example, assume flannel manages a private subnet of 172.24.0.0/16 via the flannel & k8s cloud-init configuration for etcd2, sketched below.
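A minimal sketch of that cloud-config, assuming a CoreOS-style host (the etcd2 values are placeholders, not the project's actual settings):

```yaml
#cloud-config
# Sketch only: start etcd2 so flannel can store its 172.24.0.0/16 network config in it
coreos:
  etcd2:
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
```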
flannel drop-in for cloud-init [systemd]:
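For example, a systemd drop-in delivered via cloud-init that seeds flannel's network block in etcd before flanneld starts (a sketch of the usual CoreOS pattern; the project's exact unit may differ):

```yaml
#cloud-config
coreos:
  units:
    - name: flanneld.service
      command: start
      drop-ins:
        - name: 50-network-config.conf
          content: |
            [Service]
            # Seed the flannel subnet (172.24.0.0/16) into etcd before flanneld starts
            ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "172.24.0.0/16" }'
```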
These are directly routable via the host routing table.
So flannel fixes container-to-container and host-to-container connectivity via IP. A bridge without an overlay fabric might look like this: the bridge can be a member of a CIDR block shared by all cluster members, for now let's say a /16 address space. This CIDR block is transparent, without NAT, to all other cluster members, including all of the members' containers.
In cloud-init format, using 10.10.0.0/16 with a gateway of 10.10.0.1, this host assigned 10.10.0.2/24, and the docker bridge configured with 10.10.2.0/24, the configuration might look like the following:
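A sketch of that cloud-init (the interface and bridge names eth0/cbr0 are assumptions): the host joins the shared 10.10.0.0/16 block as 10.10.0.2 and owns a bridge carrying its container /24.

```yaml
#cloud-config
# Sketch only: systemd-networkd units delivered via cloud-init
coreos:
  units:
    - name: 00-eth0.network
      content: |
        [Match]
        Name=eth0

        [Network]
        Address=10.10.0.2/24
        Gateway=10.10.0.1
    - name: 10-cbr0.netdev
      content: |
        [NetDev]
        Name=cbr0
        Kind=bridge
    - name: 20-cbr0.network
      content: |
        [Match]
        Name=cbr0

        [Network]
        Address=10.10.2.1/24   # this host's container subnet lives behind cbr0
```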
Then configure docker to use this same subnet:
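For example, via a docker.service drop-in (a sketch; assumes the cbr0 bridge above and CoreOS's DOCKER_OPTS hook in its docker unit):

```yaml
#cloud-config
coreos:
  units:
    - name: docker.service
      command: start
      drop-ins:
        - name: 40-bridge.conf
          content: |
            [Service]
            # Use the pre-created cbr0 bridge instead of docker0, and skip NAT/masquerade
            Environment="DOCKER_OPTS=--bridge=cbr0 --iptables=false --ip-masq=false"
```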
|
Guestbook front end: the prior example with sfs uses a fixed, hardcoded IP address from the service pool to map the public address [which is assigned to the master node for this example] to the backend guestbook. The mapping of an ip:port pair is dynamically managed via environment variables, which introduces a creation-sequence dependency in k8s: the GUESTBOOK_PORT_3000_TCP_ADDR and GUESTBOOK_SERVICE_PORT environment variables aren't available until after the service object is created. kubectl create -f guestbook-fe.yaml
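A sketch of what such a guestbook-fe.yaml might contain (the pod name, labels, and image are assumptions, not the original file); the proxy target comes from the service environment variables, hence the creation-order dependency:

```yaml
# guestbook-fe.yaml (sketch, assumed content)
apiVersion: v1
kind: Pod
metadata:
  name: guestbook-fe
spec:
  nodeSelector:
    role: master                  # assumed label on the node holding the public IP
  containers:
    - name: proxy
      image: alpine/socat         # assumed socat image
      # These $(VAR) references only resolve if the guestbook service already
      # existed when this pod was created -- the sequence dependency above.
      args:
        - TCP-LISTEN:3000,fork
        - TCP:$(GUESTBOOK_PORT_3000_TCP_ADDR):$(GUESTBOOK_SERVICE_PORT)
      ports:
        - containerPort: 3000
          hostPort: 3000          # public port on the master node
```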
Alternatively, with kube-dns, to remove the sequence dependency from the process, the args section could be replaced by the DNS reference to the service.
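For instance (assuming the same socat-style proxy as in the sketch above), the args fragment of the container spec might become:

```yaml
      # No service env vars needed; kube-dns resolves the name at connect time
      args:
        - TCP-LISTEN:3000,fork
        - TCP:guestbook.default:3000
```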
Alex pointed out the proxy method from Google in the Kubernetes repo. From the log, it appears the gcr container uses socat.
Kubernetes example reverse proxy for DNS: the corresponding yaml for the replication controller, assuming again that the node with role=master has the public IP address, might look like the following for the guestbook. Notice that this would depend on kube-dns or similar functionality being active on the cluster, because it references the guestbook.default service DNS name.
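A sketch of that replication controller (names, labels, and image are assumptions; the gcr socat-based proxy image mentioned above could be substituted):

```yaml
# guestbook-proxy-rc.yaml (sketch, assumed names and image)
apiVersion: v1
kind: ReplicationController
metadata:
  name: guestbook-proxy
spec:
  replicas: 1
  selector:
    app: guestbook-proxy
  template:
    metadata:
      labels:
        app: guestbook-proxy
    spec:
      nodeSelector:
        role: master               # schedule onto the node that holds the public IP
      containers:
        - name: proxy
          image: alpine/socat      # assumed socat image
          args:
            - TCP-LISTEN:3000,fork
            - TCP:guestbook.default:3000   # requires kube-dns (or equivalent) on the cluster
          ports:
            - containerPort: 3000
              hostPort: 3000
```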
|
As a tenant, I can assign an IP automatically to services on CloudProvider (OpenStack) via cmd line so that services can easily be made external
Currently tenants must modify their /etc/hosts file or do other hacky workarounds to reach the guestbook example in Kubernetes when running outside of Google App Engine. It would be great to automate this and create a better user experience.