Flag to create network policies to enable connectivity between components #74
Comments
This certainly sounds reasonable, although catering for the myriad ways it might be configured by a specific user will be tricky. We could look at whitelisting the allowed traffic at a fairly granular level, though it's frustrating that Services cannot yet be selected by network policies: https://kubernetes.io/docs/concepts/services-networking/network-policies/#what-you-can-t-do-with-network-policies-at-least-not-yet

Presumably you currently have a policy along these lines? https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/03-deny-all-non-whitelisted-traffic-in-the-namespace.md It would be helpful to see the specific restrictions you have, just to be clear.

I think in this case it would be something similar to what we do with creating the Secrets: the Helm chart could generate some default policies, but this would be opt-in for those that need them. Others may want specific policies or have different set-ups, so we need to ensure we don't break anything at that level, which is tricky to debug.
Correct, our default network policy allows all outbound traffic but no inbound traffic. Inbound traffic must be explicitly allowed.
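As a rough illustration, a default policy with that behaviour (deny all ingress, leave egress open) might look something like the sketch below; the name and namespace are placeholders, not the actual policy in use:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # placeholder name
  namespace: couchbase         # placeholder namespace
spec:
  # Select every pod in the namespace.
  podSelector: {}
  # Only Ingress is restricted, so all outbound (egress) traffic stays allowed.
  policyTypes:
    - Ingress
  # No ingress rules are listed, so all inbound traffic is denied
  # unless another policy explicitly allows it.
```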
From what I understand there are a few rules required. I'm probably missing some, because I'm having a hard time finding in the documentation exactly what needs to be exposed and to which pods.
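To make the discussion concrete, the kind of rules in question might look roughly like the sketch below, which lets the Couchbase Server pods talk to each other and accepts traffic from the operator; the label values (`app=couchbase`, `app=couchbase-operator`) and the decision not to restrict ports are assumptions for illustration only:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: couchbase-allow-cluster-traffic   # illustrative name
spec:
  # Applies to the Couchbase Server pods (label value is an assumption).
  podSelector:
    matchLabels:
      app: couchbase
  policyTypes:
    - Ingress
  ingress:
    # Allow pod-to-pod traffic within the Couchbase cluster.
    - from:
        - podSelector:
            matchLabels:
              app: couchbase
    # Allow the operator to reach the server pods (label value is an assumption).
    - from:
        - podSelector:
            matchLabels:
              app: couchbase-operator
```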
Yeah, the port definitions are all handled by Couchbase Server and can vary with whatever Couchbase Server version you're running, so typically we just link out to the documentation there, e.g. https://docs.couchbase.com/server/current/install/install-ports.html

This sounds similar to some of the configuration we have to support for Istio and other service meshes. There are additional complications with some of the networking modes, as well as with XDCR or the SDKs.

I think this is a general issue for the operator rather than a Helm-specific deployment issue, as we'll likely have others wanting to do the same, so I'm going to raise a JIRA on getting it documented with an example; that could then be reused for the Helm deployment. In the meantime I'll try to knock up a working example for you locally with KIND and Helm as soon as I can, to make sure you're not blocked.
I have a working example here: https://github.com/patrick-stephens/couchbase-gitops/blob/96254f590bac86b2a0165e0a69b7e5cb1e77d8f1/network-policy-test.sh#L81-L152

Note I am not restricting ports at all, purely working at the pod level. I've also split the DAC out into a separate namespace, as per best practice for a cluster-wide DAC, so any rules there would obviously need to allow traffic between the DAC and the API.
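Under a default-deny policy in that separate namespace, a rule of that sort could look roughly like the sketch below; the namespace name, the webhook port (8443) and the label value are assumptions, and because the Kubernetes API server is not a pod, the ingress side is left open on the webhook port rather than pod-selected:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dac-webhook        # illustrative name
  namespace: couchbase-dac       # assumed DAC namespace
spec:
  # Applies to the admission controller pods (label value is an assumption).
  podSelector:
    matchLabels:
      app.kubernetes.io/name: couchbase-admission-controller
  policyTypes:
    - Ingress
  ingress:
    # The API server cannot be matched with a podSelector, so ingress to the
    # webhook port is allowed from any source instead.
    - ports:
        - protocol: TCP
          port: 8443             # assumed webhook port
```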
In a production-type environment, which may have a default deny policy in a given namespace, this Helm chart doesn't provide the capability to add network policies between components (unless I'm missing something).

Is it possible to have a flag added that creates network policies between the components, limited with PodSelectors such as the following:
PodSelector: app=couchbase
PodSelector: app.kubernetes.io/name=couchbase-admission-controller
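For illustration, a chart-generated policy based on those selectors might look something like this sketch (whether it would be split per component, which namespaces are involved, and the exact label keys would depend on how the chart deploys things):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: couchbase-allow-components   # illustrative name
spec:
  # Applies to the Couchbase pods created by the chart.
  podSelector:
    matchLabels:
      app: couchbase
  policyTypes:
    - Ingress
  ingress:
    # Allow traffic between the Couchbase pods themselves.
    - from:
        - podSelector:
            matchLabels:
              app: couchbase
    # Allow traffic from the admission controller pods.
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: couchbase-admission-controller
```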