There are different ways to expose services in Kubernetes so that both internal and external endpoints can reach them. This configuration is critical from a security point of view, as a misconfiguration could give attackers access to services they shouldn't be able to reach.
Before enumerating the different ways K8s offers to expose services, note that if you can list namespaces, services and ingresses, you can find everything exposed to the public with:
kubectl get namespace -o custom-columns='NAME:.metadata.name' | grep -v NAME | while IFS='' read -r ns; do
echo "Namespace: $ns"
kubectl get service -n "$ns"
kubectl get ingress -n "$ns"
echo "=============================================="
echo ""
echo ""
done | grep -v "ClusterIP"
# Remove the last '| grep -v "ClusterIP"' to see also type ClusterIP
A ClusterIP service is the default Kubernetes service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access.
However, this can be accessed using the Kubernetes Proxy:
kubectl proxy --port=8080
Now, you can navigate through the Kubernetes API to access services using this scheme:
http://localhost:8080/api/v1/namespaces/<NAMESPACE>/services/<SERVICE-NAME>:<PORT-NAME>/proxy/
For example you could use the following URL:
http://localhost:8080/api/v1/namespaces/default/services/my-internal-service:http/proxy/
to access this service:
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
spec:
  selector:
    app: my-app
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
This method requires you to run `kubectl` as an authenticated user.
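If you already have code execution inside a pod in the cluster, you don't even need the proxy: ClusterIP services are reachable directly through the internal DNS name. A minimal sketch, assuming the example service above lives in the `default` namespace:

```bash
# From a shell inside any pod in the cluster (service name, namespace and port taken from the example above)
curl http://my-internal-service.default.svc.cluster.local:80/

# Alternatively, from a machine with kubectl access, forward the service to a local port
kubectl port-forward -n default svc/my-internal-service 8081:80
curl http://localhost:8081/
```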
List all ClusterIPs:
{% code overflow="wrap" %}
kubectl get services --all-namespaces -o=custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,TYPE:.spec.type,CLUSTER-IP:.spec.clusterIP,PORT(S):.spec.ports[*].port,TARGETPORT(S):.spec.ports[*].targetPort,SELECTOR:.spec.selector' | grep ClusterIP
{% endcode %}
When NodePort is used, a designated port is opened on every Node (the VMs) of the cluster, and traffic sent to that port on any node is routed to the service. This method is generally not recommended because it exposes a high, non-standard port on every node and only supports one service per port.
List all NodePorts:
{% code overflow="wrap" %}
kubectl get services --all-namespaces -o=custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,TYPE:.spec.type,CLUSTER-IP:.spec.clusterIP,PORT(S):.spec.ports[*].port,NODEPORT(S):.spec.ports[*].nodePort,TARGETPORT(S):.spec.ports[*].targetPort,SELECTOR:.spec.selector' | grep NodePort
{% endcode %}
An example of NodePort specification:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30036
      protocol: TCP
If you don't specify the `nodePort` in the YAML (this is the port that will be opened on every node), a port in the range 30000–32767 will be assigned automatically.
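From an attacker's perspective, a NodePort is reachable from anything that can route to a node. A quick sketch, reusing the `nodePort: 30036` from the example above (node IPs will obviously differ in your cluster):

```bash
# Get the internal/external IPs of the nodes
kubectl get nodes -o wide

# Hit the NodePort on any node; kube-proxy forwards it to the service
curl http://<NODE-IP>:30036/
```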
A service of type LoadBalancer exposes the Service externally using a cloud provider's load balancer. On GKE, this will spin up a Network Load Balancer that gives you a single IP address forwarding all traffic to your service. In AWS, it will launch an ELB.
You have to pay for a LoadBalancer per exposed service, which can be expensive.
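An example of a LoadBalancer specification (a sketch analogous to the other examples on this page; the service and app names are placeholders, and the external IP is allocated by the cloud provider once the service is created):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  selector:
    app: my-app
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
```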
List all LoadBalancers:
{% code overflow="wrap" %}
kubectl get services --all-namespaces -o=custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,TYPE:.spec.type,CLUSTER-IP:.spec.clusterIP,EXTERNAL-IP:.status.loadBalancer.ingress[*],PORT(S):.spec.ports[*].port,NODEPORT(S):.spec.ports[*].nodePort,TARGETPORT(S):.spec.ports[*].targetPort,SELECTOR:.spec.selector' | grep LoadBalancer
{% endcode %}
{% hint style="success" %}
External IPs are exposed by services of type LoadBalancer and are generally present when an external Cloud Provider load balancer is in use.
To find them, check for LoadBalancer services with a value in the `EXTERNAL-IP` field.
{% endhint %}
Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. `externalIPs` are not managed by Kubernetes and are the responsibility of the cluster administrator.
In the Service spec, `externalIPs` can be specified along with any of the `ServiceTypes`. In the example below, `my-service` can be accessed by clients on `80.11.12.10:80` (`externalIP:port`):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
  externalIPs:
    - 80.11.12.10
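To enumerate services that declare external IPs (the column is empty for most services, so they are filtered out), a sketch following the same custom-columns pattern used above:

```bash
kubectl get services --all-namespaces -o=custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,TYPE:.spec.type,EXTERNAL-IPS:.spec.externalIPs[*],PORT(S):.spec.ports[*].port' | grep -v "<none>"
```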
From the docs: Services of type ExternalName map a Service to a DNS name, not to a typical selector such as `my-service` or `cassandra`. You specify these Services with the `spec.externalName` parameter.
This Service definition, for example, maps the `my-service` Service in the `prod` namespace to `my.database.example.com`:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
When looking up the host `my-service.prod.svc.cluster.local`, the cluster DNS Service returns a `CNAME` record with the value `my.database.example.com`. Accessing `my-service` works in the same way as other Services, but with the crucial difference that redirection happens at the DNS level rather than via proxying or forwarding.
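You can verify the mapping from inside the cluster with a simple DNS lookup (a sketch using a throwaway busybox pod; the service and namespace names come from the example above, the pod name is arbitrary):

```bash
kubectl run -n prod dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup my-service.prod.svc.cluster.local
# The answer should be a CNAME pointing to my.database.example.com
```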
List all ExternalNames:
{% code overflow="wrap" %}
kubectl get services --all-namespaces | grep ExternalName
{% endcode %}
Unlike all the above examples, Ingress is NOT a type of service. Instead, it sits in front of multiple services and acts as a “smart router” or entry point into your cluster.
You can do a lot of different things with an Ingress, and there are many types of Ingress controllers that have different capabilities.
The default GKE ingress controller will spin up an HTTP(S) Load Balancer for you. This lets you do both path-based and subdomain-based routing to backend services. For example, you can send everything on foo.yourdomain.com to the foo service, and everything under the yourdomain.com/bar/ path to the bar service.
The YAML for an Ingress object on GKE with an L7 HTTP Load Balancer (using the current networking.k8s.io/v1 API) might look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  defaultBackend:
    service:
      name: other
      port:
        number: 8080
  rules:
    - host: foo.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foo
                port:
                  number: 8080
    - host: mydomain.com
      http:
        paths:
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: bar
                port:
                  number: 8080
List all the ingresses:
{% code overflow="wrap" %}
kubectl get ingresses --all-namespaces -o=custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,RULES:.spec.rules[*],STATUS:.status'
{% endcode %}
Although in this case it's often easier to read each one's full YAML:
kubectl get ingresses --all-namespaces -o=yaml
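To quickly extract just the externally reachable hostnames defined across all Ingresses (handy for building a target list), a sketch using jsonpath:

```bash
kubectl get ingresses --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.rules[*].host}{"\n"}{end}'
```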
- https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
- https://kubernetes.io/docs/concepts/services-networking/service/