K8smeta does not populate its events even though it is configured correctly and there are no errors #514
Hi @fjellvannet, unfortunately, I'm not able to reproduce your issue. It works on my side. I installed Falco:
I added the custom rule as you did. And here is the output of Falco:
I just reinstalled everything in the same way and still have the error. Did you try on microk8s specifically? I don't know if it is part of the problem. To give falco access to the containerd socket (which microk8s puts in a snap directory apart from the default containerd socket), I had to create empty placeholder files and bind-mount the microk8s socket over them (a sketch follows below). That makes the microk8s containerd socket accessible for falco in the default location. This hack fixes the k8s.pod.name field, but not k8smeta.pod.name. What kind of cluster did you use to test?
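The exact files and commands were lost when this page was rendered; based on the workaround described in the bug report further down (an empty placeholder file plus a bind mount of the snap socket), a rough sketch would be:

# Hedged sketch of the socket workaround described later in this issue:
# expose microk8s' containerd socket at the path falco expects by default.
sudo mkdir -p /run/containerd
sudo touch /run/containerd/containerd.sock        # empty placeholder file
sudo mount --bind /var/snap/microk8s/common/run/containerd.sock \
  /run/containerd/containerd.sock                 # bind the real socket over it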
Hey @fjellvannet, I used a kubeadm cluster. Can you share the instructions on how to create your environment?
Start with a vanilla Ubuntu 24.04 server amd64 machine. Then:
- install the microk8s snap,
- add your user to the microk8s group to steer microk8s without sudo,
- install / set up kubeconfig for kubectl / helm etc.,
- install the kube-prometheus-stack, as grafana constantly triggers my custom rule,
- create customRules (the rules file follows the setup sketch below):
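The exact commands for these steps were not preserved in this comment; a hedged sketch, assuming the stock snap name (microk8s) and the prometheus-community kube-prometheus-stack Helm chart, could look like this:

# Hedged sketch of the setup steps above; exact commands were not preserved.
sudo snap install microk8s --classic
sudo usermod -a -G microk8s "$USER" && newgrp microk8s     # steer microk8s without sudo
mkdir -p ~/.kube && microk8s config > ~/.kube/config       # kubeconfig for kubectl / helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace                # grafana here triggers the custom rule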
rules-k8smeta.yaml: |-
  - macro: k8s_containers
    condition: >
      (container.image.repository in (gcr.io/google_containers/hyperkube-amd64,
      gcr.io/google_containers/kube2sky,
      docker.io/sysdig/sysdig, sysdig/sysdig,
      fluent/fluentd-kubernetes-daemonset, prom/prometheus,
      falco_containers,
      falco_no_driver_containers,
      ibm_cloud_containers,
      velero/velero,
      quay.io/jetstack/cert-manager-cainjector, weaveworks/kured,
      quay.io/prometheus-operator/prometheus-operator,
      registry.k8s.io/ingress-nginx/kube-webhook-certgen, quay.io/spotahome/redis-operator,
      registry.opensource.zalan.do/acid/postgres-operator, registry.opensource.zalan.do/acid/postgres-operator-ui,
      rabbitmqoperator/cluster-operator, quay.io/kubecost1/kubecost-cost-model,
      docker.io/bitnami/prometheus, docker.io/bitnami/kube-state-metrics, mcr.microsoft.com/oss/azure/aad-pod-identity/nmi)
      or (k8s.ns.name = "kube-system"))
  - macro: never_true
    condition: (evt.num=0)
  - macro: container
    condition: (container.id != host)
  - macro: k8s_api_server
    condition: (fd.sip.name="kubernetes.default.svc.cluster.local")
  - macro: user_known_contact_k8s_api_server_activities
    condition: (never_true)
  - rule: Custom Contact K8S API Server From Container
    desc: >
      Detect attempts to communicate with the K8S API Server from a container by non-profiled users. Kubernetes APIs play a
      pivotal role in configuring the cluster management lifecycle. Detecting potential unauthorized access to the API server
      is of utmost importance. Audit your complete infrastructure and pinpoint any potential machines from which the API server
      might be accessible based on your network layout. If Falco can't operate on all these machines, consider analyzing the
      Kubernetes audit logs (typically drained from control nodes, and Falco offers a k8saudit plugin) as an additional data
      source for detections within the control plane.
    condition: >
      evt.type=connect and evt.dir=<
      and (fd.typechar=4 or fd.typechar=6)
      and container
      and k8s_api_server
      and not k8s_containers
      and not user_known_contact_k8s_api_server_activities
    output: Custom Unexpected connection to K8s API Server from container (connection=%fd.name lport=%fd.lport rport=%fd.rport fd_type=%fd.type fd_proto=%fd.l4proto evt_type=%evt.type user=%user.name user_uid=%user.uid user_loginuid=%user.loginuid process=%proc.name proc_exepath=%proc.exepath parent=%proc.pname command=%proc.cmdline k8s_podname=%k8smeta.pod.name orig_podname=%k8s.pod.name terminal=%proc.tty %container.info)
    priority: NOTICE
    tags: [maturity_stable, container, network, k8s, mitre_discovery, T1565]

Deploy falco using Helm and make sure the custom rule is evaluated before the default rule:

helm upgrade --install falco falcosecurity/falco \
--namespace falco \
--create-namespace \
--set collectors.kubernetes.enabled=true \
--set falco.rules_file="{/etc/falco/rules.d}" \
-f falco-rules-k8smeta.yaml

Create the following bash script that adjusts the path of the volume that mounts the containerd socket into falco. Microk8s uses its own containerd instance; the socket is stored in /var/snap/microk8s/common/run/containerd.sock:

#!/bin/bash
# Replace <name> with your DaemonSet's name
DAEMONSET_NAME="falco"
# Find the index of the 'containerd-socket' volume
INDEX=$(kubectl -n falco get daemonset "$DAEMONSET_NAME" -o json | jq '.spec.template.spec.volumes | map(.name) | index("containerd-socket")')
# Check if the volume was found
if [ "$INDEX" = "null" ]; then
echo "Volume 'containerd-socket' not found."
exit 1
fi
# Construct the JSON Patch
PATCH="[{\"op\": \"replace\", \"path\": \"/spec/template/spec/volumes/$INDEX/hostPath/path\", \"value\": \"/var/snap/microk8s/common/run\"}]"
# Apply the patch
kubectl -n falco patch daemonset "$DAEMONSET_NAME" --type='json' -p="$PATCH"

When the daemonset has updated and the pod has restarted, enjoy:
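A quick way to verify the patch landed (a hypothetical check, not part of the original steps):

# Confirm the containerd-socket volume now points at the microk8s path,
# then wait for the DaemonSet rollout to finish.
kubectl -n falco get daemonset falco \
  -o jsonpath='{.spec.template.spec.volumes[?(@.name=="containerd-socket")].hostPath.path}'
kubectl -n falco rollout status daemonset/falco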
In the k8smeta field k8s_podname which I added, the value is N/A. If I have set up something wrong here or forgotten something according to the documentation, please tell me :)
Hi @fjellvannet, it turns out that you are right. The plugin does not populate fields for containers that existed before Falco was deployed. We are working on a fix and will release a new plugin version in the coming days. Thanks for your effort in helping us discover the bug.
Hey @fjellvannet, the latest Helm chart of Falco includes the fix. Could you please try it out?
@fjellvannet, can you share the falco logs? The plugin should have scanned /proc for existing processes.
I ran the following commands to produce these logs:

helm upgrade --install falco falcosecurity/falco -f falco-rules-k8smeta.yaml -f falco-values.yaml -n falco --create-namespace
stern -n falco ".*" | tee falco-logs.txt
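For anyone without stern, roughly the same logs can be gathered with plain kubectl (assuming the chart's standard app.kubernetes.io/name=falco label; this is not from the original post):

# Tail logs from all falco pods and containers, keeping a local copy.
kubectl -n falco logs -l app.kubernetes.io/name=falco --all-containers --prefix --tail=500 | tee falco-logs.txt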
Is this still an issue? 🤔
I cannot check this until next week as I am on vacation with an unstable internet connection. I can check next week.
Hey @fjellvannet,
Describe the bug
I set up k8smeta and k8smetacollector according to this command (line 273 in Falco's official Helm chart):
I create and add a custom syscall rule that often triggers in my deployment, and use a k8smeta field, k8smeta.pod.name to be precise. I would expect this field to be populated, but it returns N/A. Sorry for this bug report being very long, I just included a lot of context :)

How to reproduce it
Deploy falco with the following command using its Helm chart:
helm upgrade --install falco falcosecurity/falco --namespace falco --create-namespace -f falco-values.yaml
falco-values.yaml has the following contents:
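The file's contents were lost when this page was rendered; a hedged reconstruction, pieced together from the Helm flags and customRules shown elsewhere in this thread (not necessarily the author's exact file), might look like this:

# Hedged reconstruction of falco-values.yaml based on the flags used in this thread.
cat > falco-values.yaml <<'EOF'
collectors:
  kubernetes:
    enabled: true            # enables the k8smeta plugin and k8s-metacollector
customRules:
  rules-k8smeta.yaml: |-
    # ... the "Custom Contact K8S API Server From Container" rule and its
    # macros, exactly as listed earlier in this thread ...
EOF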
The included custom rule is a copy of the Custom Contact K8S API Server From Container rule with all its dependencies. The only modification is that two new fields, k8s_podname=%k8smeta.pod.name and orig_podname=%k8s.pod.name, are added to the output. The orig_podname field is populated - it shows the same value as k8s.pod.name in the output. However, k8s_podname remains N/A, and I would expect this field to be populated if the same value is available in k8s.pod.name, which is said to be kept alive only for backwards compatibility purposes (line 250 in Falco's official Helm chart).

Expected behaviour
I would expect that if k8s.pod.name is populated with a value, k8smeta.pod.name should also be populated.

Screenshots
Checking out the events in the UI, we see that the k8s_podname field remains N/A while orig_podname gets the same value as k8s.pod.name.

falco-k8s-metacollector is running in the same namespace as the falco pods and the UI.
Logs from the artifact-install-container show that k8smeta is indeed installed correctly.
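Those logs were shown as a screenshot; a hedged way to pull them directly (assuming the chart's falcoctl-artifact-install init container name, which is not stated in the original report) would be:

# Check the artifact-install init container logs of one falco pod.
kubectl -n falco logs daemonset/falco -c falcoctl-artifact-install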
Environment
Microk8s uses its own containerd instance; its socket is stored in /var/snap/microk8s/common/run/containerd.sock. In that position falco does not find it, so I created an empty file at /run/containerd/containerd.sock and then used sudo mount --bind /var/snap/microk8s/common/run/containerd.sock /run/containerd/containerd.sock to make it accessible for falco. That seems to work: before this change the pod and container names were N/A in the UI as well, and now they are populated, so falco seems to have access to the containerd socket at least. Changing the deployment volume containerd-socket to mount /var/snap/microk8s/common/run into falco instead of /run/containerd also works.

Kernel: Linux microk8s-1 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Installation method: helm upgrade --install falco falcosecurity/falco --namespace falco --create-namespace -f falco-values.yaml, see above.

Additional context
/etc/falco/falco.yaml pulled from one of the falco pods:

As far as I can see, k8smeta and k8s-metacollector are configured correctly here in the config as well. I experimented with changing the port or hostname of the metacollector, and then I got errors; the same happened when I turned on SSL without fixing the certificates. This screenshot from the falco container log also confirms that k8smeta is running - it says that it received at least one event from k8s-metacollector, indicating that their connection should be OK.
Also here it looks as if the k8smeta plugin is healthy. When I removed collectors.kubernetes.enabled=true, falco would not start any longer, claiming that I used an invalid value in my rule in rules-k8smeta.yaml, the invalid value being k8smeta.pod.name, which is another indication of k8smeta likely being set up correctly.