enabling alert throttling by default #457

Draft
Prateeknandle wants to merge 2 commits into main from default-throttling
Conversation

Prateeknandle (Contributor)
No description provided.

@rootxrishabh (Member) left a comment:

Functionality is working as expected; the flag values show up in the KubeArmorConfig -

@rootxrishabh ➜ /workspaces/kubearmor-client (default-throttling) $ ./karmor install --maxAlertPerSec 20 --throttleSec 40
🛡       Installed helm release : kubearmor-operator
😄      KubeArmorConfig created
⌚️      This may take a couple of minutes                     
🥳      KubeArmor Snitch Deployed!             
🥳      KubeArmor Daemonset Deployed!             
🥳      Done Checking , ALL Services are running!             
⌚️      Execution Time : 1m36.518146521s 

@rootxrishabh ➜ /workspaces/kubearmor-client (default-throttling) $ kubectl get kubearmorconfig  kubearmorconfig-default -n kubearmor -o yaml
apiVersion: operator.kubearmor.com/v1
kind: KubeArmorConfig
metadata:
  creationTimestamp: "2024-11-15T09:00:40Z"
  generation: 1
  labels:
    app.kubernetes.io/created-by: kubearmoroperator
    app.kubernetes.io/instance: kubearmorconfig-default
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/name: kubearmorconfig
    app.kubernetes.io/part-of: kubearmoroperator
  name: kubearmorconfig-default
  namespace: kubearmor
  resourceVersion: "1216"
  uid: 2e064bc2-304c-4d52-ae37-f719946900f1
spec:
  alertThrottling: true
  kubeRbacProxyImage:
    imagePullPolicy: Always
  kubearmorControllerImage:
    image: kubearmor/kubearmor-controller:stable
    imagePullPolicy: Always
  kubearmorImage:
    image: kubearmor/kubearmor:stable
    imagePullPolicy: Always
  kubearmorInitImage:
    image: kubearmor/kubearmor-init:stable
    imagePullPolicy: Always
  kubearmorRelayImage:
    image: kubearmor/kubearmor-relay-server:stable
    imagePullPolicy: Always
  maxAlertPerSec: 20
  throttleSec: 40

However, the KubeArmor pod logs do not reflect the same values in the final configuration -

2024-11-15 09:02:16.876985      INFO    Final Configuration [{Cluster:default Host:kind-control-plane GRPC:32767 TLSEnabled:false TLSCertPath:/var/lib/kubearmor/tls TLSCertProvider:self LogPath:none SELinuxProfileDir: CRISocket: Visibility:process,file,network,capabilities HostVisibility:none Policy:true HostPolicy:false KVMAgent:false K8sEnv:true Debug:false DefaultFilePosture:audit DefaultNetworkPosture:audit DefaultCapabilitiesPosture:audit HostDefaultFilePosture:audit HostDefaultNetworkPosture:audit HostDefaultCapabilitiesPosture:audit CoverageTest:false ConfigUntrackedNs:[kube-system kubearmor] LsmOrder:[bpf apparmor selinux] BPFFsPath:/sys/fs/bpf EnforcerAlerts:true DefaultPostureLogs:true InitTimeout:60s StateAgent:false AlertThrottling:true MaxAlertPerSec:10 ThrottleSec:30 AnnotateResources:false}]

MaxAlertPerSec:10 and ThrottleSec:30 are incorrect values.
@DelusionalOptimist @rksharma95

@rksharma95 (Contributor) replied:

> MaxAlertPerSec:10 and ThrottleSec:30 are incorrect values.

@rootxrishabh this is expected: the configuration values in the logs reflect the values passed as command-line args, whereas the operator applies the KubeArmorConfig values when it receives an event from the k8s watcher.
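For illustration, here is a minimal sketch (not KubeArmor's actual implementation) of the pattern described above: a daemon watches ConfigMaps in the kubearmor namespace and applies throttling values on each update event, independently of the command-line args it started with. The ConfigMap name "kubearmor-config" and the keys "maxAlertPerSec"/"throttleSec" are assumptions made for this example.

// Minimal sketch, NOT KubeArmor's actual code: pick up operator-managed
// config at runtime by watching ConfigMap update events via client-go.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch only the kubearmor namespace; resync periodically as a safety net.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("kubearmor"))

	factory.Core().V1().ConfigMaps().Informer().AddEventHandler(
		cache.ResourceEventHandlerFuncs{
			UpdateFunc: func(_, newObj interface{}) {
				cm, ok := newObj.(*corev1.ConfigMap)
				// "kubearmor-config" is an assumed name for this sketch.
				if !ok || cm.Name != "kubearmor-config" {
					return
				}
				// Apply the operator-managed values here. The startup
				// "Final Configuration" log is printed once from the
				// command-line args and is not re-emitted on updates,
				// which matches the behavior observed in this PR.
				fmt.Printf("throttling update: maxAlertPerSec=%s throttleSec=%s\n",
					cm.Data["maxAlertPerSec"], cm.Data["throttleSec"])
			},
		})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep watching until the process is killed
}

Under this reading, the MaxAlertPerSec:10 and ThrottleSec:30 seen at startup are just the command-line defaults, and the operator-applied 20/40 from the KubeArmorConfig take effect afterwards without being re-logged.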

@Aryan-sharma11 force-pushed the default-throttling branch 20 times, most recently from 7f96748 to f0c005c on December 3, 2024 04:29
Signed-off-by: Aryan-sharma11 <[email protected]>