
node-problem-detector not able to detect kernel log events for a Kind cluster #859

Open
pravarag opened this issue Feb 7, 2024 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@pravarag

pravarag commented Feb 7, 2024

I've been trying to run node-problem-detector on a local kind cluster with 3 nodes (1 master, 2 workers). After installing it as a DaemonSet, I can see three pods running, one on each of the three nodes, including the master. However, when I inject a kernel message as a test, I don't see any events being generated, either in the npd pod logs or in the node's description.
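
For context, a common way to inject such a test kernel message, assuming the default kernel-monitor.json rules are in place, is to write a line matching one of its patterns into /dev/kmsg on the node; the exact command used here may have differed. On a kind cluster the "node" is a privileged container sharing the host kernel, so the message ends up in the shared kernel ring buffer. An illustrative sketch (kind-worker is the default name of a kind worker node container, and this assumes /dev/kmsg is writable inside it):

# Illustrative only: inject a line that matches the default KernelOops pattern
# from kernel-monitor.json into the (shared) kernel ring buffer.
docker exec kind-worker sh -c \
  'echo "kernel: BUG: unable to handle kernel NULL pointer dereference at TESTING" > /dev/kmsg'

If the kernel monitor picks it up, a corresponding event should show up in the output of kubectl describe node <node-name>.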

@wangzhen127
Member

You may need to tune your daemonset yaml

@BenTheElder
Member

Note: kind clusters are sharing the host kernel with sketchy isolation.

What's the use case for NPD-on-kind?

@cmontemuino

> Note: kind clusters are sharing the host kernel with sketchy isolation.
>
> What's the use case for NPD-on-kind?

It's local testing and CI in my case.

@BenTheElder
Member

For testing NPD, a fake or a remote VM should be used; we shouldn't introduce issues into the CI host's kernel, and if we don't, then we won't see any?

For local development, you could use a VM, local-up-cluster.sh, or kubeadm init.

kind generally attempts to create a container that appears like a node, but it's on a shared kernel, in a container, which kubelet doesn't clearly support.

In general, kind works best for testing API interactions and node-to-node interactions, but not kernel / host / resource limits, for now unfortunately.

@cmontemuino

Just in case it helps other people, the following configuration works pretty well with my KinD installation:

--config.system-log-monitor=/config/kernel-monitor.json,/config/systemd-monitor.json \
--config.custom-plugin-monitor=/config/iptables-mode-monitor.json,/config/network-problem-monitor.json,/config/kernel-monitor-counter.json,/config/systemd-monitor-counter.json

That helped me to quickly understand what's going on behind the scenes, and then deploy node-problem-detector in our clusters.
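
For anyone copying this: those are command-line flags for the node-problem-detector binary and normally go into the DaemonSet container's args. A rough sketch of the full invocation, assuming the config files are mounted at /config as in the default manifests (the binary path and --logtostderr below are the usual defaults, adjust to your deployment):

/node-problem-detector --logtostderr \
  --config.system-log-monitor=/config/kernel-monitor.json,/config/systemd-monitor.json \
  --config.custom-plugin-monitor=/config/iptables-mode-monitor.json,/config/network-problem-monitor.json,/config/kernel-monitor-counter.json,/config/systemd-monitor-counter.json

The resulting node conditions and events can then be checked with kubectl describe node <node-name>.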

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 17, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 16, 2025
@proto-h

proto-h commented Jan 20, 2025

There is a bug in the handling of kmsg timestamps around this check: https://github.com/kubernetes/node-problem-detector/blob/v0.8.20/pkg/systemlogmonitor/logwatchers/kmsg/log_watcher_linux.go#L111. The check simply does not work correctly, probably due to a bug in the euank/go-kmsg-parser.
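
For context on what the time handling involves (an illustration, not the actual NPD code): raw /dev/kmsg records carry a monotonic timestamp, in microseconds since boot, as the third field of their prefix, and the parser has to translate that into wall-clock time before the watcher can decide whether a message falls within its configured lookback window. The raw format can be inspected on any Linux host:

# Each /dev/kmsg record looks like: <priority>,<sequence>,<usec since boot>,<flags>;<message>
sudo head -n 3 /dev/kmsg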
