The last_error_event from machines should be cleared from the issue history after some time (about 6 days).
While deploying metal-stack on the new supermicro nodes, we encountered the following problem: machines that were already allocated (integrated into a Kubernetes cluster) still had a last_error_event of: "unexpectedly received in state pxe booting".
The metal-api-liveliness is running in the metal-control-plane namespace. The logs do not show any errors for machines.
{... "msg":"machine liveliness was requested"}
{... "msg":"machine liveliness evaluated","alive":x,"dead":0,"unknown":0,"errors":0}
However, listing the machines with metalctl machine ls returns some allocated machines with a ⭕ crashloop issue.
Last event error and crashloop do not depend on each other. To me, this issue sounds more like it is about resetting the crashloop field, which should actually happen as soon as a machine reaches the phoned home state?
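The reset behavior described in the comment could be sketched like this. The state machine is heavily simplified and the `CrashLoop` field name is an assumption; it only illustrates flagging a crash loop on repeated PXE boots and clearing it once the machine phones home.

```go
package main

import "fmt"

// machineState is a simplified sketch; the real metal-api provisioning
// state machine is more involved. The CrashLoop field is an assumption.
type machineState struct {
	State     string
	CrashLoop bool
}

// handleEvent flags a crash loop when a machine unexpectedly PXE-boots
// again, and resets the flag as soon as the machine phones home.
func handleEvent(m *machineState, event string) {
	switch event {
	case "PXE Booting":
		if m.State == "PXE Booting" { // booted into PXE again: likely crash looping
			m.CrashLoop = true
		}
	case "Phoned Home":
		m.CrashLoop = false // machine is healthy again, clear the flag
	}
	m.State = event
}

func main() {
	m := &machineState{}
	for _, e := range []string{"PXE Booting", "PXE Booting", "Phoned Home"} {
		handleEvent(m, e)
	}
	fmt.Println("crashloop after phoned home:", m.CrashLoop)
}
```

With a reset like this, an allocated machine that successfully phoned home would no longer show the ⭕ crashloop issue in metalctl machine ls.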