
feat: implement some default alerts
jedel1043 committed Jul 19, 2024
1 parent c32b087 commit 9364fe0
Showing 2 changed files with 24 additions and 0 deletions.
12 changes: 12 additions & 0 deletions src/alert_rules/prometheus/high_cpu_usage.rule
@@ -0,0 +1,12 @@
alert: SlurmHighCPUUsage
expr: (slurm_cpu_load{%%juju_topology%%} / slurm_cpus_total{%%juju_topology%%}) * 100 > 90
for: 5m
labels:
severity: warning
annotations:
summary: CPU usage for the cluster managed by the Slurm controller {{ $labels.juju_model }}/{{ $labels.juju_unit }} exceeded 90%
description: >
The total CPU usage for all nodes in the cluster managed by the Slurm controller
{{ $labels.juju_model }}/{{ $labels.juju_unit }} exceeded 90%. This could indicate that the cluster
is approaching its maximum computing capacity.
LABELS = {{ $labels }}
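
For context, %%juju_topology%% is a placeholder that the charm's observability integration is expected to replace with concrete Juju topology label matchers before the rule reaches Prometheus. A minimal sketch of what the rendered rule might look like, assuming the placeholder resolves to labels such as juju_model and juju_application (the label values below are hypothetical):

# Hypothetical rendering; actual label names and values depend on the deployment.
alert: SlurmHighCPUUsage
expr: (slurm_cpu_load{juju_model="hpc", juju_application="slurmctld"} / slurm_cpus_total{juju_model="hpc", juju_application="slurmctld"}) * 100 > 90
for: 5m
labels:
  severity: warning
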
12 changes: 12 additions & 0 deletions src/alert_rules/prometheus/unreachable_slurmdbd.rule
@@ -0,0 +1,12 @@
alert: SlurmTooManyFailedDbdMessages
expr: max_over_time(slurm_dbd_agent_queue_size{%%juju_topology%%}[10s]) > 50
for: 1s
labels:
severity: critical
annotations:
summary: Slurm controller {{ $labels.juju_model }}/{{ $labels.juju_unit }} cannot reach SlurmDBD
description: >
The maximum number of pending messages from the Slurm controller {{ $labels.juju_model }}/{{ $labels.juju_unit }}
to SlurmDBD exceeded 50 in the past 10 seconds. This can indicate a problem reaching SlurmDBD
or its backing database.
LABELS = {{ $labels }}
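
Rules like these can be validated locally with promtool check rules once they are wrapped in a standard Prometheus rule-group file, since Prometheus expects alerting rules to live inside named groups. A minimal sketch, assuming the juju_topology placeholder has already been rendered and using a hypothetical group name and label values:

# Hypothetical wrapper for local validation with: promtool check rules slurm_alerts.yaml
groups:
  - name: slurm-dbd-alerts
    rules:
      - alert: SlurmTooManyFailedDbdMessages
        expr: max_over_time(slurm_dbd_agent_queue_size{juju_model="hpc", juju_unit="slurmctld/0"}[10s]) > 50
        for: 1s
        labels:
          severity: critical

Running promtool check rules on the rendered file catches PromQL syntax errors and missing required fields before the rules are shipped to Prometheus.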
