Goal: Corteza maintainers can get an overview of the performance of key components and identify problematic configurations. This should be possible (to some extent) without external tools, but with the option of having metrics collected by an outside system (Prometheus).
Corteza already collects basic Go runtime and HTTP request metrics and serves them in a format suitable for Prometheus.
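For context, a minimal sketch of how the Prometheus Go client exposes runtime and HTTP request metrics in a scrape-ready format; this is not Corteza's actual wiring, and the metric name, label set and port are illustrative only:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestDuration is an illustrative HTTP request metric; the actual metric
// names and labels used by Corteza may differ.
var requestDuration = promauto.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "http_request_duration_seconds",
		Help: "Duration of HTTP requests.",
	},
	[]string{"path"},
)

// instrument wraps a handler and records how long each request took.
func instrument(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		timer := prometheus.NewTimer(requestDuration.WithLabelValues(r.URL.Path))
		defer timer.ObserveDuration()
		next.ServeHTTP(w, r)
	})
}

func main() {
	// promhttp.Handler() serves the Go runtime/process collectors registered
	// by default, plus any custom collectors, in Prometheus text format.
	http.Handle("/metrics", promhttp.Handler())
	http.Handle("/", instrument(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```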
A metric-proxy system is built that (when enabled) collects metrics locally and serves them on a protected endpoint (e.g. /debug/metrics.json).
This proxy can (when enabled) push these metrics to the Prometheus client.
Persistence and retention of metrics are configurable.
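To make the idea more concrete, a rough sketch of what such a proxy could look like; the type names, in-memory retention approach and JSON shape are assumptions for illustration, not part of the proposal, and auth on the debug endpoint is assumed to be handled by existing middleware:

```go
package metrics

import (
	"encoding/json"
	"net/http"
	"sync"
	"time"
)

// Sample is a hypothetical in-memory record; the real schema, storage and
// retention would follow whatever is configured.
type Sample struct {
	Name      string            `json:"name"`
	Value     float64           `json:"value"`
	Labels    map[string]string `json:"labels,omitempty"`
	Timestamp time.Time         `json:"timestamp"`
}

// Proxy keeps recent samples in memory and serves them as JSON.
type Proxy struct {
	mu      sync.RWMutex
	samples []Sample
	maxAge  time.Duration // retention window (configurable)
}

func NewProxy(maxAge time.Duration) *Proxy {
	return &Proxy{maxAge: maxAge}
}

// Record stores a sample and drops anything older than the retention window.
func (p *Proxy) Record(s Sample) {
	p.mu.Lock()
	defer p.mu.Unlock()
	cutoff := time.Now().Add(-p.maxAge)
	kept := p.samples[:0]
	for _, old := range p.samples {
		if old.Timestamp.After(cutoff) {
			kept = append(kept, old)
		}
	}
	p.samples = append(kept, s)
}

// ServeHTTP implements the protected /debug/metrics.json endpoint.
func (p *Proxy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	p.mu.RLock()
	defer p.mu.RUnlock()
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(p.samples)
}
```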
Scope:
- General API requests (per URL)
- Integration gateway request processing (replacing current solution)
- Workflow execution
- Corredor script execution
- Primary store queries
- DAL queries
- ....
Each recorded metric should be tagged with a request ID, the triggering user, and the originating system (auth client, workflow, Corredor script, URL, Compose page) to help pinpoint issues.
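A sketch of how a recorded metric could carry these tags; only the tag set comes from the proposal above, while the Tags/Recorder names and the timing helper are hypothetical:

```go
package metrics

import (
	"context"
	"time"
)

// Tags identify the origin of a recorded metric; the field set mirrors the
// tagging proposed above and is illustrative, not a final schema.
type Tags struct {
	RequestID string // ID of the triggering request
	UserID    string // triggering user
	System    string // auth client, workflow, Corredor script, URL, Compose page, ...
}

// Recorder is whatever sink the metric-proxy exposes (hypothetical interface).
type Recorder interface {
	Observe(name string, value float64, tags Tags, at time.Time)
}

// ObserveDuration records how long fn took, tagged so the measurement can be
// traced back to a concrete request, user and subsystem.
func ObserveDuration(ctx context.Context, rec Recorder, name string, tags Tags, fn func(context.Context) error) error {
	start := time.Now()
	err := fn(ctx)
	rec.Observe(name, time.Since(start).Seconds(), tags, time.Now())
	return err
}
```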
A dedicated metrics viewer is out of scope, but the server will serve simple (plain-text) statistics on the debug endpoint.
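The plain-text view could be as simple as the following sketch, assuming the proxy can hand over a flat name-to-value snapshot (the handler and its input are assumptions for illustration):

```go
package metrics

import (
	"fmt"
	"net/http"
	"sort"
)

// servePlainStats is a hypothetical handler for the plain-text view of the
// locally collected statistics.
func servePlainStats(w http.ResponseWriter, r *http.Request, stats map[string]float64) {
	w.Header().Set("Content-Type", "text/plain; charset=utf-8")
	names := make([]string, 0, len(stats))
	for name := range stats {
		names = append(names, name)
	}
	sort.Strings(names) // stable output, easier to eyeball
	for _, name := range names {
		fmt.Fprintf(w, "%s\t%g\n", name, stats[name])
	}
}
```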
Long-term vision:
- dashboard with an overview
- alerting
- access to metrics from individual systems (show stats when looking at a specific workflow or module)
Ideas and thoughts are welcome.