From 1ca7c16635d63a4478498bfba05ed7cf3912bb14 Mon Sep 17 00:00:00 2001 From: aetheryx Date: Thu, 3 Aug 2023 16:40:09 +0200 Subject: [PATCH] Formatting and grammar fixes in README Signed-off-by: aetheryx --- README.md | 46 +++++++++++++++++++++++----------------------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/README.md b/README.md index fc02f842..996d7b31 100644 --- a/README.md +++ b/README.md @@ -76,28 +76,28 @@ If you are still using the legacy [Access scopes][access-scopes], the `https://w ### Flags -| Flag | Required | Default | Description | -| ----------------------------------- | -------- |---------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `google.project-id` | No | GCloud SDK auto-discovery | Comma seperated list of Google Project IDs | -| `monitoring.metrics-ingest-delay` | No | | Offsets metric collection by a delay appropriate for each metric type, e.g. because bigquery metrics are slow to appear | -| `monitoring.drop-delegated-projects | No | No | Drop metrics from attached projects and fetch `project_id` only. | -| `monitoring.metrics-type-prefixes` | Yes | | Comma separated Google Stackdriver Monitoring Metric Type prefixes (see [example][metrics-prefix-example] and [available metrics][metrics-list]) | -| `monitoring.metrics-interval` | No | `5m` | Metric's timestamp interval to request from the Google Stackdriver Monitoring Metrics API. Only the most recent data point is used | -| `monitoring.metrics-offset` | No | `0s` | Offset (into the past) for the metric's timestamp interval to request from the Google Stackdriver Monitoring Metrics API, to handle latency in published metrics | -| `monitoring.filters` | No | | Formatted string to allow filtering on certain metrics type | -| `monitoring.aggregate-deltas` | No | | If enabled will treat all DELTA metrics as an in-memory counter instead of a gauge. Be sure to read [what to know about aggregating DELTA metrics](#what-to-know-about-aggregating-delta-metrics) | -| `monitoring.aggregate-deltas-ttl` | No | `30m` | How long should a delta metric continue to be exported and stored after GCP stops producing it. Read [slow moving metrics](#slow-moving-metrics) to understand the problem this attempts to solve | -| `monitoring.descriptor-cache-ttl` | No | `0s` | How long should the metric descriptors for a prefixed be cached for | -| `stackdriver.max-retries` | No | `0` | Max number of retries that should be attempted on 503 errors from stackdriver. | -| `stackdriver.http-timeout` | No | `10s` | How long should stackdriver_exporter wait for a result from the Stackdriver API. | -| `stackdriver.max-backoff=` | No | | Max time between each request in an exp backoff scenario. | -| `stackdriver.backoff-jitter` | No | `1s | The amount of jitter to introduce in a exp backoff scenario. | -| `stackdriver.retry-statuses` | No | `503` | The HTTP statuses that should trigger a retry. | -| `web.config.file` | No | | [EXPERIMENTAL] Path to configuration file that can enable TLS or authentication. | -| `web.listen-address` | No | `:9255` | Address to listen on for web interface and telemetry Repeatable for multiple addresses. | -| `web.systemd-socket` | No | | Use systemd socket activation listeners instead of port listeners (Linux only). | -| `web.stackdriver-telemetry-path` | No | "/metrics" | Path under which to expose Stackdriver metrics. 
|
-| `web.telemetry-path`                | No       | `/metrics`                 | Path under which to expose Prometheus metrics                 |
+| Flag                                  | Required | Default                    | Description |
+| ------------------------------------- | -------- | -------------------------- | ----------- |
+| `google.project-id`                   | No       | GCloud SDK auto-discovery  | Comma separated list of Google Project IDs |
+| `monitoring.metrics-ingest-delay`     | No       |                            | Offsets metric collection by a delay appropriate for each metric type, e.g. because BigQuery metrics are slow to appear |
+| `monitoring.drop-delegated-projects`  | No       | No                         | Drop metrics from attached projects and fetch `project_id` only. |
+| `monitoring.metrics-type-prefixes`    | Yes      |                            | Comma separated Google Stackdriver Monitoring Metric Type prefixes (see [example][metrics-prefix-example] and [available metrics][metrics-list]) |
+| `monitoring.metrics-interval`         | No       | `5m`                       | Metric's timestamp interval to request from the Google Stackdriver Monitoring Metrics API. Only the most recent data point is used |
+| `monitoring.metrics-offset`           | No       | `0s`                       | Offset (into the past) for the metric's timestamp interval to request from the Google Stackdriver Monitoring Metrics API, to handle latency in published metrics |
+| `monitoring.filters`                  | No       |                            | Formatted string to allow filtering on certain metric types |
+| `monitoring.aggregate-deltas`         | No       |                            | If enabled, will treat all DELTA metrics as an in-memory counter instead of a gauge. Be sure to read [what to know about aggregating DELTA metrics](#what-to-know-about-aggregating-delta-metrics) |
+| `monitoring.aggregate-deltas-ttl`     | No       | `30m`                      | How long a delta metric should continue to be exported and stored after GCP stops producing it. Read [slow moving metrics](#slow-moving-metrics) to understand the problem this attempts to solve |
+| `monitoring.descriptor-cache-ttl`     | No       | `0s`                       | How long the metric descriptors for a prefix should be cached |
+| `stackdriver.max-retries`             | No       | `0`                        | Max number of retries that should be attempted on 503 errors from Stackdriver. |
+| `stackdriver.http-timeout`            | No       | `10s`                      | How long stackdriver_exporter should wait for a result from the Stackdriver API. |
+| `stackdriver.max-backoff`             | No       |                            | Max time between each request in an exponential backoff scenario. |
+| `stackdriver.backoff-jitter`          | No       | `1s`                       | The amount of jitter to introduce in an exponential backoff scenario. |
+| `stackdriver.retry-statuses`          | No       | `503`                      | The HTTP statuses that should trigger a retry. |
+| `web.config.file`                     | No       |                            | [EXPERIMENTAL] Path to configuration file that can enable TLS or authentication. |
+| `web.listen-address`                  | No       | `:9255`                    | Address to listen on for web interface and telemetry. Repeatable for multiple addresses. |
+| `web.systemd-socket`                  | No       |                            | Use systemd socket activation listeners instead of port listeners (Linux only). |
+| `web.stackdriver-telemetry-path`      | No       | `/metrics`                 | Path under which to expose Stackdriver metrics. |
+| `web.telemetry-path`                  | No       | `/metrics`                 | Path under which to expose Prometheus metrics |

### TLS and basic authentication

@@ -192,7 +192,7 @@ There are two features which attempt to combat this issue,

The configuration when using `monitoring.aggregate-deltas` gives a 30 minute buffer to slower moving metrics and `monitoring.aggregate-deltas-ttl` can be adjusted to tune memory requirements vs correctness.
Storing the data for longer results in a higher memory cost.

-The feature which continues to export metrics which are not collected can cause `the sample has been rejected because another sample with the same timestamp, but a different value, has already been ingested` if your [scrape config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) for the exporter has `honor_timestamps` enabled (this is the default value). This is caused by the fact that it's not possible to know the different between GCP having late arriving data and GCP not exporting a value. The underlying counter is still incremented when this happens so the next reported sample will show a higher rate than expected.
+The feature which continues to export metrics which are not collected can cause `the sample has been rejected because another sample with the same timestamp, but a different value, has already been ingested` if your [scrape config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) for the exporter has `honor_timestamps` enabled (this is the default value). This happens because it's not possible to tell the difference between GCP having late-arriving data and GCP not exporting a value. The underlying counter is still incremented when this happens, so the next reported sample will show a higher rate than expected.

## Contributing
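
If the duplicate-timestamp rejection described in the paragraph above is a problem in practice, one option (not part of this patch) is to disable `honor_timestamps` for the exporter's scrape job, so that Prometheus stamps samples with the scrape time instead of the exporter-supplied timestamps. A minimal sketch of such a scrape configuration follows; the job name and target are illustrative placeholders, and `:9255` is simply the exporter's default listen address from the flags table.

```yaml
# Illustrative Prometheus scrape configuration (job name and target are assumptions).
scrape_configs:
  - job_name: "stackdriver-exporter"
    # Use the scrape time instead of the timestamps exposed by the exporter,
    # avoiding the "another sample with the same timestamp" rejection.
    honor_timestamps: false
    static_configs:
      - targets: ["localhost:9255"]  # default web.listen-address of the exporter
```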