diff --git a/.github/workflows/trivy.yml b/.github/workflows/trivy.yml index 92b45f9c20..37b064600a 100644 --- a/.github/workflows/trivy.yml +++ b/.github/workflows/trivy.yml @@ -26,7 +26,7 @@ jobs: - name: Checkout code uses: actions/checkout@v4 - name: Run Trivy vulnerability scanner - uses: aquasecurity/trivy-action@915b19bbe73b92a6cf82a1bc12b087c9a19a5fe2 + uses: aquasecurity/trivy-action@18f2510ee396bbf400402947b394f2dd8c87dbb0 with: image-ref: 'grafana/alloy-dev:latest' format: 'template' diff --git a/CHANGELOG.md b/CHANGELOG.md index 1a60ac6554..eaf39ee382 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -22,8 +22,6 @@ Main (unreleased) - Add `otelcol.exporter.syslog` component to export logs in syslog format (@dehaansa) -- Add `otelcol.receiver.influxdb` to convert influx metric into OTEL. (@EHSchmitt4395) - ### Enhancements - Add second metrics sample to the support bundle to provide delta information (@dehaansa) @@ -32,9 +30,22 @@ Main (unreleased) - Add relevant golang environment variables to the support bundle (@dehaansa) +- Logs from underlying clustering library `memberlist` are now surfaced with correct level (@thampiotr) + +- Update mysqld_exporter from v0.15.0 to v0.16.0 (including 2ef168bf6), most notable changes: (@cristiangreco) + - Support MySQL 8.4 replicas syntax + - Fetch lock time and cpu time from performance schema + - Fix fetching tmpTables vs tmpDiskTables from performance_schema + - Skip SPACE_TYPE column for MariaDB >=10.5 + - Fixed parsing of timestamps with non-zero padded days + - Fix auto_increment metric collection errors caused by using collation in INFORMATION_SCHEMA searches + - Change processlist query to support ONLY_FULL_GROUP_BY sql_mode + - Add perf_schema quantile columns to collector + ### Bugfixes - Fixed an issue in the `prometheus.exporter.postgres` component that would leak goroutines when the target was not reachable (@dehaansa) + - Fixed an issue in the `otelcol.exporter.prometheus` component that would set series 
+  value incorrectly for stale metrics (@YusifAghalar) - Fixed issue with reloading configuration and prometheus metrics duplication in `prometheus.write.queue`. (@mattdurham) @@ -106,6 +117,8 @@ v1.5.0 - Add `proxy_url` to `otelcol.exporter.otlphttp`. (@wildum) +- Allow setting `informer_sync_timeout` in `prometheus.operator.*` components. (@captncraig) + ### Bugfixes - Fixed a bug in `import.git` which caused a `"non-fast-forward update"` error message. (@ptodev) diff --git a/docs/sources/collect/_index.md b/docs/sources/collect/_index.md index ad88e94104..8411043ddb 100644 --- a/docs/sources/collect/_index.md +++ b/docs/sources/collect/_index.md @@ -8,4 +8,4 @@ weight: 100 # Collect and forward data with {{% param "FULL_PRODUCT_NAME" %}} -{{< section >}} \ No newline at end of file +{{< section >}} diff --git a/docs/sources/collect/choose-component.md b/docs/sources/collect/choose-component.md index 05f9d4df0b..36a880d54c 100644 --- a/docs/sources/collect/choose-component.md +++ b/docs/sources/collect/choose-component.md @@ -17,10 +17,9 @@ The components you select and configure depend on the telemetry signals you want ## Metrics for infrastructure Use `prometheus.*` components to collect infrastructure metrics. -This will give you the best experience with [Grafana Infrastructure Observability][]. +This gives you the best experience with [Grafana Infrastructure Observability][]. -For example, you can get metrics for a Linux host using `prometheus.exporter.unix`, -and metrics for a MongoDB instance using `prometheus.exporter.mongodb`. +For example, you can get metrics for a Linux host using `prometheus.exporter.unix`, and metrics for a MongoDB instance using `prometheus.exporter.mongodb`. You can also scrape any Prometheus endpoint using `prometheus.scrape`. Use `discovery.*` components to find targets for `prometheus.scrape`. @@ -30,7 +29,7 @@ Use `discovery.*` components to find targets for `prometheus.scrape`. 
## Metrics for applications Use `otelcol.receiver.*` components to collect application metrics. -This will give you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native. +This gives you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native. For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-instrumented applications. @@ -48,12 +47,12 @@ with logs collected by `loki.*` components. For example, the label that both `prometheus.*` and `loki.*` components would use for a Kubernetes namespace is called `namespace`. On the other hand, gathering logs using an `otelcol.*` component might use the [OpenTelemetry semantics][OTel-semantics] label called `k8s.namespace.name`, -which wouldn't correspond to the `namespace` label that is common in the Prometheus ecosystem. +which wouldn't correspond to the `namespace` label that's common in the Prometheus ecosystem. ## Logs from applications Use `otelcol.receiver.*` components to collect application logs. -This will gather the application logs in an OpenTelemetry-native way, making it easier to +This gathers the application logs in an OpenTelemetry-native way, making it easier to correlate the logs with OpenTelemetry metrics and traces coming from the application. All application telemetry must follow the [OpenTelemetry semantic conventions][OTel-semantics], simplifying this correlation. @@ -65,7 +64,7 @@ For example, if your application runs on Kubernetes, every trace, log, and metri Use `otelcol.receiver.*` components to collect traces. -If your application is not yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically. +If your application isn't yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically. 
## Profiles diff --git a/docs/sources/collect/datadog-traces-metrics.md b/docs/sources/collect/datadog-traces-metrics.md index 034a093e8c..2ab9da3590 100644 --- a/docs/sources/collect/datadog-traces-metrics.md +++ b/docs/sources/collect/datadog-traces-metrics.md @@ -20,9 +20,9 @@ This topic describes how to: ## Before you begin -* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and/or traces. -* Identify where you will write the collected telemetry. - Metrics can be written to [Prometheus]() or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics. +* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces. +* Identify where to write the collected telemetry. + Metrics can be written to [Prometheus][] or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics. Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces. * Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}. @@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data Replace the following: - - _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces will be sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`. + * _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`. 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block. @@ -58,8 +58,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data Replace the following: - - _`<USERNAME>`_: The basic authentication username. - - _`<PASSWORD>`_: The basic authentication password or API key. + * _`<USERNAME>`_: The basic authentication username. + * _`<PASSWORD>`_: The basic authentication password or API key. 
## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver @@ -78,7 +78,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data ```alloy otelcol.processor.deltatocumulative "default" { - max_stale = “<MAX_STALE>” + max_stale = "<MAX_STALE>" max_streams = <MAX_STREAMS> output { metrics = [otelcol.processor.batch.default.input] @@ -88,14 +88,14 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data Replace the following: - - _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m". - - _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped. + * _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m". + * _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped. 1. Add the following `otelcol.receiver.datadog` component to your configuration file. ```alloy otelcol.receiver.datadog "default" { - endpoint = “<HOST>:<PORT>” + endpoint = "<HOST>:<PORT>" output { metrics = [otelcol.processor.deltatocumulative.default.input] traces = [otelcol.processor.batch.default.input] @@ -105,8 +105,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data Replace the following: - - _`<HOST>`_: The host address where the receiver will listen. - - _`<PORT>`_: The port where the receiver will listen. + * _`<HOST>`_: The host address where the receiver listens. + * _`<PORT>`_: The port where the receiver listens. 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block. @@ -119,8 +119,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data Replace the following: - - _`<USERNAME>`_: The basic authentication username. - - _`<PASSWORD>`_: The basic authentication password or API key. + * _`<USERNAME>`_: The basic authentication username. + * _`<PASSWORD>`_: The basic authentication password or API key. 
## Configure Datadog Agent to forward telemetry to the {{% param "PRODUCT_NAME" %}} Datadog Receiver @@ -139,10 +139,10 @@ We recommend this approach for current Datadog users who want to try using {{< p Replace the following: - - _`<HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found. - - _`<PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed. + * _`<HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found. + * _`<PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed. -Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}. +Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}. You can do this by setting up your Datadog Agent in the following way: 1. Replace the DD_URL in the configuration YAML: @@ -150,8 +150,8 @@ You can do this by setting up your Datadog Agent in the following way: ```yaml dd_url: http://<HOST>:<PORT> ``` -Or by setting an environment variable: + Or by setting an environment variable: ```bash DD_DD_URL='{"http://<HOST>:<PORT>": ["datadog-receiver"]}' @@ -169,7 +169,5 @@ To use this component, you need to start {{< param "PRODUCT_NAME" >}} with addit [Datadog]: https://www.datadoghq.com/ [Datadog Agent]: https://docs.datadoghq.com/agent/ [Prometheus]: https://prometheus.io -[OTLP]: https://opentelemetry.io/docs/specs/otlp/ -[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp -[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp -[Components]: ../../get-started/components +[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp/ +[Components]: ../../get-started/components/ diff --git a/docs/sources/collect/ecs-openteletry-data.md b/docs/sources/collect/ecs-opentelemetry-data.md similarity index 89% rename from docs/sources/collect/ecs-openteletry-data.md rename to docs/sources/collect/ecs-opentelemetry-data.md index 
3a7a53a483..428bf0e926 100644 --- a/docs/sources/collect/ecs-openteletry-data.md +++ b/docs/sources/collect/ecs-opentelemetry-data.md @@ -1,5 +1,7 @@ --- canonical: https://grafana.com/docs/alloy/latest/collect/ecs-opentelemetry-data/ +aliases: + - ./ecs-openteletry-data/ # /docs/alloy/latest/collect/ecs-openteletry-data/ description: Learn how to collect Amazon ECS or AWS Fargate OpenTelemetry data and forward it to any OpenTelemetry-compatible endpoint menuTitle: Collect ECS or Fargate OpenTelemetry data title: Collect Amazon Elastic Container Service or AWS Fargate OpenTelemetry data @@ -14,7 +16,7 @@ There are three different ways you can use {{< param "PRODUCT_NAME" >}} to colle 1. [Use a custom OpenTelemetry configuration file from the SSM Parameter store](#use-a-custom-opentelemetry-configuration-file-from-the-ssm-parameter-store). 1. [Create an ECS task definition](#create-an-ecs-task-definition). -1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar). +1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar) ## Before you begin @@ -55,11 +57,11 @@ In ECS, you can set the values of environment variables from AWS Systems Manager 1. Choose *Create parameter*. 1. Create a parameter with the following values: - * `Name`: otel-collector-config - * `Tier`: Standard - * `Type`: String - * `Data type`: Text - * `Value`: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure]. + * Name: `otel-collector-config` + * Tier: `Standard` + * Type: `String` + * Data type: `Text` + * Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure]. 
### Run your task @@ -73,16 +75,16 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet 1. Download the [ECS Fargate task definition template][template] from GitHub. 1. Edit the task definition template and add the following parameters. - * `{{region}}`: The region the data is sent to. + * `{{region}}`: The region to send the data to. * `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN. * `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN. * `command` - Assign a value to the command variable to select the path to the configuration file. The AWS Collector comes with two configurations. Select one of them based on your environment: - * Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and X-Ray SDK traces. - * Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, Xray, and Container Resource utilization metrics. + * Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and AWS X-Ray SDK traces. + * Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics. 1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template. -## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar +## Run Alloy directly in your instance, or as a Kubernetes sidecar SSH or connect to the Amazon ECS or AWS Fargate-managed container. Refer to [9 steps to SSH into an AWS Fargate managed container][steps] for more information about using SSH with Amazon ECS or AWS Fargate. 
diff --git a/docs/sources/collect/logs-in-kubernetes.md b/docs/sources/collect/logs-in-kubernetes.md index 3e02efa808..d8b8b17fb2 100644 --- a/docs/sources/collect/logs-in-kubernetes.md +++ b/docs/sources/collect/logs-in-kubernetes.md @@ -19,19 +19,19 @@ This topic describes how to: ## Components used in this topic -* [discovery.kubernetes][] -* [discovery.relabel][] -* [local.file_match][] -* [loki.source.file][] -* [loki.source.kubernetes][] -* [loki.source.kubernetes_events][] -* [loki.process][] -* [loki.write][] +* [`discovery.kubernetes`][discovery.kubernetes] +* [`discovery.relabel`][discovery.relabel] +* [`local.file_match`][local.file_match] +* [`loki.source.file`][loki.source.file] +* [`loki.source.kubernetes`][loki.source.kubernetes] +* [`loki.source.kubernetes_events`][loki.source.kubernetes_events] +* [`loki.process`][loki.process] +* [`loki.write`][loki.write] ## Before you begin * Ensure that you are familiar with logs labelling when working with Loki. -* Identify where you will write collected logs. +* Identify where to write collected logs. You can write logs to Loki endpoints such as Grafana Loki, Grafana Cloud, or Grafana Enterprise Logs. * Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}. @@ -39,8 +39,8 @@ This topic describes how to: Before components can collect logs, you must have a component responsible for writing those logs somewhere. -The [loki.write][] component delivers logs to a Loki endpoint. -After a `loki.write` component is defined, you can use other {{< param "PRODUCT_NAME" >}} components to forward logs to it. +The [`loki.write`][loki.write] component delivers logs to a Loki endpoint. +After you define a `loki.write` component, you can use other {{< param "PRODUCT_NAME" >}} components to forward logs to it. 
To configure a `loki.write` component for logs delivery, complete the following steps: @@ -56,9 +56,9 @@ To configure a `loki.write` component for logs delivery, complete the following Replace the following: - - _`