
Commit

compression test fix and suggestions
EHSchmitt4395 committed Nov 29, 2024
2 parents 91eb876 + d82f44b commit 1a6fd51
Showing 81 changed files with 2,965 additions and 675 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/trivy.yml
@@ -26,7 +26,7 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@915b19bbe73b92a6cf82a1bc12b087c9a19a5fe2
uses: aquasecurity/trivy-action@18f2510ee396bbf400402947b394f2dd8c87dbb0
with:
image-ref: 'grafana/alloy-dev:latest'
format: 'template'
17 changes: 15 additions & 2 deletions CHANGELOG.md
@@ -22,8 +22,6 @@ Main (unreleased)

- Add `otelcol.exporter.syslog` component to export logs in syslog format (@dehaansa)

- Add `otelcol.receiver.influxdb` to convert influx metric into OTEL. (@EHSchmitt4395)

### Enhancements

- Add second metrics sample to the support bundle to provide delta information (@dehaansa)
@@ -32,9 +30,22 @@ Main (unreleased)

- Add relevant golang environment variables to the support bundle (@dehaansa)

- Logs from underlying clustering library `memberlist` are now surfaced with correct level (@thampiotr)

- Update mysqld_exporter from v0.15.0 to v0.16.0 (including 2ef168bf6), most notable changes: (@cristiangreco)
- Support MySQL 8.4 replicas syntax
- Fetch lock time and cpu time from performance schema
- Fix fetching tmpTables vs tmpDiskTables from performance_schema
- Skip SPACE_TYPE column for MariaDB >=10.5
- Fix parsing of timestamps with non-zero-padded days
- Fix auto_increment metric collection errors caused by using collation in INFORMATION_SCHEMA searches
- Change processlist query to support ONLY_FULL_GROUP_BY sql_mode
- Add perf_schema quantile columns to collector

### Bugfixes

- Fixed an issue in the `prometheus.exporter.postgres` component that would leak goroutines when the target was not reachable (@dehaansa)

- Fixed an issue in the `otelcol.exporter.prometheus` component that would set series value incorrectly for stale metrics (@YusifAghalar)

- Fixed issue with reloading configuration and prometheus metrics duplication in `prometheus.write.queue`. (@mattdurham)
@@ -106,6 +117,8 @@ v1.5.0

- Add `proxy_url` to `otelcol.exporter.otlphttp`. (@wildum)

- Allow setting `informer_sync_timeout` in `prometheus.operator.*` components. (@captncraig)

### Bugfixes

- Fixed a bug in `import.git` which caused a `"non-fast-forward update"` error message. (@ptodev)
2 changes: 1 addition & 1 deletion docs/sources/collect/_index.md
@@ -8,4 +8,4 @@ weight: 100

# Collect and forward data with {{% param "FULL_PRODUCT_NAME" %}}

{{< section >}}
13 changes: 6 additions & 7 deletions docs/sources/collect/choose-component.md
@@ -17,10 +17,9 @@ The components you select and configure depend on the telemetry signals you want
## Metrics for infrastructure

Use `prometheus.*` components to collect infrastructure metrics.
This will give you the best experience with [Grafana Infrastructure Observability][].
This gives you the best experience with [Grafana Infrastructure Observability][].

For example, you can get metrics for a Linux host using `prometheus.exporter.unix`,
and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.
For example, you can get metrics for a Linux host using `prometheus.exporter.unix`, and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.

You can also scrape any Prometheus endpoint using `prometheus.scrape`.
Use `discovery.*` components to find targets for `prometheus.scrape`.
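
For example, a minimal sketch of this pattern (the `prometheus.remote_write.default` component referenced in `forward_to` is an assumption, standing in for wherever you send your metrics):

```alloy
// Expose host metrics from the built-in unix exporter.
prometheus.exporter.unix "host" { }

// Scrape the exporter's targets and forward the scraped samples on.
prometheus.scrape "host_metrics" {
  targets    = prometheus.exporter.unix.host.targets
  forward_to = [prometheus.remote_write.default.receiver]
}
```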
@@ -30,7 +29,7 @@
## Metrics for applications

Use `otelcol.receiver.*` components to collect application metrics.
This will give you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.
This gives you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.

For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-instrumented applications.
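
As a hedged sketch, a receiver that accepts OTLP over gRPC and HTTP might look like this (the downstream `otelcol.exporter.otlp.default` component is assumed to be defined elsewhere in your configuration):

```alloy
// Accept OTLP on the conventional gRPC and HTTP ports.
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }

  http {
    endpoint = "0.0.0.0:4318"
  }

  output {
    metrics = [otelcol.exporter.otlp.default.input]
  }
}
```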

Expand All @@ -48,12 +47,12 @@ with logs collected by `loki.*` components.

For example, the label that both `prometheus.*` and `loki.*` components would use for a Kubernetes namespace is called `namespace`.
On the other hand, gathering logs using an `otelcol.*` component might use the [OpenTelemetry semantics][OTel-semantics] label called `k8s.namespace.name`,
which wouldn't correspond to the `namespace` label that is common in the Prometheus ecosystem.
which wouldn't correspond to the `namespace` label that's common in the Prometheus ecosystem.

## Logs from applications

Use `otelcol.receiver.*` components to collect application logs.
This will gather the application logs in an OpenTelemetry-native way, making it easier to
This gathers the application logs in an OpenTelemetry-native way, making it easier to
correlate the logs with OpenTelemetry metrics and traces coming from the application.
All application telemetry must follow the [OpenTelemetry semantic conventions][OTel-semantics], simplifying this correlation.

Expand All @@ -65,7 +64,7 @@ For example, if your application runs on Kubernetes, every trace, log, and metri

Use `otelcol.receiver.*` components to collect traces.

If your application is not yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.
If your application isn't yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.
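
A minimal sketch of the idea follows; the `open_port` attribute and the downstream component label are assumptions here, so check the `beyla.ebpf` reference for the exact arguments:

```alloy
// Auto-instrument whatever process listens on port 80 (assumed attribute name).
beyla.ebpf "default" {
  open_port = "80"

  output {
    traces = [otelcol.exporter.otlp.default.input]
  }
}
```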

## Profiles

42 changes: 20 additions & 22 deletions docs/sources/collect/datadog-traces-metrics.md
@@ -20,9 +20,9 @@ This topic describes how to:

## Before you begin

* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and/or traces.
* Identify where you will write the collected telemetry.
Metrics can be written to [Prometheus]() or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
* Identify where to write the collected telemetry.
Metrics can be written to [Prometheus][] or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces.
* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.

@@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

- _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces will be sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
* _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.

1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -58,8 +58,8 @@

Replace the following:

- _`<USERNAME>`_: The basic authentication username.
- _`<PASSWORD>`_: The basic authentication password or API key.
* _`<USERNAME>`_: The basic authentication username.
* _`<PASSWORD>`_: The basic authentication password or API key.
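
Put together, the exporter and its authentication might look like the following sketch, which wires an `otelcol.auth.basic` component into the exporter's `client` block (the component labels are illustrative):

```alloy
otelcol.auth.basic "default" {
  username = "<USERNAME>"
  password = "<PASSWORD>"
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "<OTLP_ENDPOINT_URL>"
    auth     = otelcol.auth.basic.default.handler
  }
}
```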

## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver

@@ -78,7 +78,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

```alloy
otelcol.processor.deltatocumulative "default" {
  max_stale = <MAX_STALE>
  max_stale = "<MAX_STALE>"
  max_streams = <MAX_STREAMS>

  output {
    metrics = [otelcol.processor.batch.default.input]
  }
}
```
@@ -88,14 +88,14 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

- _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
- _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.
* _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
* _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.
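
The `output` block above forwards metrics to a batch processor. A minimal sketch of that component, assuming the `otelcol.exporter.otlp` component configured earlier in this guide:

```alloy
// Batch telemetry before handing it to the exporter.
otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}
```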

1. Add the following `otelcol.receiver.datadog` component to your configuration file.

```alloy
otelcol.receiver.datadog "default" {
  endpoint = <HOST>:<PORT>
  endpoint = "<HOST>:<PORT>"

  output {
    metrics = [otelcol.processor.deltatocumulative.default.input]
    traces  = [otelcol.processor.batch.default.input]
  }
}
```
@@ -105,8 +105,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

- _`<HOST>`_: The host address where the receiver will listen.
- _`<PORT>`_: The port where the receiver will listen.
* _`<HOST>`_: The host address where the receiver listens.
* _`<PORT>`_: The port where the receiver listens.

1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -119,8 +119,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

- _`<USERNAME>`_: The basic authentication username.
- _`<PASSWORD>`_: The basic authentication password or API key.
* _`<USERNAME>`_: The basic authentication username.
* _`<PASSWORD>`_: The basic authentication password or API key.

## Configure Datadog Agent to forward telemetry to the {{% param "PRODUCT_NAME" %}} Datadog Receiver

Expand All @@ -139,19 +139,19 @@ We recommend this approach for current Datadog users who want to try using {{< p

Replace the following:

- _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
- _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
* _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
* _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.

Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}.
You can do this by setting up your Datadog Agent in the following way:

1. Replace the DD_URL in the configuration YAML:

```yaml
dd_url: http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>
```
Or by setting an environment variable:
```bash
DD_DD_URL='{"http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>": ["datadog-receiver"]}'
```
@@ -169,7 +169,5 @@ To use this component, you need to start {{< param "PRODUCT_NAME" >}} with addit
[Datadog]: https://www.datadoghq.com/
[Datadog Agent]: https://docs.datadoghq.com/agent/
[Prometheus]: https://prometheus.io
[OTLP]: https://opentelemetry.io/docs/specs/otlp/
[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp
[Components]: ../../get-started/components
[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp/
[Components]: ../../get-started/components/
docs/sources/collect/ecs-opentelemetry-data.md
@@ -1,5 +1,7 @@
---
canonical: https://grafana.com/docs/alloy/latest/collect/ecs-opentelemetry-data/
aliases:
- ./ecs-openteletry-data/ # /docs/alloy/latest/collect/ecs-openteletry-data/
description: Learn how to collect Amazon ECS or AWS Fargate OpenTelemetry data and forward it to any OpenTelemetry-compatible endpoint
menuTitle: Collect ECS or Fargate OpenTelemetry data
title: Collect Amazon Elastic Container Service or AWS Fargate OpenTelemetry data
@@ -14,7 +16,7 @@ There are three different ways you can use {{< param "PRODUCT_NAME" >}} to colle

1. [Use a custom OpenTelemetry configuration file from the SSM Parameter store](#use-a-custom-opentelemetry-configuration-file-from-the-ssm-parameter-store).
1. [Create an ECS task definition](#create-an-ecs-task-definition).
1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar).
1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar)

## Before you begin

@@ -55,11 +57,11 @@ In ECS, you can set the values of environment variables from AWS Systems Manager
1. Choose *Create parameter*.
1. Create a parameter with the following values:

* `Name`: otel-collector-config
* `Tier`: Standard
* `Type`: String
* `Data type`: Text
* `Value`: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
* Name: `otel-collector-config`
* Tier: `Standard`
* Type: `String`
* Data type: `Text`
* Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].

### Run your task

@@ -73,16 +75,16 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet

1. Download the [ECS Fargate task definition template][template] from GitHub.
1. Edit the task definition template and add the following parameters.
* `{{region}}`: The region the data is sent to.
* `{{region}}`: The region to send the data to.
* `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN.
* `{{ecsExecutionRoleArn}}`: The AWSOTTaskExecutionRole ARN.
* `command` - Assign a value to the command variable to select the path to the configuration file.
The AWS Collector comes with two configurations. Select one of them based on your environment:
* Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and X-Ray SDK traces.
* Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, Xray, and Container Resource utilization metrics.
* Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and AWS X-Ray SDK traces.
* Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.
1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template.

## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar
## Run Alloy directly in your instance, or as a Kubernetes sidecar

SSH or connect to the Amazon ECS or AWS Fargate-managed container. Refer to [9 steps to SSH into an AWS Fargate managed container][steps] for more information about using SSH with Amazon ECS or AWS Fargate.
