Clean up some of the linting warnings and errors (#2155)
* Clean up some of the linting warnings and errors

* Additional linting warning and error cleanup

* More work on removing linting errors

* More linting cleanup

* Even more linting warning cleanup

* Fix links to components

* Fix link syntax in topic

* Correct reference to AWS X-Ray

* Add missing link in collect topic

* Fix up some redirected links and minor syntax fixes

* Fix typo in file name

* Apply suggestions from code review

Co-authored-by: Beverly Buchanan <[email protected]>

---------

Co-authored-by: Beverly Buchanan <[email protected]>
(cherry picked from commit c27c8ac)
clayton-cornell committed Nov 27, 2024
1 parent 055797e commit 1084eb8
Showing 51 changed files with 541 additions and 562 deletions.
2 changes: 1 addition & 1 deletion docs/sources/collect/_index.md
@@ -8,4 +8,4 @@ weight: 100

# Collect and forward data with {{% param "FULL_PRODUCT_NAME" %}}

-{{< section >}}
\ No newline at end of file
+{{< section >}}
13 changes: 6 additions & 7 deletions docs/sources/collect/choose-component.md
@@ -17,10 +17,9 @@ The components you select and configure depend on the telemetry signals you want
## Metrics for infrastructure

Use `prometheus.*` components to collect infrastructure metrics.
-This will give you the best experience with [Grafana Infrastructure Observability][].
+This gives you the best experience with [Grafana Infrastructure Observability][].

-For example, you can get metrics for a Linux host using `prometheus.exporter.unix`,
-and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.
+For example, you can get metrics for a Linux host using `prometheus.exporter.unix`, and metrics for a MongoDB instance using `prometheus.exporter.mongodb`.

You can also scrape any Prometheus endpoint using `prometheus.scrape`.
Use `discovery.*` components to find targets for `prometheus.scrape`.
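
As a sketch of how these two pieces fit together, the following wires a Kubernetes pod discovery into a scrape job. The component labels and the `prometheus.remote_write` target are assumptions for the example, not part of the documented text.

```alloy
// Hedged sketch: discover Kubernetes pods, then scrape them.
discovery.kubernetes "pods" {
  role = "pod"
}

prometheus.scrape "default" {
  // Targets come from the discovery component's exported list.
  targets    = discovery.kubernetes.pods.targets
  // Forward scraped samples to a remote_write component assumed to exist.
  forward_to = [prometheus.remote_write.default.receiver]
}
```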
@@ -30,7 +29,7 @@ Use `discovery.*` components to find targets for `prometheus.scrape`.
## Metrics for applications

Use `otelcol.receiver.*` components to collect application metrics.
-This will give you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.
+This gives you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native.

For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-instrumented applications.
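
A minimal sketch of such a receiver, assuming default gRPC and HTTP listen addresses and an existing batch processor to forward to:

```alloy
// Hedged sketch: accept OTLP metrics and traces from instrumented applications.
otelcol.receiver.otlp "default" {
  grpc {} // listen for OTLP/gRPC on the default address
  http {} // listen for OTLP/HTTP on the default address
  output {
    metrics = [otelcol.processor.batch.default.input]
    traces  = [otelcol.processor.batch.default.input]
  }
}
```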

@@ -48,12 +47,12 @@ with logs collected by `loki.*` components.

For example, the label that both `prometheus.*` and `loki.*` components would use for a Kubernetes namespace is called `namespace`.
On the other hand, gathering logs using an `otelcol.*` component might use the [OpenTelemetry semantics][OTel-semantics] label called `k8s.namespace.name`,
-which wouldn't correspond to the `namespace` label that is common in the Prometheus ecosystem.
+which wouldn't correspond to the `namespace` label that's common in the Prometheus ecosystem.

## Logs from applications

Use `otelcol.receiver.*` components to collect application logs.
-This will gather the application logs in an OpenTelemetry-native way, making it easier to
+This gathers the application logs in an OpenTelemetry-native way, making it easier to
correlate the logs with OpenTelemetry metrics and traces coming from the application.
All application telemetry must follow the [OpenTelemetry semantic conventions][OTel-semantics], simplifying this correlation.

@@ -65,7 +64,7 @@ For example, if your application runs on Kubernetes, every trace, log, and metri

Use `otelcol.receiver.*` components to collect traces.

-If your application is not yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.
+If your application isn't yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically.

## Profiles

42 changes: 20 additions & 22 deletions docs/sources/collect/datadog-traces-metrics.md
@@ -20,9 +20,9 @@ This topic describes how to:

## Before you begin

-* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and/or traces.
-* Identify where you will write the collected telemetry.
-  Metrics can be written to [Prometheus]() or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
+* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces.
+* Identify where to write the collected telemetry.
+  Metrics can be written to [Prometheus][] or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics.
Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces.
* Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.

@@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces will be sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.
+* _`<OTLP_ENDPOINT_URL>`_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`.

1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -58,8 +58,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<USERNAME>`_: The basic authentication username.
-- _`<PASSWORD>`_: The basic authentication password or API key.
+* _`<USERNAME>`_: The basic authentication username.
+* _`<PASSWORD>`_: The basic authentication password or API key.
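
Taken together, the exporter steps might produce a configuration like the following sketch, which uses `otelcol.auth.basic` for the credentials. This is an illustration, not the exact snippet collapsed out of the diff above, and the component labels are assumptions.

```alloy
// Hedged sketch: OTLP exporter with basic authentication.
otelcol.auth.basic "creds" {
  username = "<USERNAME>"
  password = "<PASSWORD>"
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "<OTLP_ENDPOINT_URL>"
    auth     = otelcol.auth.basic.creds.handler
  }
}
```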

## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver

@@ -78,7 +78,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

```alloy
otelcol.processor.deltatocumulative "default" {
-max_stale = <MAX_STALE>
+max_stale = "<MAX_STALE>"
max_streams = <MAX_STREAMS>
output {
metrics = [otelcol.processor.batch.default.input]
@@ -88,14 +88,14 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
-- _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.
+* _`<MAX_STALE>`_: How long until a series not receiving new samples is removed, such as "5m".
+* _`<MAX_STREAMS>`_: The upper limit of streams to track. New streams exceeding this limit are dropped.

1. Add the following `otelcol.receiver.datadog` component to your configuration file.

```alloy
otelcol.receiver.datadog "default" {
-endpoint = <HOST>:<PORT>
+endpoint = "<HOST>:<PORT>"
output {
metrics = [otelcol.processor.deltatocumulative.default.input]
traces = [otelcol.processor.batch.default.input]
@@ -105,8 +105,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<HOST>`_: The host address where the receiver will listen.
-- _`<PORT>`_: The port where the receiver will listen.
+* _`<HOST>`_: The host address where the receiver listens.
+* _`<PORT>`_: The port where the receiver listens.

1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block.

@@ -119,8 +119,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data

Replace the following:

-- _`<USERNAME>`_: The basic authentication username.
-- _`<PASSWORD>`_: The basic authentication password or API key.
+* _`<USERNAME>`_: The basic authentication username.
+* _`<PASSWORD>`_: The basic authentication password or API key.
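
Assembled, the receiver pipeline from the steps above could look like the following sketch. Concrete values such as the `0.0.0.0:9126` listen address, the `max_streams` cap, and the batch processor label are assumptions for illustration.

```alloy
// Hedged sketch: Datadog receiver -> delta-to-cumulative -> batch -> OTLP exporter.
otelcol.receiver.datadog "default" {
  endpoint = "0.0.0.0:9126" // assumed listen address; any free host:port works
  output {
    metrics = [otelcol.processor.deltatocumulative.default.input]
    traces  = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.deltatocumulative "default" {
  max_stale   = "5m"  // drop a series after 5 minutes without new samples
  max_streams = 10000 // assumed cap; tune for your metric cardinality
  output {
    metrics = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}
```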

## Configure Datadog Agent to forward telemetry to the {{% param "PRODUCT_NAME" %}} Datadog Receiver

@@ -139,19 +139,19 @@ We recommend this approach for current Datadog users who want to try using {{< p

Replace the following:

-- _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
-- _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
+* _`<DATADOG_RECEIVER_HOST>`_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found.
+* _`<DATADOG_RECEIVER_PORT>`_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed.
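
As an illustration of the dual-shipping setup, the Datadog Agent side could look like this sketch of `datadog.yaml`. The field names follow the Agent's `additional_endpoints` convention, and the `datadog-receiver` value mirrors the placeholder API key used in the `DD_DD_URL` example later in this topic; treat the exact layout as an assumption rather than the snippet elided from the diff.

```yaml
# Hedged sketch: ship metrics and traces to Datadog and to the Alloy receiver.
additional_endpoints:
  "http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>":
    - datadog-receiver # placeholder API key, as in the DD_DD_URL example
apm_config:
  additional_endpoints:
    "http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>":
      - datadog-receiver
```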

Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}.
You can do this by setting up your Datadog Agent in the following way:

1. Replace the DD_URL in the configuration YAML:

```yaml
dd_url: http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>
```
Or by setting an environment variable:
```bash
DD_DD_URL='{"http://<DATADOG_RECEIVER_HOST>:<DATADOG_RECEIVER_PORT>": ["datadog-receiver"]}'
```
@@ -169,7 +169,5 @@ To use this component, you need to start {{< param "PRODUCT_NAME" >}} with addit
[Datadog]: https://www.datadoghq.com/
[Datadog Agent]: https://docs.datadoghq.com/agent/
[Prometheus]: https://prometheus.io
[OTLP]: https://opentelemetry.io/docs/specs/otlp/
-[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp
-[Components]: ../../get-started/components
+[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp/
+[Components]: ../../get-started/components/
docs/sources/collect/{ecs-openteletry-data.md → ecs-opentelemetry-data.md}
@@ -1,5 +1,7 @@
---
canonical: https://grafana.com/docs/alloy/latest/collect/ecs-opentelemetry-data/
+alias:
+- ./ecs-openteletry-data/ # /docs/alloy/latest/collect/ecs-openteletry-data/
description: Learn how to collect Amazon ECS or AWS Fargate OpenTelemetry data and forward it to any OpenTelemetry-compatible endpoint
menuTitle: Collect ECS or Fargate OpenTelemetry data
title: Collect Amazon Elastic Container Service or AWS Fargate OpenTelemetry data
@@ -14,7 +16,7 @@ There are three different ways you can use {{< param "PRODUCT_NAME" >}} to colle

1. [Use a custom OpenTelemetry configuration file from the SSM Parameter store](#use-a-custom-opentelemetry-configuration-file-from-the-ssm-parameter-store).
1. [Create an ECS task definition](#create-an-ecs-task-definition).
-1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar).
+1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar)

## Before you begin

@@ -55,11 +57,11 @@ In ECS, you can set the values of environment variables from AWS Systems Manager
1. Choose *Create parameter*.
1. Create a parameter with the following values:

-* `Name`: otel-collector-config
-* `Tier`: Standard
-* `Type`: String
-* `Data type`: Text
-* `Value`: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
+* Name: `otel-collector-config`
+* Tier: `Standard`
+* Type: `String`
+* Data type: `Text`
+* Value: Copy and paste your custom OpenTelemetry configuration file or [{{< param "PRODUCT_NAME" >}} configuration file][configure].
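
If you script this step rather than using the console, an equivalent AWS CLI call might look like the following sketch; the local file name `config.alloy` is an assumption.

```bash
# Hedged sketch: create the SSM parameter from a local configuration file.
aws ssm put-parameter \
  --name "otel-collector-config" \
  --type "String" \
  --tier "Standard" \
  --value file://config.alloy
```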

### Run your task

@@ -73,16 +75,16 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet

1. Download the [ECS Fargate task definition template][template] from GitHub.
1. Edit the task definition template and add the following parameters.
-* `{{region}}`: The region the data is sent to.
+* `{{region}}`: The region to send the data to.
* `{{ecsTaskRoleArn}}`: The AWSOTTaskRole ARN.
* `{{ecsExecutionRoleArn}}`: The AWSOTTaskExcutionRole ARN.
* `command` - Assign a value to the command variable to select the path to the configuration file.
The AWS Collector comes with two configurations. Select one of them based on your environment:
-* Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and X-Ray SDK traces.
-* Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, Xray, and Container Resource utilization metrics.
+* Use `--config=/etc/ecs/ecs-default-config.yaml` to consume StatsD metrics, OTLP metrics and traces, and AWS X-Ray SDK traces.
+* Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.
1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template.
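
For orientation, the `command` value lands in the collector container definition inside the task definition template, roughly as in this sketch; the container name and image follow ADOT defaults and are assumptions here.

```json
{
  "containerDefinitions": [
    {
      "name": "aws-otel-collector",
      "image": "amazon/aws-otel-collector",
      "command": ["--config=/etc/ecs/ecs-default-config.yaml"]
    }
  ]
}
```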

-## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar
+## Run Alloy directly in your instance, or as a Kubernetes sidecar

SSH or connect to the Amazon ECS or AWS Fargate-managed container. Refer to [9 steps to SSH into an AWS Fargate managed container][steps] for more information about using SSH with Amazon ECS or AWS Fargate.

Expand Down
Loading

0 comments on commit 1084eb8

Please sign in to comment.