From fc97b4866d8956a1b7ca77b8f938732556985387 Mon Sep 17 00:00:00 2001 From: Clayton Cornell Date: Wed, 20 Nov 2024 15:39:53 -0800 Subject: [PATCH 01/12] Clean up some of the linting warnings and errors --- docs/sources/set-up/install/_index.md | 2 +- docs/sources/set-up/install/ansible.md | 12 ++++++------ docs/sources/set-up/install/binary.md | 10 +++++----- docs/sources/set-up/install/chef.md | 2 +- docs/sources/set-up/install/kubernetes.md | 13 ++++++------- docs/sources/set-up/install/linux.md | 13 +++++++++++-- docs/sources/set-up/install/macos.md | 4 ++-- docs/sources/set-up/install/puppet.md | 8 ++++---- docs/sources/set-up/install/windows.md | 4 ++-- 9 files changed, 38 insertions(+), 30 deletions(-) diff --git a/docs/sources/set-up/install/_index.md b/docs/sources/set-up/install/_index.md index f28886195d..38749a7610 100644 --- a/docs/sources/set-up/install/_index.md +++ b/docs/sources/set-up/install/_index.md @@ -12,7 +12,7 @@ weight: 100 You can install {{< param "PRODUCT_NAME" >}} on Docker, Kubernetes, Linux, macOS, or Windows. -The following architectures are supported: +{{< param "PRODUCT_NAME" >}} supports the following architectures: - **Linux**: AMD64, ARM64 - **Windows**: AMD64 diff --git a/docs/sources/set-up/install/ansible.md b/docs/sources/set-up/install/ansible.md index aad2d5c7f2..cea08df291 100644 --- a/docs/sources/set-up/install/ansible.md +++ b/docs/sources/set-up/install/ansible.md @@ -10,12 +10,12 @@ weight: 550 # Install or uninstall {{% param "FULL_PRODUCT_NAME" %}} using Ansible -You can use the [Grafana Ansible Collection](https://github.com/grafana/grafana-ansible-collection) to install and manage {{< param "PRODUCT_NAME" >}} on Linux hosts. +You can use the [Grafana Ansible Collection][collection] to install and manage {{< param "PRODUCT_NAME" >}} on Linux hosts. ## Before you begin -- These steps assume you already have a working [Ansible][] setup and a pre-existing inventory. 
-- You can add the tasks below to any new or existing role. +- These steps assume you already have a working [Ansible][] setup and an inventory. +- You can add the tasks below to any role. ## Steps @@ -61,12 +61,12 @@ To add {{% param "PRODUCT_NAME" %}} to a host: To verify that the {{< param "PRODUCT_NAME" >}} service on the target machine is `active` and `running`, open a terminal window and run the following command: ```shell -$ sudo systemctl status alloy.service +sudo systemctl status alloy.service ``` If the service is `active` and `running`, the output should look similar to this: -``` +```shell alloy.service - Grafana Alloy Loaded: loaded (/etc/systemd/system/alloy.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2022-07-20 09:56:15 UTC; 36s ago @@ -82,6 +82,6 @@ Main PID: 3176 (alloy-linux-amd) - [Configure {{< param "PRODUCT_NAME" >}}][Configure] -[Grafana Ansible Collection]: https://github.com/grafana/grafana-ansible-collection +[collection]: https://github.com/grafana/grafana-ansible-collection [Ansible]: https://www.ansible.com/ [Configure]: ../../../configure/linux/ diff --git a/docs/sources/set-up/install/binary.md b/docs/sources/set-up/install/binary.md index 4f0054778c..73811e5472 100644 --- a/docs/sources/set-up/install/binary.md +++ b/docs/sources/set-up/install/binary.md @@ -25,7 +25,7 @@ To download {{< param "PRODUCT_NAME" >}} as a standalone binary, perform the fol 1. Scroll down to the **Assets** section. -1. Download the `alloy` zip file that matches your operating system and machine's architecture. +1. Download the `alloy` file that matches your operating system and machine's architecture. 1. Extract the package contents into a directory. @@ -36,7 +36,8 @@ To download {{< param "PRODUCT_NAME" >}} as a standalone binary, perform the fol ``` Replace the following: - - _``_: The path to the extracted binary. + + * _``_: The path to the extracted binary. 
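+The download step above asks for the file that matches your operating system and machine's architecture. A minimal sketch of deriving that name with `uname` — the `alloy-<os>-<arch>` naming scheme is an assumption for illustration, so check the **Assets** list for the exact file names:

```shell
# Map the local OS and CPU architecture to a release asset name.
# The alloy-<os>-<arch> scheme is assumed; verify against the Assets list.
os=$(uname -s | tr '[:upper:]' '[:lower:]')
arch=$(uname -m)
case "$arch" in
  x86_64) arch=amd64 ;;
  aarch64 | arm64) arch=arm64 ;;
esac
asset="alloy-${os}-${arch}"
echo "$asset"
```

On a Linux AMD64 machine this prints `alloy-linux-amd64`, matching the supported-architecture list at the top of this section.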
### BoringCrypto binaries @@ -44,9 +45,8 @@ To download {{< param "PRODUCT_NAME" >}} as a standalone binary, perform the fol BoringCrypto support is in _Public preview_ and is only available for Linux with the AMD64 or ARM64 architecture. {{< /admonition >}} -BoringCrypto binaries are published for Linux on AMD64 and ARM64 platforms. To -retrieve them, follow the steps above but search the `alloy-boringcrypto` ZIP -file that matches your Linux architecture. +BoringCrypto binaries are published for Linux on AMD64 and ARM64 platforms. +To retrieve them, follow the steps above but search for the `alloy-boringcrypto` file that matches your Linux architecture. ## Next steps diff --git a/docs/sources/set-up/install/chef.md b/docs/sources/set-up/install/chef.md index a62d0eed9b..ef8cbdb32b 100644 --- a/docs/sources/set-up/install/chef.md +++ b/docs/sources/set-up/install/chef.md @@ -15,7 +15,7 @@ You can use Chef to install and manage {{< param "PRODUCT_NAME" >}}. ## Before you begin - These steps assume you already have a working [Chef][] setup. -- You can add the following resources to any new or existing recipe. +- You can add the following resources to any recipe. - These tasks install {{< param "PRODUCT_NAME" >}} from the package repositories. The tasks target Linux systems from the following families: - Debian (including Ubuntu) diff --git a/docs/sources/set-up/install/kubernetes.md b/docs/sources/set-up/install/kubernetes.md index e0ae25d5b8..f0886f45fe 100644 --- a/docs/sources/set-up/install/kubernetes.md +++ b/docs/sources/set-up/install/kubernetes.md @@ -42,7 +42,7 @@ To deploy {{< param "PRODUCT_NAME" >}} on Kubernetes using Helm, run the followi Replace the following: - - _``_: The namespace to use for your {{< param "PRODUCT_NAME" >}} installation, such as `alloy`. + * _``_: The namespace to use for your {{< param "PRODUCT_NAME" >}} installation, such as `alloy`. 1. 
Install {{< param "PRODUCT_NAME" >}}: @@ -52,8 +52,8 @@ To deploy {{< param "PRODUCT_NAME" >}} on Kubernetes using Helm, run the followi Replace the following: - - _``_: The namespace created in the previous step. - - _``_: The name to use for your {{< param "PRODUCT_NAME" >}} installation, such as `alloy`. + * _``_: The namespace created in the previous step. + * _``_: The name to use for your {{< param "PRODUCT_NAME" >}} installation, such as `alloy`. 1. Verify that the {{< param "PRODUCT_NAME" >}} pods are running: @@ -63,16 +63,15 @@ To deploy {{< param "PRODUCT_NAME" >}} on Kubernetes using Helm, run the followi Replace the following: - - _``_: The namespace used in the previous step. + * _``_: The namespace used in the previous step. You have successfully deployed {{< param "PRODUCT_NAME" >}} on Kubernetes, using default Helm settings. ## Next steps -- [Configure {{< param "PRODUCT_NAME" >}}][Configure] +* [Configure {{< param "PRODUCT_NAME" >}}][Configure] - + [Helm]: https://helm.sh -[Artifact Hub]: https://artifacthub.io/packages/helm/grafana/alloy [Configure]: ../../../configure/kubernetes/ diff --git a/docs/sources/set-up/install/linux.md b/docs/sources/set-up/install/linux.md index 65cb57bd58..c09fef53c5 100644 --- a/docs/sources/set-up/install/linux.md +++ b/docs/sources/set-up/install/linux.md @@ -28,6 +28,7 @@ To install {{< param "PRODUCT_NAME" >}} on Linux, run the following commands in 1. Import the GPG key and add the Grafana package repository. 
{{< code >}} + ```debian-ubuntu sudo mkdir -p /etc/apt/keyrings/ wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null @@ -37,8 +38,7 @@ To install {{< param "PRODUCT_NAME" >}} on Linux, run the following commands in ```rhel-fedora wget -q -O gpg.key https://rpm.grafana.com/gpg.key sudo rpm --import gpg.key - echo -e '[grafana]\nname=grafana\nbaseurl=https://rpm.grafana.com\nrepo_gpgcheck=1\nenabled=1\ngpgcheck=1\ngpgkey=https://rpm.grafana.com/gpg.key\nsslverify=1 -sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana.repo + echo -e '[grafana]\nname=grafana\nbaseurl=https://rpm.grafana.com\nrepo_gpgcheck=1\nenabled=1\ngpgcheck=1\ngpgkey=https://rpm.grafana.com/gpg.key\nsslverify=1 sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana.repo ``` ```suse-opensuse @@ -46,11 +46,13 @@ sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana. sudo rpm --import gpg.key sudo zypper addrepo https://rpm.grafana.com grafana ``` + {{< /code >}} 1. Update the repositories. {{< code >}} + ```debian-ubuntu sudo apt-get update ``` @@ -62,11 +64,13 @@ sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana. ```suse-opensuse sudo zypper update ``` + {{< /code >}} 1. Install {{< param "PRODUCT_NAME" >}}. {{< code >}} + ```debian-ubuntu sudo apt-get install alloy ``` @@ -78,6 +82,7 @@ sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana. ```suse-opensuse sudo zypper install alloy ``` + {{< /code >}} ## Uninstall @@ -93,6 +98,7 @@ To uninstall {{< param "PRODUCT_NAME" >}} on Linux, run the following commands i 1. Uninstall {{< param "PRODUCT_NAME" >}}. {{< code >}} + ```debian-ubuntu sudo apt-get remove alloy ``` @@ -104,11 +110,13 @@ To uninstall {{< param "PRODUCT_NAME" >}} on Linux, run the following commands i ```suse-opensuse sudo zypper remove alloy ``` + {{< /code >}} 1. 
Optional: Remove the Grafana repository. {{< code >}} + ```debian-ubuntu sudo rm -i /etc/apt/sources.list.d/grafana.list ``` @@ -120,6 +128,7 @@ To uninstall {{< param "PRODUCT_NAME" >}} on Linux, run the following commands i ```suse-opensuse sudo zypper removerepo grafana ``` + {{< /code >}} ## Next steps diff --git a/docs/sources/set-up/install/macos.md b/docs/sources/set-up/install/macos.md index c57e3186e3..437a12137e 100644 --- a/docs/sources/set-up/install/macos.md +++ b/docs/sources/set-up/install/macos.md @@ -65,8 +65,8 @@ brew uninstall grafana/grafana/alloy ## Next steps -- [Run {{< param "PRODUCT_NAME" >}}][Run] -- [Configure {{< param "PRODUCT_NAME" >}}][Configure] +* [Run {{< param "PRODUCT_NAME" >}}][Run] +* [Configure {{< param "PRODUCT_NAME" >}}][Configure] [Homebrew]: https://brew.sh [Run]: ../../run/macos/ diff --git a/docs/sources/set-up/install/puppet.md b/docs/sources/set-up/install/puppet.md index 236fecb6e9..2624b61e9c 100644 --- a/docs/sources/set-up/install/puppet.md +++ b/docs/sources/set-up/install/puppet.md @@ -15,7 +15,7 @@ You can use Puppet to install and manage {{< param "PRODUCT_NAME" >}}. ## Before you begin - These steps assume you already have a working [Puppet][] setup. -- You can add the following manifest to any new or existing module. +- You can add the following manifest to any module. - The manifest installs {{< param "PRODUCT_NAME" >}} from the package repositories. It targets Linux systems from the following families: - Debian (including Ubuntu) - RedHat Enterprise Linux (including Fedora) @@ -37,7 +37,7 @@ To add {{< param "PRODUCT_NAME" >}} to a host: } ``` -1. Create a new [Puppet][] manifest with the following class to add the Grafana package repositories, install the `alloy` package, and run the service: +1. 
Create a [Puppet][] manifest with the following class to add the Grafana package repositories, install the `alloy` package, and run the service: ```ruby class grafana_alloy::grafana_alloy () { @@ -96,11 +96,11 @@ To add {{< param "PRODUCT_NAME" >}} to a host: The `alloy` package installs a default configuration file that doesn't send telemetry anywhere. The default configuration file location is `/etc/alloy/config.alloy`. -You can replace this file with your own configuration, or create a new configuration file for the service to use. +You can replace this file with your own configuration, or create a configuration file for the service to use. ## Next steps -- [Configure {{< param "PRODUCT_NAME" >}}][Configure] +* [Configure {{< param "PRODUCT_NAME" >}}][Configure] [Puppet]: https://www.puppet.com/ [Configure]: ../../../configure/linux/ diff --git a/docs/sources/set-up/install/windows.md b/docs/sources/set-up/install/windows.md index 00f2d67232..98c6dbf418 100644 --- a/docs/sources/set-up/install/windows.md +++ b/docs/sources/set-up/install/windows.md @@ -22,7 +22,7 @@ To do a standard graphical install of {{< param "PRODUCT_NAME" >}} on Windows, p 1. Download the file called `alloy-installer-windows-amd64.exe.zip`. -1. Unzip the downloaded file. +1. Extract the downloaded file. 1. Double-click on `alloy-installer-windows-amd64.exe` to install {{< param "PRODUCT_NAME" >}}. @@ -38,7 +38,7 @@ To do a silent install of {{< param "PRODUCT_NAME" >}} on Windows, perform the f 1. Download the file called `alloy-installer-windows-amd64.exe.zip`. -1. Unzip the downloaded file. +1. Extract the downloaded file. 1. 
Run the following command in PowerShell or Command Prompt: From 2027b0a03e4cc3d9c8410646931f80f64e9e636f Mon Sep 17 00:00:00 2001 From: Clayton Cornell Date: Wed, 20 Nov 2024 16:01:55 -0800 Subject: [PATCH 02/12] Additional linting warning and error cleanup --- docs/sources/set-up/migrate/from-flow.md | 48 ++++++++----------- docs/sources/set-up/migrate/from-operator.md | 7 ++- docs/sources/set-up/migrate/from-otelcol.md | 47 +++++++++--------- .../sources/set-up/migrate/from-prometheus.md | 34 +++++++------ docs/sources/set-up/migrate/from-promtail.md | 30 ++++++------ docs/sources/set-up/migrate/from-static.md | 15 +++--- 6 files changed, 86 insertions(+), 95 deletions(-) diff --git a/docs/sources/set-up/migrate/from-flow.md b/docs/sources/set-up/migrate/from-flow.md index f25459cda8..69c98a67e2 100644 --- a/docs/sources/set-up/migrate/from-flow.md +++ b/docs/sources/set-up/migrate/from-flow.md @@ -29,16 +29,16 @@ If you want a fresh start with {{< param "PRODUCT_NAME" >}}, you can [uninstall ## Differences between Grafana Agent Flow and {{% param "PRODUCT_NAME" %}} -* Only functionality marked _Generally available_ may be used by default. +By default, you can only use functionality marked _Generally available_. + You can enable functionality in _Experimental_ and _Public preview_ by setting the `--stability.level` flag in [run]. + * The default value of `--storage.path` has changed from `data-agent/` to `data-alloy/`. * The default value of `--server.http.memory-addr` has changed from `agent.internal:12345` to `alloy.internal:12345`. * Debug metrics reported by {{% param "PRODUCT_NAME" %}} are prefixed with `alloy_` instead of `agent_`. * The "classic modules", `module.file`, `module.git`, `module.http`, and `module.string` have been removed in favor of import configuration blocks. * The `prometheus.exporter.vsphere` component has been replaced by the `otelcol.receiver.vcenter` component. 
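+The `agent_` to `alloy_` debug-metric prefix change above means dashboard and alert queries need the same rename. A minimal sketch of the substitution involved — the metric name used here is hypothetical:

```shell
# Hypothetical query; only the agent_ -> alloy_ prefix change is the
# documented behavior.
query='sum(rate(agent_component_evaluation_seconds_count[5m]))'
renamed=$(printf '%s' "$query" | sed 's/agent_/alloy_/g')
echo "$renamed"
```

The same substitution applies to recording rules and alert expressions that still reference the old prefix.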
-[run]: ../../../reference/cli/run - ## Steps ### Prepare your Grafana Agent Flow configuration @@ -48,10 +48,10 @@ You can enable functionality in _Experimental_ and _Public preview_ by setting t Before migrating, modify your Grafana Agent Flow configuration to remove or replace any unsupported components: * The "classic modules" in Grafana Agent Flow have been removed in favor of the modules introduced in v0.40: - * `module.file` is replaced by the [import.file] configuration block. - * `module.git` is replaced by the [import.git] configuration block. - * `module.http` is replaced by the [import.http] configuration block. - * `module.string` is replaced by the [import.string] configuration block. + * `module.file` is replaced by the [import.file] configuration block. + * `module.git` is replaced by the [import.git] configuration block. + * `module.http` is replaced by the [import.http] configuration block. + * `module.string` is replaced by the [import.string] configuration block. * `prometheus.exporter.vsphere` is replaced by the [otelcol.receiver.vcenter] component. [import.file]: ../../../reference/config-blocks/import.file/ @@ -66,26 +66,17 @@ Follow the [installation instructions][install] for {{< param "PRODUCT_NAME" >}} When deploying {{< param "PRODUCT_NAME" >}}, be aware of the following settings: -- {{< param "PRODUCT_NAME" >}} should be deployed with topology that's the same as Grafana Agent Flow. +* {{< param "PRODUCT_NAME" >}} should be deployed with topology that's the same as Grafana Agent Flow. The CPU, and storage limits should match. -- Custom command-line flags configured in Grafana Agent Flow should be reflected in your {{< param "PRODUCT_NAME" >}} installation. 
-- {{< param "PRODUCT_NAME" >}} may need to be deployed with the `--stability.level` flag in [run] to enable non-stable components: - - Set `--stability.level` to `experimental` if you are using the following component: - - [otelcol.receiver.vcenter] - - Otherwise, `--stability.level` may be omitted or set to the default value (`generally-available`). -- When installing on Kubernetes, update your `values.yaml` file to rename the `agent` key to `alloy`. -- If you are deploying {{< param "PRODUCT_NAME" >}} as a cluster: - - Set the number of instances to match the number of instances in your Grafana Agent Flow cluster. - - Don't enable auto-scaling until the migration is complete. - -[install]: ../../../set-up/install -[run]: ../../../reference/cli/run -[discovery.process]: ../../../reference/components/discovery.process/ -[pyroscope.ebpf]: ../../../reference/components/pyroscope.ebpf/ -[pyroscope.java]: ../../../reference/components/pyroscope.java/ -[pyroscope.scrape]: ../../../reference/components/pyroscope.scrape/ -[pyroscope.write]: ../../../reference/components/pyroscope.write/ -[otelcol.receiver.vcenter]: ../../../reference/components/otelcol/otelcol.receiver.vcenter/ +* Custom command-line flags configured in Grafana Agent Flow should be reflected in your {{< param "PRODUCT_NAME" >}} installation. +* {{< param "PRODUCT_NAME" >}} may need to be deployed with the `--stability.level` flag in [run] to enable non-stable components: + * Set `--stability.level` to `experimental` if you are using the following component: + * [otelcol.receiver.vcenter] + * Otherwise, `--stability.level` may be omitted or set to the default value (`generally-available`). +* When installing on Kubernetes, update your `values.yaml` file to rename the `agent` key to `alloy`. +* If you are deploying {{< param "PRODUCT_NAME" >}} as a cluster: + * Set the number of instances to match the number of instances in your Grafana Agent Flow cluster. 
+ * Don't enable auto-scaling until the migration is complete. ### Migrate Grafana Agent Flow data to {{% param "PRODUCT_NAME" %}} @@ -96,7 +87,7 @@ Migrate your Grafana Agent Flow data to {{< param "PRODUCT_NAME" >}} by copying * Windows installations: copy the _contents_ of `%ProgramData%\Grafana Agent Flow\data` to `%ProgramData%\GrafanaLabs\Alloy\data`. * Docker: copy the contents of mounted volumes to a new directory, and then mount the new directory when running {{% param "PRODUCT_NAME" %}}. * Kubernetes: use `kubectl cp` to copy the _contents_ of the data directory on Flow pods to the data directory on {{% param "PRODUCT_NAME" %}} pods. - * The data directory is determined by the `agent.storagePath` (default `/tmp/agent`) and `alloy.storagePath` (default `/tmp/alloy`) fields in `values.yaml`. + * The data directory is determined by the `agent.storagePath` (default `/tmp/agent`) and `alloy.storagePath` (default `/tmp/alloy`) fields in `values.yaml`. ### Migrate pipelines that receive data over the network @@ -126,3 +117,6 @@ After you have completed the migration, you can uninstall Grafana Agent Flow. ### Cleanup temporary changes You can enable auto-scaling in your {{< param "PRODUCT_NAME" >}} deployment if you disabled it during the migration process. + +[install]: ../../../set-up/install +[run]: ../../../reference/cli/run diff --git a/docs/sources/set-up/migrate/from-operator.md b/docs/sources/set-up/migrate/from-operator.md index 29e714ede7..a4227a2b8a 100644 --- a/docs/sources/set-up/migrate/from-operator.md +++ b/docs/sources/set-up/migrate/from-operator.md @@ -44,7 +44,7 @@ You can migrate from Grafana Agent Operator to {{< param "PRODUCT_NAME" >}}. 1. 
Install the Grafana Helm repository: - ``` + ```shell helm repo add grafana https://grafana.github.io/helm-charts helm repo update ``` @@ -121,7 +121,7 @@ Refer to the documentation for the relevant components for additional informatio - [prometheus.operator.probes][] - [prometheus.scrape][] -## Collecting logs +## Collect logs The current recommendation is to create an additional DaemonSet deployment of {{< param "PRODUCT_NAME" >}} to scrape logs. @@ -145,7 +145,7 @@ alloy: This command installs a release named `alloy-logs` in the `monitoring` namespace: -``` +```shell helm upgrade alloy-logs grafana/alloy -i -n monitoring -f values-logs.yaml --set-file alloy.configMap.content=config-logs.alloy ``` @@ -277,7 +277,6 @@ The [reference documentation][component documentation] should help convert those [clustering]: ../../../get-started/clustering/ [deployment guide]: ../../../set-up/deploy/ [operator guide]: https://grafana.com/docs/agent/latest/operator/deploy-agent-operator-resources/#deploy-a-metricsinstance-resource -[Helm chart]: ../../../set-up/install/kubernetes/ [remote.kubernetes.secret]: ../../../reference/components/remote/remote.kubernetes.secret/ [prometheus.remote_write]: ../../../reference/components/prometheus/prometheus.remote_write/ [prometheus.operator.podmonitors]: ../../../reference/components/prometheus/prometheus.operator.podmonitors/ diff --git a/docs/sources/set-up/migrate/from-otelcol.md b/docs/sources/set-up/migrate/from-otelcol.md index 678d0c6a34..f5ef089016 100644 --- a/docs/sources/set-up/migrate/from-otelcol.md +++ b/docs/sources/set-up/migrate/from-otelcol.md @@ -25,28 +25,27 @@ This topic describes how to: ## Before you begin -* You must have an existing OpenTelemetry Collector configuration. +* You must have an OpenTelemetry Collector configuration. * You must have a set of OpenTelemetry Collector applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}. 
* You must be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}.
 
 ## Convert an OpenTelemetry Collector configuration
 
 To fully migrate your configuration from [OpenTelemetry Collector] to {{< param "PRODUCT_NAME" >}}, you must convert your OpenTelemetry Collector configuration into a {{< param "PRODUCT_NAME" >}} configuration.
-This conversion will enable you to take full advantage of the many additional features available in {{< param "PRODUCT_NAME" >}}.
+This conversion allows you to take full advantage of the many additional features available in {{< param "PRODUCT_NAME" >}}.
 
-> In this task, you will use the [convert][] CLI command to output a {{< param "PRODUCT_NAME" >}}
-> configuration from a OpenTelemetry Collector configuration.
+In this task, you use the [convert][] CLI command to output a {{< param "PRODUCT_NAME" >}} configuration from an OpenTelemetry Collector configuration.
 
 1. Open a terminal window and run the following command.
 
-   ```
+   ```shell
    alloy convert --source-format=otelcol --output=
    ```
 
    Replace the following:
 
-   - _``_: The full path to the OpenTelemetry Collector configuration.
-   - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration.
+   * _``_: The full path to the OpenTelemetry Collector configuration.
+   * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration.
 
 1. [Run][] {{< param "PRODUCT_NAME" >}} using the new {{< param "PRODUCT_NAME" >}} configuration from _``_:
 
@@ -60,25 +59,26 @@ This conversion will enable you to take full advantage of the many additional fe
    Make sure you fully test the converted configuration before using it in a production environment.
    {{< /admonition >}}
 
-   ```
+   ```shell
    alloy convert --source-format=otelcol --bypass-errors --output=
    ```
 
    Replace the following:
 
-   - _``_: The full path to the OpenTelemetry Collector configuration.
-   - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration.
+ * _``_: The full path to the OpenTelemetry Collector configuration. + * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. 1. You can also output a diagnostic report by including the `--report` flag. - ``` + ```shell alloy convert --source-format=otelcol --report= --output= ``` + Replace the following: - - _``_: The full path to the OpenTelemetry Collector configuration. - - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. - - _``_: The output path for the report. + * _``_: The full path to the OpenTelemetry Collector configuration. + * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. + * _``_: The output path for the report. Using the [example][] OpenTelemetry Collector configuration below, the diagnostic report provides the following information: @@ -92,17 +92,16 @@ This conversion will enable you to take full advantage of the many additional fe ## Run an OpenTelemetry Collector configuration -If you’re not ready to completely switch to a {{< param "PRODUCT_NAME" >}} configuration, you can run {{< param "FULL_PRODUCT_NAME" >}} using your existing OpenTelemetry Collector configuration. +If you're not ready to completely switch to a {{< param "PRODUCT_NAME" >}} configuration, you can run {{< param "FULL_PRODUCT_NAME" >}} using your OpenTelemetry Collector configuration. The `--config.format=otelcol` flag tells {{< param "FULL_PRODUCT_NAME" >}} to convert your OpenTelemetry Collector configuration to a {{< param "PRODUCT_NAME" >}} configuration and load it directly without saving the new configuration. -This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your existing OpenTelemetry Collector configuration infrastructure. +This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your OpenTelemetry Collector configuration infrastructure. 
-> In this task, you will use the [run][] CLI command to run {{< param "PRODUCT_NAME" >}} -> using an OpenTelemetry Collector configuration. +In this task, you use the [run][run_cli] CLI command to run {{< param "PRODUCT_NAME" >}} using an OpenTelemetry Collector configuration. [Run][] {{< param "PRODUCT_NAME" >}} and include the command line flag `--config.format=otelcol`. Your configuration file must be a valid OpenTelemetry Collector configuration file rather than a {{< param "PRODUCT_NAME" >}} configuration file. -### Debugging +### Debug 1. You can follow the convert CLI command [debugging][] instructions to generate a diagnostic report. @@ -113,7 +112,7 @@ Your configuration file must be a valid OpenTelemetry Collector configuration fi {{< admonition type="caution" >}} If you bypass the errors, the behavior of the converted configuration may not match the original Prometheus configuration. - Do not use this flag in a production environment. + Don't use this flag in a production environment. {{< /admonition >}} ## Example @@ -157,14 +156,14 @@ service: The convert command takes the YAML file as input and outputs an {{< param "PRODUCT_NAME" >}} configuration file. -``` +```shell alloy convert --source-format=otelcol --output= ``` Replace the following: -- _``_: The full path to the OpenTelemetry Collector configuration. -- _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. +* _``_: The full path to the OpenTelemetry Collector configuration. +* _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. 
The new {{< param "PRODUCT_NAME" >}} configuration file looks like this: @@ -225,7 +224,7 @@ The following list is specific to the convert command and not {{< param "PRODUCT [Components]: ../../../get-started/components/ [Component Reference]: ../../../reference/components/ [convert]: ../../../reference/cli/convert/ -[run]: ../../../reference/cli/run/ +[run_cli]: ../../../reference/cli/run/ [Run]: ../../../get-started/run/ [DebuggingUI]: ../../../troubleshoot/debug/ [UI]: ../../../troubleshoot/debug/#alloy-ui diff --git a/docs/sources/set-up/migrate/from-prometheus.md b/docs/sources/set-up/migrate/from-prometheus.md index b0bbc1e9dd..855d2a71ae 100644 --- a/docs/sources/set-up/migrate/from-prometheus.md +++ b/docs/sources/set-up/migrate/from-prometheus.md @@ -24,17 +24,16 @@ This topic describes how to: ## Before you begin -* You must have an existing Prometheus configuration. +* You must have a Prometheus configuration. * You must have a set of Prometheus applications ready to push telemetry data to {{< param "PRODUCT_NAME" >}}. * You must be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}. ## Convert a Prometheus configuration To fully migrate your configuration from [Prometheus] to {{< param "PRODUCT_NAME" >}}, you must convert your Prometheus configuration into an {{< param "PRODUCT_NAME" >}} configuration. -This conversion will enable you to take full advantage of the many additional features available in {{< param "PRODUCT_NAME" >}}. +This conversion allows you to take full advantage of the many additional features available in {{< param "PRODUCT_NAME" >}}. -> In this task, you will use the [convert][] CLI command to output an {{< param "PRODUCT_NAME" >}} -> configuration from a Prometheus configuration. +In this task, you use the [convert][] CLI command to output an {{< param "PRODUCT_NAME" >}} configuration from a Prometheus configuration. 1. Open a terminal window and run the following command. 
@@ -44,8 +43,8 @@ This conversion will enable you to take full advantage of the many additional fe Replace the following: - - _``_: The full path to the Prometheus configuration. - - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. + * _``_: The full path to the Prometheus configuration. + * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. 1. [Run][] {{< param "PRODUCT_NAME" >}} using the new {{< param "PRODUCT_NAME" >}} configuration from _``_: @@ -65,8 +64,8 @@ This conversion will enable you to take full advantage of the many additional fe Replace the following: - - _``_: The full path to the Prometheus configuration. - - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. + * _``_: The full path to the Prometheus configuration. + * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. 1. You can also output a diagnostic report by including the `--report` flag. @@ -76,9 +75,9 @@ This conversion will enable you to take full advantage of the many additional fe Replace the following: - - _``_: The full path to the Prometheus configuration. - - _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. - - _``_: The output path for the report. + * _``_: The full path to the Prometheus configuration. + * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. + * _``_: The output path for the report. Using the [example][] Prometheus configuration below, the diagnostic report provides the following information: @@ -91,17 +90,16 @@ This conversion will enable you to take full advantage of the many additional fe ## Run a Prometheus configuration -If you’re not ready to completely switch to an {{< param "PRODUCT_NAME" >}} configuration, you can run {{< param "PRODUCT_NAME" >}} using your existing Prometheus configuration. 
+If you're not ready to completely switch to an {{< param "PRODUCT_NAME" >}} configuration, you can run {{< param "PRODUCT_NAME" >}} using your Prometheus configuration. The `--config.format=prometheus` flag tells {{< param "PRODUCT_NAME" >}} to convert your Prometheus configuration to an {{< param "PRODUCT_NAME" >}} configuration and load it directly without saving the new configuration. -This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your existing Prometheus configuration infrastructure. +This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your Prometheus configuration infrastructure. -> In this task, you will use the [run][] CLI command to run {{< param "PRODUCT_NAME" >}} -> using a Prometheus configuration. +In this task, you use the [run][] CLI command to run {{< param "PRODUCT_NAME" >}} using a Prometheus configuration. [Run][run alloy] {{< param "PRODUCT_NAME" >}} and include the command line flag `--config.format=prometheus`. Your configuration file must be a valid Prometheus configuration file rather than an {{< param "PRODUCT_NAME" >}} configuration file. -### Debugging +### Debug 1. You can follow the convert CLI command [debugging][] instructions to generate a diagnostic report. @@ -146,8 +144,8 @@ alloy convert --source-format=prometheus --output= `_: The full path to the Prometheus configuration. -- _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. +* _``_: The full path to the Prometheus configuration. +* _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. 
The new {{< param "PRODUCT_NAME" >}} configuration file looks like this: diff --git a/docs/sources/set-up/migrate/from-promtail.md b/docs/sources/set-up/migrate/from-promtail.md index fceaf88f04..37594b91e3 100644 --- a/docs/sources/set-up/migrate/from-promtail.md +++ b/docs/sources/set-up/migrate/from-promtail.md @@ -87,7 +87,7 @@ This conversion will enable you to take full advantage of the many additional fe ## Run a Promtail configuration -If you’re not ready to completely switch to an {{< param "PRODUCT_NAME" >}} configuration, you can run {{< param "PRODUCT_NAME" >}} using your existing Promtail configuration. +If you're not ready to completely switch to an {{< param "PRODUCT_NAME" >}} configuration, you can run {{< param "PRODUCT_NAME" >}} using your existing Promtail configuration. The `--config.format=promtail` flag tells {{< param "PRODUCT_NAME" >}} to convert your Promtail configuration to {{< param "PRODUCT_NAME" >}} and load it directly without saving the new configuration. This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your existing Promtail configuration infrastructure. @@ -96,7 +96,7 @@ This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your exist [Run][run alloy] {{< param "PRODUCT_NAME" >}} and include the command line flag `--config.format=promtail`. Your configuration file must be a valid Promtail configuration file rather than an {{< param "PRODUCT_NAME" >}} configuration file. -### Debugging +### Debug 1. You can follow the convert CLI command [debugging][] instructions to generate a diagnostic report. @@ -107,7 +107,7 @@ Your configuration file must be a valid Promtail configuration file rather than {{< admonition type="caution" >}} If you bypass the errors, the behavior of the converted configuration may not match the original Promtail configuration. - Do not use this flag in a production environment. + Don't use this flag in a production environment. 
{{< /admonition >}} ## Example @@ -143,22 +143,22 @@ The new {{< param "PRODUCT_NAME" >}} configuration file looks like this: ```alloy local.file_match "example" { - path_targets = [{ - __address__ = "localhost", - __path__ = "/var/log/*.log", - }] + path_targets = [{ + __address__ = "localhost", + __path__ = "/var/log/*.log", + }] } loki.source.file "example" { - targets = local.file_match.example.targets - forward_to = [loki.write.default.receiver] + targets = local.file_match.example.targets + forward_to = [loki.write.default.receiver] } loki.write "default" { - endpoint { - url = "http://localhost/loki/api/v1/push" - } - external_labels = {} + endpoint { + url = "http://localhost/loki/api/v1/push" + } + external_labels = {} } ``` @@ -174,11 +174,11 @@ The following list is specific to the convert command and not {{< param "PRODUCT * Check if you are setting any environment variables, whether [expanded in the configuration file][] itself or consumed directly by Promtail, such as `JAEGER_AGENT_HOST`. * In {{< param "PRODUCT_NAME" >}}, the positions file is saved at a different location. Refer to the [loki.source.file][] documentation for more details. - Check if you have any existing setup, for example, a Kubernetes Persistent Volume, that you must update to use the new positions file path. + Check if you have any existing setup, for example, a Kubernetes Persistent Volume, that you must update to use the new positions path. * Metamonitoring metrics exposed by {{< param "PRODUCT_NAME" >}} usually match Promtail metamonitoring metrics but will use a different name. Make sure that you use the new metric names, for example, in your alerts and dashboards queries. * The logs produced by {{< param "PRODUCT_NAME" >}} will differ from those produced by Promtail. -* {{< param "PRODUCT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI][], which differs from Promtail's Web UI. 
+* {{< param "PRODUCT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI][], which differs from the Promtail Web UI. [Promtail]: https://www.grafana.com/docs/loki//clients/promtail/ [debugging]: #debugging diff --git a/docs/sources/set-up/migrate/from-static.md b/docs/sources/set-up/migrate/from-static.md index f19ef4d42d..cac0e41b44 100644 --- a/docs/sources/set-up/migrate/from-static.md +++ b/docs/sources/set-up/migrate/from-static.md @@ -95,7 +95,7 @@ This conversion allows you to take full advantage of the many additional feature ## Run a Grafana Agent Static mode configuration -If you’re not ready to completely switch to an {{< param "PRODUCT_NAME" >}} configuration, you can run {{< param "PRODUCT_NAME" >}} using your Grafana Agent Static configuration. +If you're not ready to completely switch to an {{< param "PRODUCT_NAME" >}} configuration, you can run {{< param "PRODUCT_NAME" >}} using your Grafana Agent Static configuration. The `--config.format=static` flag tells {{< param "PRODUCT_NAME" >}} to convert your Grafana Agent Static configuration to {{< param "PRODUCT_NAME" >}} and load it directly without saving the configuration. This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your Grafana Agent Static configuration infrastructure. @@ -104,11 +104,11 @@ This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your Grafa [Run][] {{< param "PRODUCT_NAME" >}} and include the command line flag `--config.format=static`. Your configuration file must be a valid Grafana Agent Static configuration file. -### Debugging +### Debug 1. Follow the convert CLI command [debugging][] instructions to generate a diagnostic report. -1. Refer to the {{< param "PRODUCT_NAME" >}} [debugging UI][UI] for more information about running {{< param "PRODUCT_NAME" >}}. +1. Refer to the {{< param "PRODUCT_NAME" >}} [debugging UI][UI_debug] for more information about running {{< param "PRODUCT_NAME" >}}. 1. 
If your Grafana Agent Static configuration can't be converted and loaded directly into {{< param "PRODUCT_NAME" >}}, diagnostic information is sent to `stderr`. You can use the `--config.bypass-conversion-errors` flag with `--config.format=static` to bypass any non-critical issues and start {{< param "PRODUCT_NAME" >}}. @@ -332,9 +332,10 @@ You can convert [integrations next][] configurations by adding the `extra-args` alloy convert --source-format=static --extra-args="-enable-features=integrations-next" --output= ``` - Replace the following: - * _``_: The full path to the configuration file for Grafana Agent Static. - * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. +Replace the following: + +* _``_: The full path to the configuration file for Grafana Agent Static. +* _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. ## Environment variables @@ -376,7 +377,7 @@ The following list is specific to the convert command and not {{< param "PRODUCT [convert]: ../../../reference/cli/convert/ [run]: ../../../reference/cli/run/ [run alloy]: ../../../set-up/run/ -[UI]: ../../../troubleshoot/debug/ +[UI_debug]: ../../../troubleshoot/debug/ [configuration]: ../../../get-started/configuration-syntax/ [Integrations next]: https://grafana.com/docs/agent/latest/static/configuration/integrations/integrations-next/ [Agent Management]: https://grafana.com/docs/agent/latest/static/configuration/agent-management/ From ad098e654ea17ab83e97c0719c6ddd2adf31b34b Mon Sep 17 00:00:00 2001 From: Clayton Cornell Date: Mon, 25 Nov 2024 10:10:45 -0800 Subject: [PATCH 03/12] More work on removing linting errors --- docs/sources/collect/choose-component.md | 13 ++- .../sources/collect/datadog-traces-metrics.md | 40 ++++---- docs/sources/collect/ecs-openteletry-data.md | 4 +- docs/sources/collect/logs-in-kubernetes.md | 43 ++++----- docs/sources/collect/metamonitoring.md | 37 ++++---- docs/sources/collect/opentelemetry-data.md | 43 ++++----- 
.../collect/opentelemetry-to-lgtm-stack.md | 94 +++++++++---------- docs/sources/introduction/_index.md | 11 --- docs/sources/set-up/deploy.md | 28 +++--- docs/sources/set-up/install/binary.md | 2 +- docs/sources/set-up/install/chef.md | 2 +- docs/sources/set-up/install/docker.md | 15 ++- docs/sources/set-up/migrate/from-flow.md | 6 +- docs/sources/set-up/migrate/from-otelcol.md | 4 +- .../sources/set-up/migrate/from-prometheus.md | 4 +- docs/sources/set-up/migrate/from-promtail.md | 20 ++-- docs/sources/set-up/migrate/from-static.md | 2 +- docs/sources/set-up/run/binary.md | 2 +- 18 files changed, 173 insertions(+), 197 deletions(-) diff --git a/docs/sources/collect/choose-component.md b/docs/sources/collect/choose-component.md index 05f9d4df0b..36a880d54c 100644 --- a/docs/sources/collect/choose-component.md +++ b/docs/sources/collect/choose-component.md @@ -17,10 +17,9 @@ The components you select and configure depend on the telemetry signals you want ## Metrics for infrastructure Use `prometheus.*` components to collect infrastructure metrics. -This will give you the best experience with [Grafana Infrastructure Observability][]. +This gives you the best experience with [Grafana Infrastructure Observability][]. -For example, you can get metrics for a Linux host using `prometheus.exporter.unix`, -and metrics for a MongoDB instance using `prometheus.exporter.mongodb`. +For example, you can get metrics for a Linux host using `prometheus.exporter.unix`, and metrics for a MongoDB instance using `prometheus.exporter.mongodb`. You can also scrape any Prometheus endpoint using `prometheus.scrape`. Use `discovery.*` components to find targets for `prometheus.scrape`. @@ -30,7 +29,7 @@ Use `discovery.*` components to find targets for `prometheus.scrape`. ## Metrics for applications Use `otelcol.receiver.*` components to collect application metrics. -This will give you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native. 
+This gives you the best experience with [Grafana Application Observability][], which is OpenTelemetry-native. For example, use `otelcol.receiver.otlp` to collect metrics from OpenTelemetry-instrumented applications. @@ -48,12 +47,12 @@ with logs collected by `loki.*` components. For example, the label that both `prometheus.*` and `loki.*` components would use for a Kubernetes namespace is called `namespace`. On the other hand, gathering logs using an `otelcol.*` component might use the [OpenTelemetry semantics][OTel-semantics] label called `k8s.namespace.name`, -which wouldn't correspond to the `namespace` label that is common in the Prometheus ecosystem. +which wouldn't correspond to the `namespace` label that's common in the Prometheus ecosystem. ## Logs from applications Use `otelcol.receiver.*` components to collect application logs. -This will gather the application logs in an OpenTelemetry-native way, making it easier to +This gathers the application logs in an OpenTelemetry-native way, making it easier to correlate the logs with OpenTelemetry metrics and traces coming from the application. All application telemetry must follow the [OpenTelemetry semantic conventions][OTel-semantics], simplifying this correlation. @@ -65,7 +64,7 @@ For example, if your application runs on Kubernetes, every trace, log, and metri Use `otelcol.receiver.*` components to collect traces. -If your application is not yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically. +If your application isn't yet instrumented for tracing, use `beyla.ebpf` to generate traces for it automatically. 
## Profiles diff --git a/docs/sources/collect/datadog-traces-metrics.md b/docs/sources/collect/datadog-traces-metrics.md index 034a093e8c..dcf0c9c054 100644 --- a/docs/sources/collect/datadog-traces-metrics.md +++ b/docs/sources/collect/datadog-traces-metrics.md @@ -20,9 +20,9 @@ This topic describes how to: ## Before you begin -* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and/or traces. +* Ensure that at least one instance of the [Datadog Agent][] is collecting metrics and traces. * Identify where you will write the collected telemetry. - Metrics can be written to [Prometheus]() or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics. + Metrics can be written to [Prometheus][] or any other OpenTelemetry-compatible database such as Grafana Mimir, Grafana Cloud, or Grafana Enterprise Metrics. Traces can be written to Grafana Tempo, Grafana Cloud, or Grafana Enterprise Traces. * Be familiar with the concept of [Components][] in {{< param "PRODUCT_NAME" >}}. @@ -45,7 +45,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data Replace the following: - - _``_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces will be sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`. + * _``_: The full URL of the OpenTelemetry-compatible endpoint where metrics and traces are sent, such as `https://otlp-gateway-prod-eu-west-2.grafana.net/otlp`. 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block. @@ -58,8 +58,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data Replace the following: - - _``_: The basic authentication username. - - _``_: The basic authentication password or API key. + * _``_: The basic authentication username. + * _``_: The basic authentication password or API key. 
## Configure the {{% param "PRODUCT_NAME" %}} Datadog Receiver @@ -78,7 +78,7 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data ```alloy otelcol.processor.deltatocumulative "default" { - max_stale = “” + max_stale = "" max_streams = output { metrics = [otelcol.processor.batch.default.input] @@ -88,14 +88,14 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data Replace the following: - - _``_: How long until a series not receiving new samples is removed, such as "5m". - - _``_: The upper limit of streams to track. New streams exceeding this limit are dropped. + * _``_: How long until a series not receiving new samples is removed, such as "5m". + * _``_: The upper limit of streams to track. New streams exceeding this limit are dropped. 1. Add the following `otelcol.receiver.datadog` component to your configuration file. ```alloy otelcol.receiver.datadog "default" { - endpoint = “:” + endpoint = ":" output { metrics = [otelcol.processor.deltatocumulative.default.input] traces = [otelcol.processor.batch.default.input] @@ -105,8 +105,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data Replace the following: - - _``_: The host address where the receiver will listen. - - _``_: The port where the receiver will listen. + * _``_: The host address where the receiver listens. + * _``_: The port where the receiver listens. 1. If your endpoint requires basic authentication, paste the following inside the `endpoint` block. @@ -119,8 +119,8 @@ The [otelcol.exporter.otlp][] component is responsible for delivering OTLP data Replace the following: - - _``_: The basic authentication username. - - _``_: The basic authentication password or API key. + * _``_: The basic authentication username. + * _``_: The basic authentication password or API key. 
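Taken together, the components above form a single pipeline from the Datadog receiver through the delta-to-cumulative and batch processors to the OTLP exporter. The following condensed sketch is an illustration, not a prescription — the listen address and the `max_stale` and `max_streams` values are placeholders:

```alloy
otelcol.receiver.datadog "default" {
  endpoint = "0.0.0.0:9126" // placeholder host and port
  output {
    metrics = [otelcol.processor.deltatocumulative.default.input]
    traces  = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.deltatocumulative "default" {
  max_stale   = "5m"   // drop a series after 5 minutes without new samples
  max_streams = 100000 // upper limit of tracked streams
  output {
    metrics = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "https://otlp-gateway-prod-eu-west-2.grafana.net/otlp"
  }
}
```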
## Configure Datadog Agent to forward telemetry to the {{% param "PRODUCT_NAME" %}} Datadog Receiver @@ -139,10 +139,10 @@ We recommend this approach for current Datadog users who want to try using {{< p Replace the following: - - _``_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found. - - _``_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed. + * _``_: The hostname where the {{< param "PRODUCT_NAME" >}} receiver is found. + * _``_: The port where the {{< param "PRODUCT_NAME" >}} receiver is exposed. -Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}. +Alternatively, you might want your Datadog Agent to send metrics only to {{< param "PRODUCT_NAME" >}}. You can do this by setting up your Datadog Agent in the following way: 1. Replace the DD_URL in the configuration YAML: @@ -150,8 +150,8 @@ You can do this by setting up your Datadog Agent in the following way: ```yaml dd_url: http://: ``` -Or by setting an environment variable: +Or by setting an environment variable: ```bash DD_DD_URL='{"http://:": ["datadog-receiver"]}' @@ -169,7 +169,5 @@ To use this component, you need to start {{< param "PRODUCT_NAME" >}} with addit [Datadog]: https://www.datadoghq.com/ [Datadog Agent]: https://docs.datadoghq.com/agent/ [Prometheus]: https://prometheus.io -[OTLP]: https://opentelemetry.io/docs/specs/otlp/ -[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp -[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp -[Components]: ../../get-started/components +[otelcol.exporter.otlp]: ../../reference/components/otelcol/otelcol.exporter.otlp/ +[Components]: ../../get-started/components/ diff --git a/docs/sources/collect/ecs-openteletry-data.md b/docs/sources/collect/ecs-openteletry-data.md index 3a7a53a483..9cfc705fa7 100644 --- a/docs/sources/collect/ecs-openteletry-data.md +++ b/docs/sources/collect/ecs-openteletry-data.md @@ -14,7 
+14,7 @@ There are three different ways you can use {{< param "PRODUCT_NAME" >}} to colle 1. [Use a custom OpenTelemetry configuration file from the SSM Parameter store](#use-a-custom-opentelemetry-configuration-file-from-the-ssm-parameter-store). 1. [Create an ECS task definition](#create-an-ecs-task-definition). -1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar). +1. [Run {{< param "PRODUCT_NAME" >}} directly in your instance, or as a Kubernetes sidecar](#run-alloy-directly-in-your-instance-or-as-a-kubernetes-sidecar) ## Before you begin @@ -82,7 +82,7 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet * Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, Xray, and Container Resource utilization metrics. 1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template. -## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar +## Run Alloy directly in your instance, or as a Kubernetes sidecar SSH or connect to the Amazon ECS or AWS Fargate-managed container. Refer to [9 steps to SSH into an AWS Fargate managed container][steps] for more information about using SSH with Amazon ECS or AWS Fargate. diff --git a/docs/sources/collect/logs-in-kubernetes.md b/docs/sources/collect/logs-in-kubernetes.md index 3e02efa808..37c4f0f132 100644 --- a/docs/sources/collect/logs-in-kubernetes.md +++ b/docs/sources/collect/logs-in-kubernetes.md @@ -56,9 +56,9 @@ To configure a `loki.write` component for logs delivery, complete the following Replace the following: - - _`