From 90253d34a53226d260589c52b2a3e14808d94e4a Mon Sep 17 00:00:00 2001
From: Clayton Cornell <131809008+clayton-cornell@users.noreply.github.com>
Date: Wed, 3 Apr 2024 13:21:24 -0700
Subject: [PATCH] Edit cycle on Alloy docs to catch typos and rework some text
(#109)
* Fix typo in heading
* Updates from review
* Fix command description
* Style updates for some reference topics
* Modify and rework tutorials and add prereqs
---
.../configuration-syntax/components.md | 4 +-
docs/sources/introduction/_index.md | 4 +-
.../reference/components/discovery.azure.md | 48 ++++++-------
.../reference/components/discovery.consul.md | 12 ++--
.../components/discovery.consulagent.md | 67 +++++++++----------
.../components/discovery.digitalocean.md | 18 ++---
docs/sources/tasks/migrate/_index.md | 2 +-
docs/sources/tasks/migrate/from-operator.md | 8 +--
.../first-components-and-stdlib/index.md | 53 +++++++++------
docs/sources/tutorials/get-started.md | 25 +++----
.../logs-and-relabeling-basics/index.md | 66 +++++++++++-------
.../tutorials/processing-logs/index.md | 49 ++++++++------
12 files changed, 197 insertions(+), 159 deletions(-)
diff --git a/docs/sources/concepts/configuration-syntax/components.md b/docs/sources/concepts/configuration-syntax/components.md
index d33a4a071f..5114e4c570 100644
--- a/docs/sources/concepts/configuration-syntax/components.md
+++ b/docs/sources/concepts/configuration-syntax/components.md
@@ -1,11 +1,11 @@
---
canonical: https://grafana.com/docs/alloy/latest/concepts/configuration-syntax/components/
description: Learn about the components configuration language
-title: Components configuration language
+title: Components configuration
weight: 300
---
-# Components configuration language
+# Components configuration
Components are the defining feature of {{< param "PRODUCT_NAME" >}}.
Components are small, reusable pieces of business logic that perform a single task like retrieving secrets or collecting Prometheus metrics, and you can wire them together to form programmable pipelines of telemetry data.
diff --git a/docs/sources/introduction/_index.md b/docs/sources/introduction/_index.md
index 5c21b5434e..ba31aa20d0 100644
--- a/docs/sources/introduction/_index.md
+++ b/docs/sources/introduction/_index.md
@@ -75,9 +75,9 @@ binaries and images with BoringCrypto enabled. Builds and Docker images for Linu
* [Install][] {{< param "PRODUCT_NAME" >}}.
* Learn about the core [Concepts][] of {{< param "PRODUCT_NAME" >}}.
-* Follow the [Tutorials][] for hands-on learning of {{< param "PRODUCT_NAME" >}}.
+* Follow the [Tutorials][] for hands-on learning about {{< param "PRODUCT_NAME" >}}.
* Consult the [Tasks][] instructions to accomplish common objectives with {{< param "PRODUCT_NAME" >}}.
-* Check out the [Reference][] documentation to find specific information you might be looking for.
+* Check out the [Reference][] documentation to find information about the {{< param "PRODUCT_NAME" >}} components, configuration blocks, and command-line tools.
[OpenTelemetry]: https://opentelemetry.io/ecosystem/distributions/
[Install]: ../get-started/install/
diff --git a/docs/sources/reference/components/discovery.azure.md b/docs/sources/reference/components/discovery.azure.md
index 985c295baf..d558f902f3 100644
--- a/docs/sources/reference/components/discovery.azure.md
+++ b/docs/sources/reference/components/discovery.azure.md
@@ -13,7 +13,7 @@ title: discovery.azure
## Usage
```alloy
-discovery.azure "LABEL" {
+discovery.azure "
-## Visualizing the relationship between components
+## Visualize the relationship between components
The following diagram is an example pipeline:
@@ -172,26 +182,28 @@ The following diagram is an example pipeline:
-The preceding configuration defines three components:
+Your pipeline configuration defines three components:
- `prometheus.scrape` - A component that scrapes metrics from components that export targets.
- `prometheus.exporter.unix` - A component that exports metrics from the host, built around [node_exporter][].
- `prometheus.remote_write` - A component that sends metrics to a Prometheus remote-write compatible endpoint.
The `prometheus.scrape` component references the `prometheus.exporter.unix` component's targets export, which is a list of scrape targets.
-The `prometheus.scrape` component then forwards the scraped metrics to the `prometheus.remote_write` component.
+The `prometheus.scrape` component forwards the scraped metrics to the `prometheus.remote_write` component.
One rule is that components can't form a cycle.
This means that a component can't reference itself directly or indirectly.
This is to prevent infinite loops from forming in the pipeline.
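For reference, a minimal configuration implementing this pipeline might look like the following sketch. It assumes the component labels used in this tutorial (`localhost`, `default`, and `local_prom`) and the local Prometheus endpoint from the Docker Compose setup:

```alloy
// Collect metrics from the local host, in the style of node_exporter.
prometheus.exporter.unix "localhost" { }

prometheus.scrape "default" {
    // Use the targets exported by the prometheus.exporter.unix component.
    targets    = prometheus.exporter.unix.localhost.targets

    // Forward the scraped metrics to the prometheus.remote_write component.
    forward_to = [prometheus.remote_write.local_prom.receiver]
}

prometheus.remote_write "local_prom" {
    endpoint {
        url = "http://localhost:9090/api/v1/write"
    }
}
```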
-## Exercise for the reader
+## Exercise
-**Recommended Reading**
+The following exercise guides you through modifying your pipeline to scrape metrics from Redis.
+
+### Recommended Reading
- Optional: [prometheus.exporter.redis][]
-Let's start a container running Redis and configure {{< param "PRODUCT_NAME" >}} to scrape metrics from it.
+Start a container running Redis and configure {{< param "PRODUCT_NAME" >}} to scrape the metrics.
```bash
docker container run -d --name alloy-redis -p 6379:6379 --rm redis
@@ -206,13 +218,13 @@ To give a visual hint, you want to create a pipeline that looks like this:
-{{< admonition type="note" >}}
-You may find the [concat][] standard library function useful.
+{{< admonition type="tip" >}}
+Refer to the [concat][] standard library function for information about combining lists of values into a single list.
[concat]: ../../reference/stdlib/concat/
{{< /admonition >}}
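If you get stuck, one possible shape of the solution is sketched below. It assumes a `prometheus.exporter.redis` component labeled `local_redis` and reuses the component labels from the earlier pipeline:

```alloy
prometheus.exporter.redis "local_redis" {
    redis_addr = "localhost:6379"
}

prometheus.scrape "default" {
    // concat combines both exporters' target lists into a single list.
    targets    = concat(prometheus.exporter.unix.localhost.targets, prometheus.exporter.redis.local_redis.targets)
    forward_to = [prometheus.remote_write.local_prom.receiver]
}
```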
-You can run {{< param "PRODUCT_NAME" >}} with the new configuration file by running:
+You can run {{< param "PRODUCT_NAME" >}} with the new configuration file using the following command:
```bash
<BINARY_FILE_PATH> run config.alloy
@@ -282,9 +294,10 @@ If you look in the directory, do you notice anything interesting? The directory
If you'd like to store the data elsewhere, you can specify a different directory by supplying the `--storage.path` flag to {{< param "PRODUCT_NAME" >}}'s run command, for example, `<BINARY_FILE_PATH> run config.alloy --storage.path /etc/alloy`. Replace _`<BINARY_FILE_PATH>`_ with the path to the {{< param "PRODUCT_NAME" >}} binary.
Generally, you can use a persistent directory for this, as some components may use the data stored in this directory to perform their function.
-In the next tutorial, you will look at how to configure {{< param "PRODUCT_NAME" >}} to collect logs from a file and send them to Loki.
-You will also look at using different components to process metrics and logs before sending them.
+In the next tutorial, you learn how to configure {{< param "PRODUCT_NAME" >}} to collect logs from a file and send them to Loki.
+You also learn how to use different components to process metrics and logs.
+[get started]: ../get-started/#set-up-a-local-grafana-instance
[Configuration syntax]: ../../concepts/configuration-syntax/
[Standard library documentation]: ../../reference/stdlib/
[node_exporter]: https://github.com/prometheus/node_exporter
diff --git a/docs/sources/tutorials/get-started.md b/docs/sources/tutorials/get-started.md
index a9459b37ed..afeacfc96e 100644
--- a/docs/sources/tutorials/get-started.md
+++ b/docs/sources/tutorials/get-started.md
@@ -5,23 +5,24 @@ title: Get started
weight: 10
---
-## Who is this for?
+## Get started with {{% param "PRODUCT_NAME" %}}
-This set of tutorials contains a collection of examples that build on each other to demonstrate how to configure and use [{{< param "PRODUCT_NAME" >}}][alloy]. It assumes you have a basic understanding of what {{< param "PRODUCT_NAME" >}} is and telemetry collection in general. It also assumes a base level of familiarity with Prometheus and PromQL, Loki and LogQL, and basic Grafana navigation. It assumes no knowledge of the {{< param "PRODUCT_NAME" >}} configuration syntax concepts.
+This set of tutorials contains a collection of examples that build on each other to demonstrate how to configure and use [{{< param "PRODUCT_NAME" >}}][alloy].
+To follow these tutorials, you need to have a basic understanding of what {{< param "PRODUCT_NAME" >}} is and telemetry collection in general.
+You should also be familiar with Prometheus and PromQL, Loki and LogQL, and basic Grafana navigation.
+You don't need to know about the {{< param "PRODUCT_NAME" >}} [configuration syntax][configuration] concepts.
-## What is {{% param "PRODUCT_NAME" %}}?
+## Prerequisites
-{{< param "PRODUCT_NAME" >}} uses a [configuration syntax][configuration] that allows you to define a pipeline of telemetry collection, processing, and output.
-
-## What do I need to get started?
-
-You will need a Linux or Unix environment with Docker installed. The examples are designed to be run on a single host so that you can run them on your laptop or in a VM. You are encouraged to follow along with the examples using a `config.alloy` file and experiment with the examples yourself.
+The tutorials require a Linux or Unix environment with Docker installed.
+The examples run on a single host, so you can run them on your laptop or in a virtual machine.
+You are encouraged to follow along using a `config.alloy` file and experiment with the examples yourself.
To run the examples, you need an {{< param "PRODUCT_NAME" >}} binary. Follow the instructions in [Install {{< param "PRODUCT_NAME" >}} as a Standalone Binary][install] to get one.
-## How should I follow along?
+## Set up a local Grafana instance
-You can use this Docker Compose file to set up a local Grafana instance alongside Loki and Prometheus pre-configured as datasources. The examples are designed to be run locally, so you can follow along and experiment with them yourself.
+You can use the following Docker Compose file to set up a local Grafana instance alongside Loki and Prometheus, which are preconfigured as data sources. You can run and experiment with the examples on your local system.
```yaml
version: '3'
@@ -77,9 +78,9 @@ services:
After running `docker-compose up`, open [http://localhost:3000](http://localhost:3000) in your browser to view the Grafana UI.
-The tutorials are designed to be followed in order and generally build on each other. Each example explains what it does and how it works. They are designed to be run locally, so you can follow along and experiment with them yourself.
+The tutorials are designed to be followed in order and generally build on each other. Each example explains what it does and how it works.
-The Recommended Reading sections in each tutorial provide a list of documentation topics. To help you understand the concepts used in the example, read the recommended topics in the order given.
+The Recommended Reading sections in each tutorial provide a list of documentation topics. Read the recommended topics in the order given to help you understand the concepts used in the example.
[alloy]: https://grafana.com/docs/alloy/latest/
[configuration]: ../../concepts/configuration-syntax/
diff --git a/docs/sources/tutorials/logs-and-relabeling-basics/index.md b/docs/sources/tutorials/logs-and-relabeling-basics/index.md
index 186ae248c7..1e8e769c36 100644
--- a/docs/sources/tutorials/logs-and-relabeling-basics/index.md
+++ b/docs/sources/tutorials/logs-and-relabeling-basics/index.md
@@ -7,19 +7,25 @@ weight: 30
# Logs and relabeling basics
-This tutorial assumes you have completed the [First components and introducing the standard library][] tutorial, or are at least familiar with the concepts of components, attributes, and expressions and how to use them.
-You will cover some basic metric relabeling, followed by how to send logs to Loki.
+This tutorial covers some basic metric relabeling and shows you how to send logs to Loki.
+
+## Prerequisites
+
+Complete the [First components and the standard library][first] tutorial.
## Relabel metrics
-**Recommended reading**
+Now that you have built a basic pipeline and scraped some metrics, you can use the `prometheus.relabel` component to relabel metrics.
+
+### Recommended reading
- Optional: [prometheus.relabel][]
-Before moving on to logs, let's look at how we can use the `prometheus.relabel` component to relabel metrics.
+### Add a `prometheus.relabel` component to your pipeline
+
The `prometheus.relabel` component allows you to perform Prometheus relabeling on metrics and is similar to the `relabel_configs` section of a Prometheus scrape configuration.
-Let's add a `prometheus.relabel` component to a basic pipeline and see how to add labels.
+Add a `prometheus.relabel` component to a basic pipeline and add labels.
```alloy
prometheus.exporter.unix "localhost" { }
@@ -52,7 +58,7 @@ prometheus.remote_write "local_prom" {
}
```
-We have now created the following pipeline:
+You have created the following pipeline:
![Diagram of pipeline that scrapes prometheus.exporter.unix, relabels the metrics, and remote_writes them](/media/docs/agent/diagram-flow-by-example-relabel-0.svg)
@@ -82,16 +88,20 @@ If you would like to keep or act on these kinds of labels, use a [discovery.rela
## Send logs to Loki
-**Recommended reading**
+Now that you've created components and chained them together, you can collect some logs and send them to Loki.
+
+### Recommended reading
- Optional: [local.file_match][]
- Optional: [loki.source.file][]
- Optional: [loki.write][]
-Now that you're comfortable creating components and chaining them together, let's collect some logs and send them to Loki.
-We will use the `local.file_match` component to perform file discovery, the `loki.source.file` to collect the logs, and the `loki.write` component to send the logs to Loki.
+### Find and collect the logs
-Before doing this, make sure you have a log file to scrape. You can use the `echo` command to create a file with some log content.
+You can use the `local.file_match` component to perform file discovery, the `loki.source.file` component to collect the logs, and the `loki.write` component to send the logs to Loki.
+
+Before doing this, make sure you have a log file to scrape.
+You can use the `echo` command to create a file with some log content.
```bash
mkdir -p /tmp/alloy-logs
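# Add an example log line to the file. (This exact line is an assumed
# starting point; the same command appears later in this tutorial.)
echo 'level=info msg="INFO: This is an info level log!"' > /tmp/alloy-logs/log.log
```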
@@ -137,15 +147,17 @@ If you delete this file, {{< param "PRODUCT_NAME" >}} starts reading from the be
## Exercise
-**Recommended reading**
+The following exercise guides you through adding a label to the logs and filtering the results.
+
+### Recommended reading
- [loki.relabel][]
- [loki.process][]
### Add a label to logs
-This exercise will have two parts, building on the previous example.
-Let's start by adding an `os` label (just like the Prometheus example) to all of the logs we collect.
+This exercise has two parts and builds on the previous example.
+Start by adding an `os` label (just like the Prometheus example) to all of the logs you collect.
Modify the following snippet to add the label `os` with the value of the `os` constant.
@@ -166,14 +178,14 @@ loki.write "local_loki" {
}
```
-{{< admonition type="note" >}}
+{{< admonition type="tip" >}}
You can use the [loki.relabel][] component to relabel and add labels, just like you can with the [prometheus.relabel][] component.
[loki.relabel]: ../../reference/components/loki.relabel
[prometheus.relabel]: ../../reference/components/prometheus.relabel
{{< /admonition >}}
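As a hint, a relabel rule along these lines could work. This is a sketch, assuming a `loki.relabel` component (with a hypothetical label `add_os_label`) wired in front of `loki.write`:

```alloy
loki.relabel "add_os_label" {
    forward_to = [loki.write.local_loki.receiver]

    rule {
        // constants.os comes from the standard library.
        target_label = "os"
        replacement  = constants.os
    }
}
```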
-Once you have your completed configuration, run {{< param "PRODUCT_NAME" >}} and execute the following:
+Run {{< param "PRODUCT_NAME" >}} and execute the following:
```bash
echo 'level=info msg="INFO: This is an info level log!"' >> /tmp/alloy-logs/log.log
@@ -182,9 +194,9 @@ echo 'level=debug msg="DEBUG: This is a debug level log!"' >> /tmp/alloy-logs/lo
```
Navigate to [localhost:3000/explore][] and switch the Datasource to `Loki`.
-Try querying for `{filename="/tmp/alloy-logs/log.log"}` and see if you can find the new label!
+Try querying for `{filename="/tmp/alloy-logs/log.log"}` and see if you can find the new label.
-Now that we have added new labels, we can also filter on them. Try querying for `{os!=""}`.
+Now that you have added new labels, you can also filter on them. Try querying for `{os!=""}`.
You should only see the lines you added in the previous step.
{{< collapse title="Solution" >}}
@@ -223,23 +235,23 @@ loki.write "local_loki" {
{{< admonition type="note" >}}
This exercise is more challenging than the previous one.
-If you are having trouble, skip it and move to the next section, which will cover some of the concepts used here.
+If you are having trouble, skip it and move to the next section, which covers some of the concepts used here.
You can always come back to this exercise later.
{{< /admonition >}}
-This exercise will build on the previous one, though it's more involved.
+This exercise builds on the previous one, though it's more involved.
-Let's say we want to extract the `level` from the logs and add it as a label. As a starting point, look at [loki.process][].
+Assume you want to extract the `level` from the logs and add it as a label. As a starting point, look at [loki.process][].
This component allows you to perform processing on logs, including extracting values from log contents.
Try modifying your configuration from the previous section to extract the `level` from the logs and add it as a label.
If needed, you can find a solution to the previous exercise at the end of the [previous section](#add-a-label-to-logs).
-{{< admonition type="note" >}}
+{{< admonition type="tip" >}}
The `stage.logfmt` and `stage.labels` blocks for `loki.process` may be helpful.
{{< /admonition >}}
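As a further hint, the processing stages could look something like this sketch, which parses the logfmt-formatted line and promotes the extracted `level` value to a label:

```alloy
loki.process "parse_level" {
    forward_to = [loki.write.local_loki.receiver]

    // Extract the "level" value from the logfmt-formatted log line.
    stage.logfmt {
        mapping = {"level" = ""}
    }

    // Promote the extracted "level" value to a label on the log entry.
    stage.labels {
        values = {"level" = ""}
    }
}
```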
-Once you have your completed configuration, run {{< param "PRODUCT_NAME" >}} and execute the following:
+Run {{< param "PRODUCT_NAME" >}} and execute the following:
```bash
echo 'level=info msg="INFO: This is an info level log!"' >> /tmp/alloy-logs/log.log
@@ -247,7 +259,8 @@ echo 'level=warn msg="WARN: This is a warn level log!"' >> /tmp/alloy-logs/log.l
echo 'level=debug msg="DEBUG: This is a debug level log!"' >> /tmp/alloy-logs/log.log
```
-Navigate to [localhost:3000/explore][] and switch the Datasource to `Loki`. Try querying for `{level!=""}` to see the new labels in action.
+Navigate to [localhost:3000/explore][] and switch the Datasource to `Loki`.
+Try querying for `{level!=""}` to see the new labels in action.
![Grafana Explore view of example log lines, now with the extracted 'level' label](/media/docs/agent/screenshot-flow-by-example-log-line-levels.png)
@@ -307,10 +320,11 @@ loki.write "local_loki" {
## Finishing up and next steps
-You have learned the concepts of components, attributes, and expressions. You have also seen how to use some standard library components to collect metrics and logs.
-In the next tutorial, you will learn more about how to use the `loki.process` component to extract values from logs and use them.
+You have learned the concepts of components, attributes, and expressions.
+You have also seen how to use some standard library components to collect metrics and logs.
+In the next tutorial, you learn more about how to use the `loki.process` component to extract values from logs and use them.
-[First components and introducing the standard library]: ../first-components-and-stdlib/
+[first]: ../first-components-and-stdlib/
[prometheus.relabel]: ../../reference/components/prometheus.relabel/
[constants]: ../../reference/stdlib/constants/
[localhost:3000/explore]: http://localhost:3000/explore
diff --git a/docs/sources/tutorials/processing-logs/index.md b/docs/sources/tutorials/processing-logs/index.md
index 05f551a2b2..9ee257c2d6 100644
--- a/docs/sources/tutorials/processing-logs/index.md
+++ b/docs/sources/tutorials/processing-logs/index.md
@@ -10,20 +10,26 @@ weight: 40
This tutorial assumes you are familiar with setting up and connecting components.
It covers using `loki.source.api` to receive logs over HTTP, processing and filtering them, and sending them to Loki.
-## Receive logs over HTTP and Process
+## Prerequisites
-**Recommended reading**
+Complete the [Logs and relabeling basics][logs] tutorial.
-- Optional: [loki.source.api][]
+## Receive and process logs over HTTP
The `loki.source.api` component can receive logs over HTTP.
It can be useful for receiving logs from other {{< param "PRODUCT_NAME" >}} instances or collectors, or directly from applications that can send logs over HTTP, and then processing them centrally.
+### Recommended reading
+
+- Optional: [loki.source.api][]
+
+### Set up the `loki.source.api` component
+
Your pipeline is going to look like this:
![Loki Source API Pipeline](/media/docs/agent/diagram-flow-by-example-logs-pipeline.svg)
-Let's start by setting up the `loki.source.api` component:
+Start by setting up the `loki.source.api` component:
```alloy
loki.source.api "listener" {
@@ -40,16 +46,19 @@ loki.source.api "listener" {
This is a simple configuration.
You are configuring the `loki.source.api` component to listen on `127.0.0.1:9999` and attach a `source="api"` label to the received log entries, which are then forwarded to the `loki.process.process_logs` component's exported receiver.
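For reference, a complete version of that block matching this description might read:

```alloy
loki.source.api "listener" {
    http {
        listen_address = "127.0.0.1"
        listen_port    = 9999
    }

    // Attach a static label to every log entry received over HTTP.
    labels = {"source" = "api"}

    forward_to = [loki.process.process_logs.receiver]
}
```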
-Next, you can configure the `loki.process` and `loki.write` components.
## Process and Write Logs
-**Recommended reading**
+### Recommended reading
- [loki.process#stage.drop][]
- [loki.process#stage.json][]
- [loki.process#stage.labels][]
+### Configure the `loki.process` and `loki.write` components
+
+Now that you have set up the `loki.source.api` component, you can configure the `loki.process` and `loki.write` components.
+
```alloy
// Let's send and process more logs!
@@ -126,8 +135,7 @@ loki.write "local_loki" {
}
```
-You can skip to the next section if you successfully completed the previous section's exercises.
-If not, or if you were unsure how things worked, let's break down what's happening in the `loki.process` component.
+{{< collapse title="How the components work" >}}
Many of the `stage.*` blocks in `loki.process` act on reading or writing a shared map of values extracted from the logs.
You can think of this extracted map as a hashmap or table that each stage has access to, and it is referred to as the "extracted map" from here on.
@@ -151,7 +159,7 @@ Here is our example log line:
}
```
-### Stage 1
+#### Stage 1
```alloy
stage.json {
@@ -198,7 +206,7 @@ Extracted map _after_ performing this stage:
}
```
-### Stage 2
+#### Stage 2
```alloy
stage.timestamp {
@@ -212,7 +220,7 @@ The value of `ts` is parsed in the format of `RFC3339` and added as the timestam
This is useful if you want to use the timestamp present in the log itself, rather than the time the log is ingested.
This stage doesn't modify the extracted map.
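Spelled out, a timestamp stage matching this description might read:

```alloy
stage.timestamp {
    // Parse the extracted "ts" value as RFC3339 and use it as the log's timestamp.
    source = "ts"
    format = "RFC3339"
}
```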
-### Stage 3
+#### Stage 3
```alloy
stage.json {
@@ -278,7 +286,7 @@ Extracted map _after_ performing this stage:
}
```
-### Stage 4
+#### Stage 4
```alloy
stage.drop {
This stage drops the log line if the value of `is_secret` is `"true"` and doesn't modify the extracted map.
There are many other ways to filter logs, but this is a simple example.
Refer to the [loki.process#stage.drop][] documentation for more information.
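For instance, a drop stage matching this description might read:

```alloy
stage.drop {
    // Drop the whole log line when the extracted "is_secret" value is "true".
    source = "is_secret"
    value  = "true"
}
```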
-### Stage 5
+#### Stage 5
```alloy
stage.labels {
@@ -306,7 +314,7 @@ This stage adds a label to the log using the same shorthand as above (so this is
This stage adds a label with key `level` and the value of `level` in the extracted map to the log (`"info"` from our example log line).
This stage does not modify the extracted map.
-### Stage 6
+#### Stage 6
```alloy
stage.output {
@@ -319,9 +327,11 @@ Rather than sending the entire JSON blob to Loki, you are only sending `original
This stage doesn't modify the extracted map.
-## Putting it all together
+{{< /collapse >}}
+
+## Put it all together
-Now that you have all of the pieces, let's run {{< param "PRODUCT_NAME" >}} and send some logs to it.
+Now that you have all of the pieces, you can run {{< param "PRODUCT_NAME" >}} and send some logs to it.
Modify `config.alloy` with the configuration from the previous example and start {{< param "PRODUCT_NAME" >}} with:
```bash
@@ -344,7 +354,7 @@ Try executing the following, replacing the `"timestamp"` value:
curl localhost:9999/loki/api/v1/raw -XPOST -H "Content-Type: application/json" -d '{"log": {"is_secret": "false", "level": "debug", "message": "This is a debug message!"}, "timestamp": "<TIMESTAMP>"}'
```
-Now that you have sent some logs, let's see how they look in Grafana.
+Now that you have sent some logs, it's time to see how they look in Grafana.
Navigate to [localhost:3000/explore][] and switch the Datasource to `Loki`.
Try querying for `{source="demo-api"}` and see if you can find the logs you sent.
@@ -355,8 +365,8 @@ You can also try adding more stages to the `loki.process` component to extract m
## Exercise
-Since you are already using Docker and Docker exports logs, let's get those logs into Loki.
-You can refer to the [discovery.docker][] and [loki.source.docker][] documentation for more information.
+Since you are already using Docker and Docker exports logs, you can send those logs to Loki.
+Refer to the [discovery.docker][] and [loki.source.docker][] documentation for more information.
To ensure proper timestamps and other labels, make sure you use a `loki.process` component to process the logs before sending them to Loki.
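A possible starting point for the exercise is sketched below. It assumes the Docker socket at its default path and reuses the `loki.process.process_logs` component from the earlier example:

```alloy
discovery.docker "local_containers" {
    host = "unix:///var/run/docker.sock"
}

loki.source.docker "container_logs" {
    host       = "unix:///var/run/docker.sock"
    targets    = discovery.docker.local_containers.targets
    forward_to = [loki.process.process_logs.receiver]
}
```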
@@ -407,6 +417,7 @@ loki.write "local_loki" {
{{< /collapse >}}
+[logs]: ../logs-and-relabeling-basics/
[loki.source.api]: ../../reference/components/loki.source.api/
[loki.process#stage.drop]: ../../reference/components/loki.process/#stagedrop-block
[loki.process#stage.json]: ../../reference/components/loki.process/#stagejson-block