Alloy docs linting cleanup and corrections (#2192)
* Linting cleanup and corrections

* Regenerate main index file

* More lint cleanup and error fixing

* More linting corrections

* Fix broken links and correct syntax

* Update docs/sources/get-started/component_controller.md

Co-authored-by: Isabel Matwawana <[email protected]>

* Update docs/sources/get-started/component_controller.md

* Update docs/sources/get-started/component_controller.md

---------

Co-authored-by: Isabel Matwawana <[email protected]>
(cherry picked from commit 8c5320d)
clayton-cornell committed Dec 3, 2024
1 parent 7cf5ff0 commit 742c35d
Showing 75 changed files with 338 additions and 365 deletions.
2 changes: 1 addition & 1 deletion docs/sources/_index.md
@@ -57,7 +57,7 @@ cards:
In addition, you can use {{< param "PRODUCT_NAME" >}} pipelines to do different tasks, such as configure alert rules in Loki and [Mimir][].
{{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and [Promtail][].
You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combine it into a hybrid system of multiple collectors and agents.
-You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
+You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
{{< param "PRODUCT_NAME" >}} is flexible, and you can easily configure it to fit your needs in on-prem, cloud-only, or a mix of both.

{{< admonition type="tip" >}}
2 changes: 1 addition & 1 deletion docs/sources/_index.md.t
@@ -57,7 +57,7 @@ cards:
In addition, you can use {{< param "PRODUCT_NAME" >}} pipelines to do different tasks, such as configure alert rules in Loki and [Mimir][].
{{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and [Promtail][].
You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combine it into a hybrid system of multiple collectors and agents.
-You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
+You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
{{< param "PRODUCT_NAME" >}} is flexible, and you can easily configure it to fit your needs in on-prem, cloud-only, or a mix of both.

{{< admonition type="tip" >}}
4 changes: 2 additions & 2 deletions docs/sources/collect/ecs-opentelemetry-data.md
@@ -84,13 +84,13 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet
* Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.
1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template.

-## Run Alloy directly in your instance, or as a Kubernetes sidecar
+## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar

SSH or connect to the Amazon ECS or AWS Fargate-managed container. Refer to [9 steps to SSH into an AWS Fargate managed container][steps] for more information about using SSH with Amazon ECS or AWS Fargate.

You can also use your own method to connect to the Amazon ECS or AWS Fargate-managed container as long as you can pass the parameters needed to install and configure {{< param "PRODUCT_NAME" >}}.

-### Install Grafana Alloy
+### Install {{% param "PRODUCT_NAME" %}}

After connecting to your instance, follow the {{< param "PRODUCT_NAME" >}} [installation][install], [configuration][configure] and [deployment][deploy] instructions.

12 changes: 6 additions & 6 deletions docs/sources/collect/opentelemetry-data.md
@@ -38,7 +38,7 @@ This topic describes how to:
Before components can receive OpenTelemetry data, you must have a component responsible for exporting the OpenTelemetry data.
An OpenTelemetry _exporter component_ is responsible for writing (exporting) OpenTelemetry data to an external system.

-In this task, you use the [otelcol.exporter.otlp][] component to send OpenTelemetry data to a server using the OpenTelemetry Protocol (OTLP).
+In this task, you use the [`otelcol.exporter.otlp`][otelcol.exporter.otlp] component to send OpenTelemetry data to a server using the OpenTelemetry Protocol (OTLP).
After an exporter component is defined, you can use other {{< param "PRODUCT_NAME" >}} components to forward data to it.

{{< admonition type="tip" >}}
@@ -137,7 +137,7 @@ otelcol.receiver.otlp "example" {
}
```

-For more information on writing OpenTelemetry data using the OpenTelemetry Protocol, refer to [otelcol.exporter.otlp][].
+For more information on writing OpenTelemetry data using the OpenTelemetry Protocol, refer to [`otelcol.exporter.otlp`][otelcol.exporter.otlp].

## Configure batching

@@ -146,7 +146,7 @@ Instead, data is usually sent to one or more _processor components_ that perform

Ensuring data is batched is a production-readiness step to improve data compression and reduce the number of outgoing network requests to external systems.

-In this task, you configure an [otelcol.processor.batch][] component to batch data before sending it to the exporter.
+In this task, you configure an [`otelcol.processor.batch`][otelcol.processor.batch] component to batch data before sending it to the exporter.

{{< admonition type="note" >}}
Refer to the list of available [Components][] for the full list of `otelcol.processor` components that you can use to process OpenTelemetry data.
@@ -210,14 +210,14 @@ otelcol.exporter.otlp "default" {
}
```

-For more information on configuring OpenTelemetry data batching, refer to [otelcol.processor.batch][].
+For more information on configuring OpenTelemetry data batching, refer to [`otelcol.processor.batch`][otelcol.processor.batch].

## Configure an OpenTelemetry Protocol receiver

You can configure {{< param "PRODUCT_NAME" >}} to receive OpenTelemetry metrics, logs, and traces.
An OpenTelemetry _receiver_ component is responsible for receiving OpenTelemetry data from an external system.

-In this task, you use the [otelcol.receiver.otlp][] component to receive OpenTelemetry data over the network using the OpenTelemetry Protocol (OTLP).
+In this task, you use the [`otelcol.receiver.otlp`][otelcol.receiver.otlp] component to receive OpenTelemetry data over the network using the OpenTelemetry Protocol (OTLP).
You can configure a receiver component to forward received data to other {{< param "PRODUCT_NAME" >}} components.

> Refer to the list of available [Components][] for the full list of
@@ -312,7 +312,7 @@ otelcol.exporter.otlp "default" {
}
```

-For more information on receiving OpenTelemetry data using the OpenTelemetry Protocol, refer to [otelcol.receiver.otlp][].
+For more information on receiving OpenTelemetry data using the OpenTelemetry Protocol, refer to [`otelcol.receiver.otlp`][otelcol.receiver.otlp].

[OpenTelemetry]: https://opentelemetry.io
[Configure an OpenTelemetry Protocol exporter]: #configure-an-opentelemetry-protocol-exporter
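Taken together, the exporter, batch-processor, and receiver tasks changed in this file describe one pipeline. A minimal sketch of that pipeline follows; the labels (`example`, `default`) and the endpoint addresses are placeholders, not values from this commit:

```alloy
// Sketch of the receiver -> batch -> exporter pipeline described above.
// Labels and endpoints are assumptions for illustration.
otelcol.receiver.otlp "example" {
  grpc {
    endpoint = "127.0.0.1:4317"
  }

  output {
    metrics = [otelcol.processor.batch.example.input]
    logs    = [otelcol.processor.batch.example.input]
    traces  = [otelcol.processor.batch.example.input]
  }
}

otelcol.processor.batch "example" {
  output {
    metrics = [otelcol.exporter.otlp.default.input]
    logs    = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "my-otlp-grpc-server:4317"
  }
}
```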
8 changes: 4 additions & 4 deletions docs/sources/configure/kubernetes.md
@@ -14,7 +14,7 @@ This page describes how to apply a new configuration to {{< param "PRODUCT_NAME"
It assumes that:

- You have [installed {{< param "PRODUCT_NAME" >}} on Kubernetes using the Helm chart][k8s-install].
-- You already have a new {{< param "PRODUCT_NAME" >}} configuration that you want to apply to your Helm chart installation.
+- You already have a {{< param "PRODUCT_NAME" >}} configuration that you want to apply to your Helm chart installation.

Refer to [Collect and forward data][collect] for information about configuring {{< param "PRODUCT_NAME" >}} to collect and forward data.

@@ -25,15 +25,15 @@ Refer to [Collect and forward data][collect] for information about configuring {

To modify {{< param "PRODUCT_NAME" >}}'s Helm chart configuration, perform the following steps:

-1. Create a local `values.yaml` file with a new Helm chart configuration.
+1. Create a local `values.yaml` file with a Helm chart configuration.

1. You can use your own copy of the values file or download a copy of the
-   default [values.yaml][].
+   default [`values.yaml`][values.yaml].

1. Make changes to your `values.yaml` to customize settings for the
Helm chart.

-   Refer to the inline documentation in the default [values.yaml][] for more
+   Refer to the inline documentation in the default [`values.yaml`][values.yaml] for more
information about each option.

1. Run the following command in a terminal to upgrade your {{< param "PRODUCT_NAME" >}} installation:
4 changes: 2 additions & 2 deletions docs/sources/configure/nonroot.md
@@ -24,7 +24,7 @@ You can configure a non-root user when you deploy {{< param "PRODUCT_NAME" >}} i
{{< admonition type="note" >}}
Running {{< param "PRODUCT_NAME" >}} as a non-root user won't work if you are using components like [beyla.ebpf][] that require root rights.

-[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf
+[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf/
{{< /admonition >}}

To run {{< param "PRODUCT_NAME" >}} as a non-root user, configure a [security context][] for the {{< param "PRODUCT_NAME" >}} container. If you are using the [Grafana Helm chart][] you can add the following snippet to `values.yaml`:
@@ -45,6 +45,6 @@ Not really. The Linux kernel prevents Docker containers from accessing host reso
However, if there was a bug in the Linux kernel that allowed Docker containers to break out of the virtual environment, it would likely be easier to exploit this bug with a root user than with a non-root user. It's worth noting that the attacker would not only need to find such a Linux kernel bug, but would also need to find a way to make {{< param "PRODUCT_NAME" >}} exploit that bug.

[image]: https://hub.docker.com/r/grafana/alloy
-[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf
+[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf/
[security context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
[Grafana Helm chart]: ../../configure/kubernetes/#configure-the-helm-chart
4 changes: 2 additions & 2 deletions docs/sources/data-collection.md
@@ -31,5 +31,5 @@ All newly reported data is documented in the CHANGELOG.

You can use the `--disable-reporting` [command line flag][] to disable the reporting and opt-out of the data collection.

-[components]: ../get-started/components
-[command line flag]: ../reference/cli/run
+[components]: ../get-started/components/
+[command line flag]: ../reference/cli/run/
16 changes: 8 additions & 8 deletions docs/sources/get-started/clustering.md
@@ -40,8 +40,8 @@ prometheus.scrape "default" {
}
```

-A cluster state change is detected when a new node joins or an existing node leaves.
-All participating components locally recalculate target ownership and re-balance the number of targets they’re scraping without explicitly communicating ownership over the network.
+A cluster state change is detected when a node joins or a node leaves.
+All participating components locally recalculate target ownership and re-balance the number of targets they're scraping without explicitly communicating ownership over the network.

Target auto-distribution allows you to dynamically scale the number of {{< param "PRODUCT_NAME" >}} deployments to distribute workload during peaks.
It also provides resiliency because targets are automatically picked up by one of the node peers if a node leaves.
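Target auto-distribution only requires opting in per component. A sketch, assuming a `discovery.kubernetes` component labeled `pods` and a `prometheus.remote_write` component labeled `default` exist elsewhere in the configuration:

```alloy
// Clustering is enabled per scrape component; cluster peers then
// divide the discovered targets among themselves automatically.
prometheus.scrape "default" {
  clustering {
    enabled = true
  }

  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}
```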
@@ -50,20 +50,20 @@ It also provides resiliency because targets are automatically picked up by one o

Refer to component reference documentation to discover whether it supports clustering, such as:

-- [prometheus.scrape][]
-- [pyroscope.scrape][]
-- [prometheus.operator.podmonitors][]
-- [prometheus.operator.servicemonitors][]
+- [`prometheus.scrape`][prometheus.scrape]
+- [`pyroscope.scrape`][pyroscope.scrape]
+- [`prometheus.operator.podmonitors`][prometheus.operator.podmonitors]
+- [`prometheus.operator.servicemonitors`][prometheus.operator.servicemonitors]

## Cluster monitoring and troubleshooting

You can use the {{< param "PRODUCT_NAME" >}} UI [clustering page][] to monitor your cluster status.
-Refer to [Debugging clustering issues][debugging] for additional troubleshooting information.
+Refer to [Debug clustering issues][debugging] for additional troubleshooting information.

[run]: ../../reference/cli/run/#clustering
[prometheus.scrape]: ../../reference/components/prometheus/prometheus.scrape/#clustering-block
[pyroscope.scrape]: ../../reference/components/pyroscope/pyroscope.scrape/#clustering-block
[prometheus.operator.podmonitors]: ../../reference/components/prometheus/prometheus.operator.podmonitors/#clustering-block
[prometheus.operator.servicemonitors]: ../../reference/components/prometheus/prometheus.operator.servicemonitors/#clustering-block
[clustering page]: ../../troubleshoot/debug/#clustering-page
-[debugging]: ../../troubleshoot/debug/#debugging-clustering-issues
+[debugging]: ../../troubleshoot/debug/#debug-clustering-issues
6 changes: 3 additions & 3 deletions docs/sources/get-started/community_components.md
@@ -9,15 +9,15 @@ weight: 100

__Community components__ are [components][Components] implemented and maintained by the community.

-While Grafana does not offer commercial support for these components, they still undergo acceptance and review by the {{< param "PRODUCT_NAME" >}} development team before being added to the repository.
+While Grafana doesn't offer commercial support for these components, they still undergo acceptance and review by the {{< param "PRODUCT_NAME" >}} development team before being added to the repository.

To use these community components, you must explicitly pass the `--feature.community-components.enabled` flag to the `run` command.

-__Community components__ don't have a stability level. They aren't covered by our [backward compatibility strategy][backward-compatibility].
+__Community components__ don't have a stability level. They aren't covered by the [backward compatibility strategy][backward-compatibility].

{{< admonition type="warning" >}}
__Community components__ without a maintainer may be disabled or removed if the components prevent or block the development of {{< param "PRODUCT_NAME" >}}.
{{< /admonition >}}

[Components]: ../components/
-[backward-compatibility]: ../../introduction/backward-compatibility/
+[backward-compatibility]: ../../introduction/backward-compatibility/
6 changes: 3 additions & 3 deletions docs/sources/get-started/component_controller.md
Original file line number Diff line number Diff line change
Expand Up @@ -81,7 +81,7 @@ The overall health of a component is determined by combining the controller-repo
An individual component's health is independent of the health of any other components it references.
A component can be marked as healthy even if it references an exported field of an unhealthy component.

-## Handling evaluation failures
+## Evaluation failures

When a component fails to evaluate, it's marked as unhealthy with the reason for why the evaluation failed.

@@ -93,7 +93,7 @@ If your `local.file` component, which watches API keys, suddenly stops working,

## In-memory traffic

-Components that expose HTTP endpoints, such as [prometheus.exporter.unix][], can expose an internal address that completely bypasses the network and communicate in-memory.
+Components that expose HTTP endpoints, such as [`prometheus.exporter.unix`][prometheus.exporter.unix], can expose an internal address that completely bypasses the network and communicate in-memory.
Components within the same process can communicate with one another without needing to be aware of any network-level protections such as authentication or mutual TLS.

The internal address defaults to `alloy.internal:12345`.
Expand All @@ -102,7 +102,7 @@ If this address collides with a real target on your network, change it to someth
Components must opt-in to using in-memory traffic.
Refer to the individual documentation for components to learn if in-memory traffic is supported.

-## Updating the configuration file
+## Configuration file updates

The `/-/reload` HTTP endpoint and the `SIGHUP` signal can inform the component controller to reload the configuration file.
When this happens, the component controller synchronizes the set of running components with the ones in the configuration file,
8 changes: 4 additions & 4 deletions docs/sources/get-started/configuration-syntax/_index.md
@@ -61,15 +61,15 @@

The {{< param "PRODUCT_NAME" >}} syntax aims to reduce errors in configuration files by making configurations easier to read and write.
The {{< param "PRODUCT_NAME" >}} syntax uses blocks, attributes, and expressions.
-The blocks can be copied and pasted from the documentation to help you get started as quickly as possible.
+You can copy and paste the blocks from the documentation to help you get started as quickly as possible.

-The {{< param "PRODUCT_NAME" >}} syntax is declarative, so ordering components, blocks, and attributes does not matter.
+The {{< param "PRODUCT_NAME" >}} syntax is declarative, so ordering components, blocks, and attributes doesn't matter.
The relationship between components determines the order of operations in the pipeline.

## Blocks

You use _Blocks_ to configure components and groups of attributes.
-Each block can contain any number of attributes or nested blocks.
+Each block can contain any number of attributes or nested blocks.
Blocks are steps in the overall pipeline expressed by the configuration.

```alloy
@@ -111,7 +111,7 @@ The {{< param "PRODUCT_NAME" >}} syntax supports complex expressions, for exampl

You can use expressions for any attribute inside a component definition.

-### Referencing component exports
+### Reference component exports

The most common expression is to reference the exports of a component, for example, `local.file.password_file.content`.
You form a reference to a component's exports by merging the component's name (for example, `local.file`),
@@ -52,7 +52,7 @@ local.file "targets" {
}
```

-## Referencing components
+## Reference components

To wire components together, one can use the exports of one as the arguments to another by using references.
References can only appear in components.
@@ -7,15 +7,15 @@ title: Referencing component exports
weight: 200
---

-# Referencing component exports
+# Reference component exports

Referencing exports enables {{< param "PRODUCT_NAME" >}} to configure and connect components dynamically using expressions.
While components can work in isolation, they're more useful when one component's behavior and data flow are bound to the exports of another, building a dependency relationship between the two.

Such references can only appear as part of another component's arguments or a configuration block's fields.
Components can't reference themselves.

-## Using references
+## Use references

You build references by combining the component's name, label, and named export with dots.

@@ -45,7 +45,7 @@ In the preceding example, you wired together a very simple pipeline by writing a

{{< figure src="/media/docs/alloy/diagram-referencing-exports.png" alt="Example of a pipeline" >}}

-After the value is resolved, it must match the [type][] of the attribute it is assigned to.
+After the value is resolved, it must match the [type][] of the attribute it's assigned to.
While you can only configure attributes using the basic {{< param "PRODUCT_NAME" >}} types,
the exports of components can take on special internal {{< param "PRODUCT_NAME" >}} types, such as Secrets or Capsules, which expose different functionality.

@@ -68,7 +68,7 @@ The following table shows the supported escape sequences.
| `\\` | The `\` character `U+005C` |
| `\a` | The alert or bell character `U+0007` |
| `\b` | The backspace character `U+0008` |
-| `\f` | The formfeed character `U+000C` |
+| `\f` | The form feed character `U+000C` |
| `\n` | The newline character `U+000A` |
| `\r` | The carriage return character `U+000D` |
| `\t` | The horizontal tab character `U+0009` |
@@ -176,13 +176,13 @@ The null value is represented by the symbol `null`.

## Special types

-#### Secrets
+### Secrets

A `secret` is a special type of string that's never displayed to the user.
You can assign `string` values to an attribute expecting a `secret`, but never the inverse.
It's impossible to convert a secret to a string or assign a secret to an attribute expecting a string.

-#### Capsules
+### Capsules

A `capsule` is a special type that represents a category of _internal_ types used by {{< param "PRODUCT_NAME" >}}.
Each capsule type has a unique name and is represented to the user as `capsule("<SOME_INTERNAL_NAME>")`.
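A short sketch of the secret rule above, with an assumed file path: setting `is_secret = true` makes the exported `content` field a `secret` rather than a `string`, so it can be assigned to secret-typed attributes but never converted back into a plain string:

```alloy
// With is_secret = true, the exported `content` has type `secret`;
// string-typed attributes can't accept it, which prevents the value
// from being displayed in the UI.
local.file "api_key" {
  filename  = "/run/secrets/api-key"
  is_secret = true
}
```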