diff --git a/CHANGELOG.md b/CHANGELOG.md
index 895522fe07..7a578c798c 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -22,7 +22,9 @@ Main (unreleased)
 
 - Add `otelcol.exporter.syslog` component to export logs in syslog format (@dehaansa)
 
-- (_Experimental_) Add a `database_observability.mysql` component to collect mysql performance data.
+- (_Experimental_) Add a `database_observability.mysql` component to collect MySQL performance data. (@cristiangreco & @matthewnolf)
+
+- Add `otelcol.receiver.influxdb` to convert InfluxDB metrics into OTel format. (@EHSchmitt4395)
 
 ### Enhancements
 
@@ -32,10 +34,10 @@ Main (unreleased)
 
 - Add relevant golang environment variables to the support bundle (@dehaansa)
 
 - Add support for server authentication to otelcol components. (@aidaleuc)
 
 - Logs from underlying clustering library `memberlist` are now surfaced with correct level (@thampiotr)
 
 - Update mysqld_exporter from v0.15.0 to v0.16.0 (including 2ef168bf6), most notable changes: (@cristiangreco)
   - Support MySQL 8.4 replicas syntax
   - Fetch lock time and cpu time from performance schema
@@ -46,33 +48,57 @@ Main (unreleased)
   - Change processlist query to support ONLY_FULL_GROUP_BY sql_mode
   - Add perf_schema quantile columns to collector
 
-- For sharding targets during clustering, `loki.source.podlogs` now only takes into account some labels. (@ptodev)
+- Add three new stdlib functions: `to_base64`, `from_URLbase64`, and `to_URLbase64`. (@ravishankar15)
 
 ### Bugfixes
 
-- Fixed an issue in the `pyroscope.write` component to allow slashes in application names in the same way it is done in the Pyroscope push API (@marcsanmi)
-- Fixed an issue in the `prometheus.exporter.postgres` component that would leak goroutines when the target was not reachable (@dehaansa)
-
-- Fixed an issue in the `otelcol.exporter.prometheus` component that would set series value incorrectly for stale metrics (@YusifAghalar)
 - Fixed issue with reloading configuration and prometheus metrics duplication in `prometheus.write.queue`. (@mattdurham)
 
-- Fixed an issue in the `otelcol.processor.attribute` component where the actions `delete` and `hash` could not be used with the `pattern` argument. (@wildum)
+- Updated `prometheus.write.queue` to fix issue with TTL comparing different scales of time. (@mattdurham)
+
+- Fixed an issue in the `prometheus.operator.servicemonitors`, `prometheus.operator.podmonitors`, and `prometheus.operator.probes` components to support capitalized actions. (@QuentinBisson)
+
+- Fixed an issue where the `otelcol.processor.interval` component could not be used because the debug metrics were not set to default. (@wildum)
+
+### Other changes
+
+- Change the stability of the `livedebugging` feature from "experimental" to "generally available". (@wildum)
+
+- Use Go 1.23.3 for builds. (@mattdurham)
+
+v1.5.1
+-----------------
+
+### Enhancements
+
+- Logs from underlying clustering library `memberlist` are now surfaced with correct level (@thampiotr)
+
+- Allow setting `informer_sync_timeout` in `prometheus.operator.*` components. (@captncraig)
+
+- For sharding targets during clustering, `loki.source.podlogs` now only takes into account some labels.
(@ptodev) + +### Bugfixes + +- Fixed an issue in the `pyroscope.write` component to prevent TLS connection churn to Pyroscope when the `pyroscope.receive_http` clients don't request keepalive (@madaraszg-tulip) + +- Fixed an issue in the `pyroscope.write` component with multiple endpoints not working correctly for forwarding profiles from `pyroscope.receive_http` (@madaraszg-tulip) - Fixed a few race conditions that could lead to a deadlock when using `import` statements, which could lead to a memory leak on `/metrics` endpoint of an Alloy instance. (@thampiotr) - Fix a race condition where the ui service was dependent on starting after the remotecfg service, which is not guaranteed. (@dehaansa & @erikbaranowski) +- Fixed an issue in the `otelcol.exporter.prometheus` component that would set series value incorrectly for stale metrics (@YusifAghalar) + - `loki.source.podlogs`: Fixed a bug which prevented clustering from working and caused duplicate logs to be sent. The bug only happened when no `selector` or `namespace_selector` blocks were specified in the Alloy configuration. (@ptodev) -- Updated `prometheus.write.queue` to fix issue with TTL comparing different scales of time. (@mattdurham) - -### Other changes +- Fixed an issue in the `pyroscope.write` component to allow slashes in application names in the same way it is done in the Pyroscope push API (@marcsanmi) -- Change the stability of the `livedebugging` feature from "experimental" to "generally available". (@wildum) +- Fixed a crash when updating the configuration of `remote.http`. (@kinolaev) -- Use Go 1.23.3 for builds. (@mattdurham) +- Fixed an issue in the `otelcol.processor.attribute` component where the actions `delete` and `hash` could not be used with the `pattern` argument. (@wildum) +- Fixed an issue in the `prometheus.exporter.postgres` component that would leak goroutines when the target was not reachable (@dehaansa) v1.5.0 ----------------- @@ -270,6 +299,8 @@ v1.4.0 - Add the label `alloy_cluster` in the metric `alloy_config_hash` when the flag `cluster.name` is set to help differentiate between configs from the same alloy cluster or different alloy clusters. (@wildum) + +- Add support for discovering the cgroup path(s) of a process in `process.discovery`. (@mahendrapaipuri) ### Bugfixes diff --git a/CODEOWNERS b/CODEOWNERS index 4725929a40..750a939cd5 100644 --- a/CODEOWNERS +++ b/CODEOWNERS @@ -20,5 +20,6 @@ /docs/sources/ @clayton-cornell # Components: -/internal/component/pyroscope/ @grafana/grafana-alloy-profiling-maintainers -/internal/component/beyla/ @marctc +/internal/component/pyroscope/ @grafana/grafana-alloy-profiling-maintainers +/internal/component/beyla/ @marctc +/internal/component/database_observability/ @cristiangreco @matthewnolf diff --git a/docs/sources/_index.md b/docs/sources/_index.md index 213954a9ce..906b4e5837 100644 --- a/docs/sources/_index.md +++ b/docs/sources/_index.md @@ -57,7 +57,7 @@ cards: In addition, you can use {{< param "PRODUCT_NAME" >}} pipelines to do different tasks, such as configure alert rules in Loki and [Mimir][]. {{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and [Promtail][]. You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combine it into a hybrid system of multiple collectors and agents. 
-You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor. +You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor. {{< param "PRODUCT_NAME" >}} is flexible, and you can easily configure it to fit your needs in on-prem, cloud-only, or a mix of both. {{< admonition type="tip" >}} diff --git a/docs/sources/_index.md.t b/docs/sources/_index.md.t index 05c8523e5b..745958c16d 100644 --- a/docs/sources/_index.md.t +++ b/docs/sources/_index.md.t @@ -57,7 +57,7 @@ cards: In addition, you can use {{< param "PRODUCT_NAME" >}} pipelines to do different tasks, such as configure alert rules in Loki and [Mimir][]. {{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and [Promtail][]. You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combine it into a hybrid system of multiple collectors and agents. -You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor. +You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor. {{< param "PRODUCT_NAME" >}} is flexible, and you can easily configure it to fit your needs in on-prem, cloud-only, or a mix of both. {{< admonition type="tip" >}} diff --git a/docs/sources/collect/ecs-opentelemetry-data.md b/docs/sources/collect/ecs-opentelemetry-data.md index 428bf0e926..298bd73bc2 100644 --- a/docs/sources/collect/ecs-opentelemetry-data.md +++ b/docs/sources/collect/ecs-opentelemetry-data.md @@ -84,13 +84,13 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet * Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics. 1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template. -## Run Alloy directly in your instance, or as a Kubernetes sidecar +## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar SSH or connect to the Amazon ECS or AWS Fargate-managed container. Refer to [9 steps to SSH into an AWS Fargate managed container][steps] for more information about using SSH with Amazon ECS or AWS Fargate. You can also use your own method to connect to the Amazon ECS or AWS Fargate-managed container as long as you can pass the parameters needed to install and configure {{< param "PRODUCT_NAME" >}}. -### Install Grafana Alloy +### Install {{% param "PRODUCT_NAME" %}} After connecting to your instance, follow the {{< param "PRODUCT_NAME" >}} [installation][install], [configuration][configure] and [deployment][deploy] instructions. 
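For a concrete starting point, the connect-and-install step might look like the following sketch. It assumes a Debian-based container image and Grafana's standard APT repository; adjust the package manager and paths for your distribution.

```shell
# Minimal sketch: install and start Alloy inside the connected container.
sudo mkdir -p /etc/apt/keyrings/
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update && sudo apt-get install -y alloy
alloy run /etc/alloy/config.alloy
```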
diff --git a/docs/sources/collect/opentelemetry-data.md b/docs/sources/collect/opentelemetry-data.md index d462408273..651ba3747a 100644 --- a/docs/sources/collect/opentelemetry-data.md +++ b/docs/sources/collect/opentelemetry-data.md @@ -38,7 +38,7 @@ This topic describes how to: Before components can receive OpenTelemetry data, you must have a component responsible for exporting the OpenTelemetry data. An OpenTelemetry _exporter component_ is responsible for writing (exporting) OpenTelemetry data to an external system. -In this task, you use the [otelcol.exporter.otlp][] component to send OpenTelemetry data to a server using the OpenTelemetry Protocol (OTLP). +In this task, you use the [`otelcol.exporter.otlp`][otelcol.exporter.otlp] component to send OpenTelemetry data to a server using the OpenTelemetry Protocol (OTLP). After an exporter component is defined, you can use other {{< param "PRODUCT_NAME" >}} components to forward data to it. {{< admonition type="tip" >}} @@ -137,7 +137,7 @@ otelcol.receiver.otlp "example" { } ``` -For more information on writing OpenTelemetry data using the OpenTelemetry Protocol, refer to [otelcol.exporter.otlp][]. +For more information on writing OpenTelemetry data using the OpenTelemetry Protocol, refer to [`otelcol.exporter.otlp`][otelcol.exporter.otlp]. ## Configure batching @@ -146,7 +146,7 @@ Instead, data is usually sent to one or more _processor components_ that perform Ensuring data is batched is a production-readiness step to improve data compression and reduce the number of outgoing network requests to external systems. -In this task, you configure an [otelcol.processor.batch][] component to batch data before sending it to the exporter. +In this task, you configure an [`otelcol.processor.batch`][otelcol.processor.batch] component to batch data before sending it to the exporter. {{< admonition type="note" >}} Refer to the list of available [Components][] for the full list of `otelcol.processor` components that you can use to process OpenTelemetry data. @@ -210,14 +210,14 @@ otelcol.exporter.otlp "default" { } ``` -For more information on configuring OpenTelemetry data batching, refer to [otelcol.processor.batch][]. +For more information on configuring OpenTelemetry data batching, refer to [`otelcol.processor.batch`][otelcol.processor.batch]. ## Configure an OpenTelemetry Protocol receiver You can configure {{< param "PRODUCT_NAME" >}} to receive OpenTelemetry metrics, logs, and traces. An OpenTelemetry _receiver_ component is responsible for receiving OpenTelemetry data from an external system. -In this task, you use the [otelcol.receiver.otlp][] component to receive OpenTelemetry data over the network using the OpenTelemetry Protocol (OTLP). +In this task, you use the [`otelcol.receiver.otlp`][otelcol.receiver.otlp] component to receive OpenTelemetry data over the network using the OpenTelemetry Protocol (OTLP). You can configure a receiver component to forward received data to other {{< param "PRODUCT_NAME" >}} components. > Refer to the list of available [Components][] for the full list of @@ -312,7 +312,7 @@ otelcol.exporter.otlp "default" { } ``` -For more information on receiving OpenTelemetry data using the OpenTelemetry Protocol, refer to [otelcol.receiver.otlp][]. +For more information on receiving OpenTelemetry data using the OpenTelemetry Protocol, refer to [`otelcol.receiver.otlp`][otelcol.receiver.otlp]. 
[OpenTelemetry]: https://opentelemetry.io [Configure an OpenTelemetry Protocol exporter]: #configure-an-opentelemetry-protocol-exporter diff --git a/docs/sources/configure/kubernetes.md b/docs/sources/configure/kubernetes.md index 2e7088b038..4dcb08e1e4 100644 --- a/docs/sources/configure/kubernetes.md +++ b/docs/sources/configure/kubernetes.md @@ -14,7 +14,7 @@ This page describes how to apply a new configuration to {{< param "PRODUCT_NAME" It assumes that: - You have [installed {{< param "PRODUCT_NAME" >}} on Kubernetes using the Helm chart][k8s-install]. -- You already have a new {{< param "PRODUCT_NAME" >}} configuration that you want to apply to your Helm chart installation. +- You already have a {{< param "PRODUCT_NAME" >}} configuration that you want to apply to your Helm chart installation. Refer to [Collect and forward data][collect] for information about configuring {{< param "PRODUCT_NAME" >}} to collect and forward data. @@ -25,15 +25,15 @@ Refer to [Collect and forward data][collect] for information about configuring { To modify {{< param "PRODUCT_NAME" >}}'s Helm chart configuration, perform the following steps: -1. Create a local `values.yaml` file with a new Helm chart configuration. +1. Create a local `values.yaml` file with a Helm chart configuration. 1. You can use your own copy of the values file or download a copy of the - default [values.yaml][]. + default [`values.yaml`][values.yaml]. 1. Make changes to your `values.yaml` to customize settings for the Helm chart. - Refer to the inline documentation in the default [values.yaml][] for more + Refer to the inline documentation in the default [`values.yaml`][values.yaml] for more information about each option. 1. Run the following command in a terminal to upgrade your {{< param "PRODUCT_NAME" >}} installation: diff --git a/docs/sources/configure/nonroot.md b/docs/sources/configure/nonroot.md index b3b1c8464c..5218fd26b8 100644 --- a/docs/sources/configure/nonroot.md +++ b/docs/sources/configure/nonroot.md @@ -24,7 +24,7 @@ You can configure a non-root user when you deploy {{< param "PRODUCT_NAME" >}} i {{< admonition type="note" >}} Running {{< param "PRODUCT_NAME" >}} as a non-root user won't work if you are using components like [beyla.ebpf][] that require root rights. -[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf +[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf/ {{< /admonition >}} To run {{< param "PRODUCT_NAME" >}} as a non-root user, configure a [security context][] for the {{< param "PRODUCT_NAME" >}} container. If you are using the [Grafana Helm chart][] you can add the following snippet to `values.yaml`: @@ -45,6 +45,6 @@ Not really. The Linux kernel prevents Docker containers from accessing host reso However, if there was a bug in the Linux kernel that allowed Docker containers to break out of the virtual environment, it would likely be easier to exploit this bug with a root user than with a non-root user. It's worth noting that the attacker would not only need to find such a Linux kernel bug, but would also need to find a way to make {{< param "PRODUCT_NAME" >}} exploit that bug. 
[image]: https://hub.docker.com/r/grafana/alloy -[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf +[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf/ [security context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ [Grafana Helm chart]: ../../configure/kubernetes/#configure-the-helm-chart diff --git a/docs/sources/configure/windows.md b/docs/sources/configure/windows.md index 3e0875bf6d..df5770517a 100644 --- a/docs/sources/configure/windows.md +++ b/docs/sources/configure/windows.md @@ -116,4 +116,4 @@ To expose the UI to other machines, complete the following steps: To listen on all interfaces, replace _``_ with `0.0.0.0`. [UI]: ../../troubleshoot/debug/#alloy-ui -[environment]: ../../reference/cli/environment-variables +[environment]: ../../reference/cli/environment-variables/ diff --git a/docs/sources/data-collection.md b/docs/sources/data-collection.md index 27d5bd56c4..8aac22b9f7 100644 --- a/docs/sources/data-collection.md +++ b/docs/sources/data-collection.md @@ -31,5 +31,5 @@ All newly reported data is documented in the CHANGELOG. You can use the `--disable-reporting` [command line flag][] to disable the reporting and opt-out of the data collection. -[components]: ../get-started/components -[command line flag]: ../reference/cli/run +[components]: ../get-started/components/ +[command line flag]: ../reference/cli/run/ diff --git a/docs/sources/get-started/clustering.md b/docs/sources/get-started/clustering.md index 4e9e97b623..5290dd702e 100644 --- a/docs/sources/get-started/clustering.md +++ b/docs/sources/get-started/clustering.md @@ -40,8 +40,8 @@ prometheus.scrape "default" { } ``` -A cluster state change is detected when a new node joins or an existing node leaves. -All participating components locally recalculate target ownership and re-balance the number of targets they’re scraping without explicitly communicating ownership over the network. +A cluster state change is detected when a node joins or a node leaves. +All participating components locally recalculate target ownership and re-balance the number of targets they're scraping without explicitly communicating ownership over the network. Target auto-distribution allows you to dynamically scale the number of {{< param "PRODUCT_NAME" >}} deployments to distribute workload during peaks. It also provides resiliency because targets are automatically picked up by one of the node peers if a node leaves. @@ -50,15 +50,15 @@ It also provides resiliency because targets are automatically picked up by one o Refer to component reference documentation to discover whether it supports clustering, such as: -- [prometheus.scrape][] -- [pyroscope.scrape][] -- [prometheus.operator.podmonitors][] -- [prometheus.operator.servicemonitors][] +- [`prometheus.scrape`][prometheus.scrape] +- [`pyroscope.scrape`][pyroscope.scrape] +- [`prometheus.operator.podmonitors`][prometheus.operator.podmonitors] +- [`prometheus.operator.servicemonitors`][prometheus.operator.servicemonitors] ## Cluster monitoring and troubleshooting You can use the {{< param "PRODUCT_NAME" >}} UI [clustering page][] to monitor your cluster status. -Refer to [Debugging clustering issues][debugging] for additional troubleshooting information. +Refer to [Debug clustering issues][debugging] for additional troubleshooting information. 
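To make the clustering opt-in concrete, the following sketch shows a scrape component participating in clustered target distribution. It assumes the process was started with `--cluster.enabled` and that a `prometheus.remote_write.default` component exists; the target address is illustrative.

```alloy
prometheus.scrape "default" {
  // Distribute these targets across all cluster peers.
  clustering {
    enabled = true
  }

  targets    = [{"__address__" = "app.example.com:9090"}]
  forward_to = [prometheus.remote_write.default.receiver]
}
```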
[run]: ../../reference/cli/run/#clustering [prometheus.scrape]: ../../reference/components/prometheus/prometheus.scrape/#clustering-block @@ -66,4 +66,4 @@ Refer to [Debugging clustering issues][debugging] for additional troubleshooting [prometheus.operator.podmonitors]: ../../reference/components/prometheus/prometheus.operator.podmonitors/#clustering-block [prometheus.operator.servicemonitors]: ../../reference/components/prometheus/prometheus.operator.servicemonitors/#clustering-block [clustering page]: ../../troubleshoot/debug/#clustering-page -[debugging]: ../../troubleshoot/debug/#debugging-clustering-issues +[debugging]: ../../troubleshoot/debug/#debug-clustering-issues diff --git a/docs/sources/get-started/community_components.md b/docs/sources/get-started/community_components.md index e72be1ee22..1051e48ff7 100644 --- a/docs/sources/get-started/community_components.md +++ b/docs/sources/get-started/community_components.md @@ -9,15 +9,15 @@ weight: 100 __Community components__ are [components][Components] implemented and maintained by the community. -While Grafana does not offer commercial support for these components, they still undergo acceptance and review by the {{< param "PRODUCT_NAME" >}} development team before being added to the repository. +While Grafana doesn't offer commercial support for these components, they still undergo acceptance and review by the {{< param "PRODUCT_NAME" >}} development team before being added to the repository. To use these community components, you must explicitly pass the `--feature.community-components.enabled` flag to the `run` command. -__Community components__ don't have a stability level. They aren't covered by our [backward compatibility strategy][backward-compatibility]. +__Community components__ don't have a stability level. They aren't covered by the [backward compatibility strategy][backward-compatibility]. {{< admonition type="warning" >}} __Community components__ without a maintainer may be disabled or removed if the components prevent or block the development of {{< param "PRODUCT_NAME" >}}. {{< /admonition >}} [Components]: ../components/ -[backward-compatibility]: ../../introduction/backward-compatibility/ \ No newline at end of file +[backward-compatibility]: ../../introduction/backward-compatibility/ diff --git a/docs/sources/get-started/component_controller.md b/docs/sources/get-started/component_controller.md index f335c3cd0a..3c2afc3207 100644 --- a/docs/sources/get-started/component_controller.md +++ b/docs/sources/get-started/component_controller.md @@ -81,7 +81,7 @@ The overall health of a component is determined by combining the controller-repo An individual component's health is independent of the health of any other components it references. A component can be marked as healthy even if it references an exported field of an unhealthy component. -## Handling evaluation failures +## Evaluation failures When a component fails to evaluate, it's marked as unhealthy with the reason for why the evaluation failed. @@ -93,7 +93,7 @@ If your `local.file` component, which watches API keys, suddenly stops working, ## In-memory traffic -Components that expose HTTP endpoints, such as [prometheus.exporter.unix][], can expose an internal address that completely bypasses the network and communicate in-memory. +Components that expose HTTP endpoints, such as [`prometheus.exporter.unix`][prometheus.exporter.unix], can expose an internal address that completely bypasses the network and communicate in-memory. 
Components within the same process can communicate with one another without needing to be aware of any network-level protections such as authentication or mutual TLS. The internal address defaults to `alloy.internal:12345`. @@ -102,7 +102,7 @@ If this address collides with a real target on your network, change it to someth Components must opt-in to using in-memory traffic. Refer to the individual documentation for components to learn if in-memory traffic is supported. -## Updating the configuration file +## Configuration file updates The `/-/reload` HTTP endpoint and the `SIGHUP` signal can inform the component controller to reload the configuration file. When this happens, the component controller synchronizes the set of running components with the ones in the configuration file, diff --git a/docs/sources/get-started/configuration-syntax/_index.md b/docs/sources/get-started/configuration-syntax/_index.md index 12909a71ff..d5240015b8 100644 --- a/docs/sources/get-started/configuration-syntax/_index.md +++ b/docs/sources/get-started/configuration-syntax/_index.md @@ -61,15 +61,15 @@ loki.write "local_loki" { The {{< param "PRODUCT_NAME" >}} syntax aims to reduce errors in configuration files by making configurations easier to read and write. The {{< param "PRODUCT_NAME" >}} syntax uses blocks, attributes, and expressions. -The blocks can be copied and pasted from the documentation to help you get started as quickly as possible. +You can copy and paste the blocks from the documentation to help you get started as quickly as possible. -The {{< param "PRODUCT_NAME" >}} syntax is declarative, so ordering components, blocks, and attributes does not matter. +The {{< param "PRODUCT_NAME" >}} syntax is declarative, so ordering components, blocks, and attributes doesn't matter. The relationship between components determines the order of operations in the pipeline. ## Blocks You use _Blocks_ to configure components and groups of attributes. -Each block can contain any number of attributes or nested blocks. +Each block can contain any number of attributes or nested blocks. Blocks are steps in the overall pipeline expressed by the configuration. ```alloy @@ -111,7 +111,7 @@ The {{< param "PRODUCT_NAME" >}} syntax supports complex expressions, for exampl You can use expressions for any attribute inside a component definition. -### Referencing component exports +### Reference component exports The most common expression is to reference the exports of a component, for example, `local.file.password_file.content`. You form a reference to a component's exports by merging the component's name (for example, `local.file`), diff --git a/docs/sources/get-started/configuration-syntax/components.md b/docs/sources/get-started/configuration-syntax/components.md index dc54593c9f..479ee60b4d 100644 --- a/docs/sources/get-started/configuration-syntax/components.md +++ b/docs/sources/get-started/configuration-syntax/components.md @@ -52,7 +52,7 @@ local.file "targets" { } ``` -## Referencing components +## Reference components To wire components together, one can use the exports of one as the arguments to another by using references. References can only appear in components. 
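As an illustration of the wiring described above, the following sketch connects two components by reference; the endpoint URL and target address are placeholders.

```alloy
prometheus.remote_write "onprem" {
  endpoint {
    url = "http://prometheus.example.com/api/v1/write"
  }
}

prometheus.scrape "default" {
  // The exports of one component become the arguments of another.
  targets    = [{"__address__" = "demo.example.com:9090"}]
  forward_to = [prometheus.remote_write.onprem.receiver]
}
```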
diff --git a/docs/sources/get-started/configuration-syntax/expressions/referencing_exports.md b/docs/sources/get-started/configuration-syntax/expressions/referencing_exports.md index fbae629aad..a456cfc891 100644 --- a/docs/sources/get-started/configuration-syntax/expressions/referencing_exports.md +++ b/docs/sources/get-started/configuration-syntax/expressions/referencing_exports.md @@ -7,7 +7,7 @@ title: Referencing component exports weight: 200 --- -# Referencing component exports +# Reference component exports Referencing exports enables {{< param "PRODUCT_NAME" >}} to configure and connect components dynamically using expressions. While components can work in isolation, they're more useful when one component's behavior and data flow are bound to the exports of another, building a dependency relationship between the two. @@ -15,7 +15,7 @@ While components can work in isolation, they're more useful when one component's Such references can only appear as part of another component's arguments or a configuration block's fields. Components can't reference themselves. -## Using references +## Use references You build references by combining the component's name, label, and named export with dots. @@ -45,7 +45,7 @@ In the preceding example, you wired together a very simple pipeline by writing a {{< figure src="/media/docs/alloy/diagram-referencing-exports.png" alt="Example of a pipeline" >}} -After the value is resolved, it must match the [type][] of the attribute it is assigned to. +After the value is resolved, it must match the [type][] of the attribute it's assigned to. While you can only configure attributes using the basic {{< param "PRODUCT_NAME" >}} types, the exports of components can take on special internal {{< param "PRODUCT_NAME" >}} types, such as Secrets or Capsules, which expose different functionality. diff --git a/docs/sources/get-started/configuration-syntax/expressions/types_and_values.md b/docs/sources/get-started/configuration-syntax/expressions/types_and_values.md index da10f5ccd7..596e6913a3 100644 --- a/docs/sources/get-started/configuration-syntax/expressions/types_and_values.md +++ b/docs/sources/get-started/configuration-syntax/expressions/types_and_values.md @@ -68,7 +68,7 @@ The following table shows the supported escape sequences. | `\\` | The `\` character `U+005C` | | `\a` | The alert or bell character `U+0007` | | `\b` | The backspace character `U+0008` | -| `\f` | The formfeed character `U+000C` | +| `\f` | The form feed character `U+000C` | | `\n` | The newline character `U+000A` | | `\r` | The carriage return character `U+000D` | | `\t` | The horizontal tab character `U+0009` | @@ -176,13 +176,13 @@ The null value is represented by the symbol `null`. ## Special types -#### Secrets +### Secrets A `secret` is a special type of string that's never displayed to the user. You can assign `string` values to an attribute expecting a `secret`, but never the inverse. It's impossible to convert a secret to a string or assign a secret to an attribute expecting a string. -#### Capsules +### Capsules A `capsule` is a special type that represents a category of _internal_ types used by {{< param "PRODUCT_NAME" >}}. Each capsule type has a unique name and is represented to the user as `capsule("")`. 
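To illustrate the secret type in practice, the following sketch marks a file's content as a secret and assigns it to a secret-typed attribute; the file path, URL, and username are placeholders.

```alloy
local.file "password_file" {
  filename  = "/var/lib/alloy/password.txt"
  is_secret = true // the exported content is a secret, not a string
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"

    basic_auth {
      username = "admin"
      // Assigning a secret to a secret-typed attribute is allowed;
      // converting a secret back to a string is not.
      password = local.file.password_file.content
    }
  }
}
```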
diff --git a/docs/sources/get-started/configuration-syntax/syntax.md b/docs/sources/get-started/configuration-syntax/syntax.md index 00297810fd..015d8e747e 100644 --- a/docs/sources/get-started/configuration-syntax/syntax.md +++ b/docs/sources/get-started/configuration-syntax/syntax.md @@ -21,7 +21,7 @@ The language considers all direct and indirect dependencies between elements to ## Identifiers -{{< param "PRODUCT_NAME" >}} syntax considers an identifier as valid if it consists of one or more UTF-8 letters (A through Z, both upper- and lower-case), digits or underscores, but doesn't start with a digit. +{{< param "PRODUCT_NAME" >}} syntax considers an identifier as valid if it consists of one or more UTF-8 letters (A through Z, both upper- and lower-case), digits, or underscores, but doesn't start with a digit. ## Attributes and Blocks @@ -100,7 +100,6 @@ All block and attribute definitions are followed by a newline, which {{< param " A newline is treated as a terminator when it follows any expression, `]`, `)`, or `}`. {{< param "PRODUCT_NAME" >}} ignores other newlines and you can enter as many newlines as you want. -[identifier]: #identifiers [identifier]: #identifiers [expression]: ../expressions/ [type]: ../expressions/types_and_values/ diff --git a/docs/sources/get-started/custom_components.md b/docs/sources/get-started/custom_components.md index 0c75f242cc..42ded3e07f 100644 --- a/docs/sources/get-started/custom_components.md +++ b/docs/sources/get-started/custom_components.md @@ -17,7 +17,7 @@ A custom component is composed of: * _Exports_: Values that a custom component exposes to its consumers. * _Components_: Built-in and custom components that are run as part of the custom component. -## Creating custom components +## Create custom components You can create a new custom component using [the `declare` configuration block][declare]. The label of the block determines the name of the custom component. @@ -33,7 +33,7 @@ To learn how to share custom components across multiple files, refer to [Modules ## Example -This example creates a new custom component called `add`, which exports the sum of two arguments: +This example creates a custom component called `add`, which exports the sum of two arguments: ```alloy declare "add" { @@ -52,6 +52,7 @@ add "example" { // add.example.sum == 32 ``` + [declare]: ../../reference/config-blocks/declare/ [argument]: ../../reference/config-blocks/argument/ [export]: ../../reference/config-blocks/export/ diff --git a/docs/sources/get-started/modules.md b/docs/sources/get-started/modules.md index 393039cdfa..852e7bd94c 100644 --- a/docs/sources/get-started/modules.md +++ b/docs/sources/get-started/modules.md @@ -12,17 +12,17 @@ weight: 400 A _Module_ is a unit of {{< param "PRODUCT_NAME" >}} configuration, which combines all the other concepts, containing a mix of configuration blocks, instantiated components, and custom component definitions. The module passed as an argument to [the `run` command][run] is called the _main configuration_. -Modules can be [imported](#importing-modules) to enable the reuse of [custom components][] defined by that module. +Modules can be [imported](#import-modules) to enable the reuse of [custom components][] defined by that module. -## Importing modules +## Import modules A module can be _imported_, allowing the custom components defined by that module to be used by other modules, called the _importing module_. 
Modules can be imported from multiple locations using one of the `import` configuration blocks: -* [import.file][]: Imports a module from a file on disk. -* [import.git][]: Imports a module from a file located in a Git repository. -* [import.http][]: Imports a module from the response of an HTTP request. -* [import.string][]: Imports a module from a string. +* [`import.file`][import.file]: Imports a module from a file on disk. +* [`import.git`][import.git]: Imports a module from a file located in a Git repository. +* [`import.http`][import.http]: Imports a module from the response of an HTTP request. +* [`import.string`][import.string]: Imports a module from a string. {{< admonition type="warning" >}} You can't import a module that contains top-level blocks other than `declare` or `import`. @@ -35,7 +35,7 @@ For example, if a configuration contains a block called `import.file "my_module" If an import namespace matches the name of a built-in component namespace, such as `prometheus`, the built-in namespace is hidden from the importing module, and only components defined in the imported module may be used. {{< admonition type="warning" >}} -If you choose a label that corresponds to an existing component for an `import` or a `declare` block, the component will be shadowed and you won't be able to use it in your configuration. +If you choose a label that corresponds to an existing component for an `import` or a `declare` block, the component is shadowed and you won't be able to use it in your configuration. For example, if you use the label `import.file "mimir"`, you won't be able to use the existing components that start with `mimir` such as `mimir.rules.kubernetes` because it refers to the module imported via the `import` block. {{< /admonition >}} @@ -106,9 +106,8 @@ loki.write "default" { ## Security -Since modules can load an arbitrary configuration from a potentially remote source, it is important to carefully consider the security of your solution. -The best practice is to ensure that Alloy configuration cannot be changed by attackers. This includes Alloy's main configuration files as well as -modules fetched from remote locations such as Git repositories or HTTP servers. +Since modules can load an arbitrary configuration from a potentially remote source, it's important to carefully consider the security of your solution. +The best practice is to ensure that the {{< param "PRODUCT_NAME" >}} configuration can't be changed by attackers. This includes the main {{< param "PRODUCT_NAME" >}} configuration files as well as modules fetched from remote locations such as Git repositories or HTTP servers. [custom components]: ../custom_components/ [run]: ../../reference/cli/run/ diff --git a/docs/sources/introduction/supported-platforms.md b/docs/sources/introduction/supported-platforms.md index fb62db4698..b79fde02af 100644 --- a/docs/sources/introduction/supported-platforms.md +++ b/docs/sources/introduction/supported-platforms.md @@ -23,7 +23,7 @@ The following operating systems and hardware architecture are supported. 
 ## macOS
 
 * Minimum version: macOS 10.13 or later
-* Architectures: AMD64 (Intel), ARM64 (Apple Silicon)
+* Architectures: AMD64 on Intel, ARM64 on Apple Silicon
 
 ## FreeBSD
 
diff --git a/docs/sources/reference/cli/convert.md b/docs/sources/reference/cli/convert.md
index 7c8e2c43ea..0d633fbab2 100644
--- a/docs/sources/reference/cli/convert.md
+++ b/docs/sources/reference/cli/convert.md
@@ -16,16 +16,14 @@ The `convert` command converts a supported configuration format to the {{< param
 
 ## Usage
 
-Usage:
-
 ```shell
 alloy convert [<FLAG> ...] <PATH>
 ```
 
- Replace the following:
+Replace the following:
 
- * _`<FLAG>`_: One or more flags that define the input and output of the command.
- * _`<PATH>`_: The {{< param "PRODUCT_NAME" >}} configuration file.
+* _`<FLAG>`_: One or more flags that define the input and output of the command.
+* _`<PATH>`_: The {{< param "PRODUCT_NAME" >}} configuration file.
 
 If the _`<PATH>`_ argument isn't provided or if the _`<PATH>`_ argument is equal to `-`, `convert` converts the contents of standard input.
 Otherwise, `convert` reads and converts the file from disk specified by the argument.
@@ -40,13 +38,14 @@ The following flags are supported:
 
 * `--output`, `-o`: The filepath and filename where the output is written.
 * `--report`, `-r`: The filepath and filename where the report is written.
-* `--source-format`, `-f`: Required. The format of the source file. Supported formats: [otelcol], [prometheus], [promtail], [static].
+* `--source-format`, `-f`: Required. The format of the source file. Supported formats: [`otelcol`][otelcol], [`prometheus`][prometheus], [`promtail`][promtail], [`static`][static].
 * `--bypass-errors`, `-b`: Enable bypassing errors when converting.
 * `--extra-args`, `-e`: Extra arguments from the original format used by the converter.
 
 ### Defaults
 
 {{< param "PRODUCT_NAME" >}} defaults are managed as follows:
+
 * If a provided source configuration value matches an {{< param "PRODUCT_NAME" >}} default value, the property is left off the output.
 * If a non-provided source configuration value default matches an {{< param "PRODUCT_NAME" >}} default value, the property is left off the output.
 * If a non-provided source configuration value default doesn't match an {{< param "PRODUCT_NAME" >}} default value, the default value is included in the output.
 
 ### Errors
 
 Errors are defined as non-critical issues identified during the conversion where an output can still be generated.
-These can be bypassed using the `--bypass-errors` flag.
+You can use the `--bypass-errors` flag to bypass these errors.
 
 ### OpenTelemetry Collector
 
@@ -71,7 +70,7 @@ Refer to [Migrate from OpenTelemetry Collector to {{< param "PRODUCT_NAME" >}}][
 
 ### Prometheus
 
 Using the `--source-format=prometheus` will convert the source configuration from [Prometheus v2.45][] to an {{< param "PRODUCT_NAME" >}} configuration.
 
-This includes Prometheus features such as [scrape_config][], [relabel_config][], [metric_relabel_configs][], [remote_write][], and many supported *_sd_configs.
+This includes Prometheus features such as [`scrape_config`][scrape_config], [`relabel_config`][relabel_config], [`metric_relabel_configs`][metric_relabel_configs], [`remote_write`][remote_write], and many supported `*_sd_configs`.
 Unsupported features in a source configuration result in [errors][].
 
 Refer to [Migrate from Prometheus to {{< param "PRODUCT_NAME" >}}][migrate prometheus] for a detailed migration guide.
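Putting the flags above together, a typical invocation might look like the following sketch; the file names are placeholders.

```shell
# Convert a local Prometheus configuration, writing the converted
# configuration and a conversion report next to it.
alloy convert --source-format=prometheus --report=report.txt --output=config.alloy prometheus.yaml
```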
@@ -110,6 +109,7 @@ Refer to [Migrate from Grafana Agent Static to {{< param "PRODUCT_NAME" >}}][mig [relabel_config]: https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#relabel_config [metric_relabel_configs]: https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#metric_relabel_configs [remote_write]: https://prometheus.io/docs/prometheus/2.45/configuration/configuration/#remote_write +[Component Reference]: ../../components/otelcol/ [migrate otelcol]: ../../../set-up/migrate/from-otelcol/ [migrate prometheus]: ../../../set-up/migrate/from-prometheus/ [Promtail v2.8.x]: https://grafana.com/docs/loki/v2.8.x/clients/promtail/ diff --git a/docs/sources/reference/cli/environment-variables.md b/docs/sources/reference/cli/environment-variables.md index 2039f9cea6..a0633188bc 100644 --- a/docs/sources/reference/cli/environment-variables.md +++ b/docs/sources/reference/cli/environment-variables.md @@ -28,28 +28,28 @@ Refer to the [Go runtime][runtime] documentation for more information about Go r ## GODEBUG -You can use the `GODEBUG` environment variable to control the debugging variables within the Go runtime. The following arguments are supported. +You can use the `GODEBUG` environment variable to control the debugging variables within the Go runtime. The following arguments are supported. - Argument | Description | Default -------------------------|------------------------------------------------------------------------------------------------------|--------- - `x509usefallbackroots` | Enforce a fallback on the X.509 trusted root certificates. Set to `1` to enable. | `0` - `netdns` | Force a resolver. Set to `go` for a pure Go resolver. Set to `cgo` or `win32` for a native resolver. | - `netdns` | Show resolver debugging information. Set to `1` for basic information. Set to `2` for verbose. | +Argument | Description | Default +-----------------------|------------------------------------------------------------------------------------------------------|-------- +`x509usefallbackroots` | Enforce a fallback on the X.509 trusted root certificates. Set to `1` to enable. | `0` +`netdns` | Force a resolver. Set to `go` for a pure Go resolver. Set to `cgo` or `win32` for a native resolver. | +`netdns` | Show resolver debugging information. Set to `1` for basic information. Set to `2` for verbose. | ## HTTP_PROXY, HTTPS_PROXY, NO_PROXY -You can use the `HTTP_PROXY` environment variable to define the hostname or IP address of the proxy server for HTTP requests. For example, you can set the proxy to `http://proxy.example.com`. +You can use the `HTTP_PROXY` environment variable to define the hostname or IP address of the proxy server for HTTP requests. For example, you can set the proxy to `http://proxy.example.com`. You can use the `HTTPS_PROXY` environment variable to define the proxy server for HTTPS requests in the same manner as `HTTP_PROXY`. The `NO_PROXY` environment variable is used to define any hosts that should be excluded from proxying. `NO_PROXY` should contain a comma delimited list of any of the following options. - Option | Description | Examples -------------------------|----------------------------------------------------------------------------------------------------------------|--------- - IP Address | A single IP address (with optional port) | `1.2.3.4` or `1.2.3.4:80` - CIDR Block | A group of IP addresses that share a network prefix. | `1.2.3.4/8` - Domain | A domain name matches that name and all subdomains. A domain name with a leading "." 
matches subdomains only. | `example.com` or `.example.com`
- Asterisk                | A single asterisk indicates that no proxying should be done.                                                    | `*`
+Option     | Description                                                                                                    | Examples
+-----------|----------------------------------------------------------------------------------------------------------------|--------------------------------
+IP Address | A single IP address (with optional port)                                                                       | `1.2.3.4` or `1.2.3.4:80`
+CIDR Block | A group of IP addresses that share a network prefix.                                                           | `1.2.3.4/8`
+Domain     | A domain name matches that name and all subdomains. A domain name with a leading "." matches subdomains only. | `example.com` or `.example.com`
+Asterisk   | A single asterisk indicates that no proxying should be done.                                                   | `*`
 
 ## PPROF_MUTEX_PROFILING_PERCENT
 
@@ -75,28 +75,27 @@ Don't treat the `GOMEMLIMIT` environment variable as a hard memory limit.
 A rough guideline is to set `GOMEMLIMIT` to 90% of the maximum memory required.
 For example, if you want to keep memory usage below `10GiB`, use `GOMEMLIMIT=9GiB`.
 
-#### Automatically set GOMEMLIMIT
+### Automatically set GOMEMLIMIT
 
-The `GOMEMLIMIT` environment variable is either automatically set to 90% of an available `cgroup` value using the [automemlimit] module, or you can explicitly set the `GOMEMLIMIT` environment variable before you run {{< param "PRODUCT_NAME" >}}.
+The `GOMEMLIMIT` environment variable is either automatically set to 90% of an available `cgroup` value using the [`automemlimit`][automemlimit] module, or you can explicitly set the `GOMEMLIMIT` environment variable before you run {{< param "PRODUCT_NAME" >}}.
 You can also change the 90% ratio by setting the `AUTOMEMLIMIT` environment variable to a float value between `0` and `1.0`.
-No changes will occur if the limit cannot be determined and you did not explicitly define a `GOMEMLIMIT` value.
+No changes occur if the limit can't be determined and you didn't explicitly define a `GOMEMLIMIT` value.
 
 ## GOGC
 
 The `GOGC` environment variable controls the mechanism that triggers Go's garbage collection.
-It represents the garbage collection target percentage. A collection is triggered when the ratio
-of freshly allocated data to live data remaining after the previous collection reaches this percentage.
+It represents the garbage collection target percentage.
+A collection is triggered when the ratio of freshly allocated data to live data remaining after the previous collection reaches this percentage.
 If you don't provide this variable, GOGC defaults to `100`.
 You can set `GOGC=off` to disable garbage collection.
 
-Configuring this value in conjunction with `GOMEMLIMIT` can help in situations where {{< param "PRODUCT_NAME" >}}
-is consuming too much memory. Go provides a [very in-depth guide][gc_guide] to understanding `GOGC` and `GOMEMLIMIT`.
+Configuring this value in conjunction with `GOMEMLIMIT` can help in situations where {{< param "PRODUCT_NAME" >}} is consuming too much memory.
+Go provides a [very in-depth guide][gc_guide] to understanding `GOGC` and `GOMEMLIMIT`.
 
 ## GOMAXPROCS
 
 The `GOMAXPROCS` environment variable defines the limit of OS threads that can simultaneously execute user-level Go code.
-This limit does not affect the number of threads that can be blocked in system calls on behalf of Go code and those
-threads are not counted against `GOMAXPROCS`.
+This limit doesn't affect the number of threads that can be blocked in system calls on behalf of Go code and those threads aren't counted against `GOMAXPROCS`.
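As a combined illustration of the variables above, the following sketch sets them before starting the process; the values are illustrative, not recommendations.

```shell
export GOMEMLIMIT=9GiB   # soft memory limit, roughly 90% of the memory budget
export GOGC=75           # collect garbage more aggressively than the default of 100
export GOMAXPROCS=4      # cap OS threads executing user-level Go code
alloy run /etc/alloy/config.alloy
```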
 ## GOTRACEBACK
 
@@ -105,16 +104,15 @@ The standard panic output behavior is usually sufficient to debug and resolve an
 If required, you can use this setting to collect additional information from the runtime.
 The following values are supported.
 
-Value | Description | Traces include runtime internal functions
------------------|---------------------------------------------------------------------------------|------------------------------------------
- `none` or `0`   | Omit goroutine stack traces entirely from the panic output.                      | -
- `single`        | Print the stack trace for the current goroutine.                                 | No
- `all` or `1`    | Print the stack traces for all user-created goroutines.                          | No
- `system` or `2` | Print the stack traces for all user-created and runtime-created goroutines.      | Yes
- `crash`         | Similar to `system`, but also triggers OS-specific additional behavior. For example, on Unix systems, this raises a SIGABRT to trigger a code dump. | Yes
- `wer`           | Similar to `crash`, but does not disable Windows Error Reporting.                | Yes
+Value           | Description                                                                                                                                         | Traces include runtime internal functions
+----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------
+`none` or `0`   | Omit goroutine stack traces entirely from the panic output.                                                                                         | -
+`single`        | Print the stack trace for the current goroutine.                                                                                                    | No
+`all` or `1`    | Print the stack traces for all user-created goroutines.                                                                                             | No
+`system` or `2` | Print the stack traces for all user-created and runtime-created goroutines.                                                                         | Yes
+`crash`         | Similar to `system`, but also triggers OS-specific additional behavior. For example, on Unix systems, this raises a SIGABRT to trigger a code dump. | Yes
+`wer`           | Similar to `crash`, but doesn't disable Windows Error Reporting.                                                                                    | Yes
 
 [runtime]: https://pkg.go.dev/runtime
 [automemlimit]: https://github.com/KimMachineGun/automemlimit
 [gc_guide]: https://tip.golang.org/doc/gc-guide#GOGC
-[Windows]: ../../../configure/windows
\ No newline at end of file
diff --git a/docs/sources/reference/cli/fmt.md b/docs/sources/reference/cli/fmt.md
index 2959ba9d98..3fdf9d0097 100644
--- a/docs/sources/reference/cli/fmt.md
+++ b/docs/sources/reference/cli/fmt.md
@@ -6,22 +6,20 @@ title: The fmt command
 weight: 200
 ---
 
-# The fmt command
+# The `fmt` command
 
 The `fmt` command formats a given {{< param "PRODUCT_NAME" >}} configuration file.
 
 ## Usage
 
-Usage:
-
 ```shell
 alloy fmt [<FLAG> ...] <PATH>
 ```
 
- Replace the following:
+Replace the following:
 
- * _`<FLAG>`_: One or more flags that define the input and output of the command.
- * _`<PATH>`_: The {{< param "PRODUCT_NAME" >}} configuration file.
+* _`<FLAG>`_: One or more flags that define the input and output of the command.
+* _`<PATH>`_: The {{< param "PRODUCT_NAME" >}} configuration file.
 
 If the _`<PATH>`_ argument isn't provided or if the _`<PATH>`_ argument is equal to `-`, `fmt` formats the contents of standard input.
 Otherwise, `fmt` reads and formats the file from disk specified by the argument.
diff --git a/docs/sources/reference/cli/run.md b/docs/sources/reference/cli/run.md
index c154696260..4cbe811690 100644
--- a/docs/sources/reference/cli/run.md
+++ b/docs/sources/reference/cli/run.md
@@ -6,30 +6,28 @@ title: The run command
 weight: 300
 ---
 
-# The run command
+# The `run` command
 
 The `run` command runs {{< param "PRODUCT_NAME" >}} in the foreground until an interrupt is received.
 
 ## Usage
 
-Usage:
-
 ```shell
 alloy run [<FLAG> ...] <PATH>
 ```
 
- Replace the following:
+Replace the following:
 
- * _`<FLAG>`_: One or more flags that define the input and output of the command.
- * _`<PATH>`_: Required. The {{< param "PRODUCT_NAME" >}} configuration file/directory path.
+* _`<FLAG>`_: One or more flags that define the input and output of the command.
+* _`<PATH>`_: Required. The {{< param "PRODUCT_NAME" >}} configuration file/directory path.
 
-If the _`<PATH>`_ argument is not provided, or if the configuration path can't be loaded or contains errors during the initial load, the `run` command will immediately exit and show an error message.
+If the _`<PATH>`_ argument isn't provided, or if the configuration path can't be loaded or contains errors during the initial load, the `run` command immediately exits and shows an error message.
 
-If you give the _`<PATH>`_ argument a directory path, {{< param "PRODUCT_NAME" >}} will find `*.alloy` files (ignoring nested directories) and load them as a single configuration source.
+If you give the _`<PATH>`_ argument a directory path, {{< param "PRODUCT_NAME" >}} finds `*.alloy` files (ignoring nested directories) and loads them as a single configuration source.
 However, component names must be **unique** across all {{< param "PRODUCT_NAME" >}} configuration files, and configuration blocks must not be repeated.
 
-{{< param "PRODUCT_NAME" >}} will continue to run if subsequent reloads of the configuration file fail, potentially marking components as unhealthy depending on the nature of the failure.
-When this happens, {{< param "PRODUCT_NAME" >}} will continue functioning in the last valid state.
+{{< param "PRODUCT_NAME" >}} continues to run if subsequent reloads of the configuration file fail, potentially marking components as unhealthy depending on the nature of the failure.
+When this happens, {{< param "PRODUCT_NAME" >}} continues functioning in the last valid state.
 
 `run` launches an HTTP server that exposes metrics about itself and its components.
 The HTTP server also exposes a UI at `/` for debugging running components.
 
@@ -52,8 +50,8 @@ The following flags are supported:
 
 * `--cluster.advertise-interfaces`: List of interfaces used to infer an address to advertise. Set to `all` to use all available network interfaces on the system. (default `"eth0,en0"`).
 * `--cluster.max-join-peers`: Number of peers to join from the discovered set (default `5`).
 * `--cluster.name`: Name to prevent nodes without this identifier from joining the cluster (default `""`).
-* `--cluster.enable-tls`: Specifies whether TLS should be used for communication between peers (default `false`). 
-* `--cluster.tls-ca-path`: Path to the CA certificate file used for peer communication over TLS. 
+* `--cluster.enable-tls`: Specifies whether TLS should be used for communication between peers (default `false`).
+* `--cluster.tls-ca-path`: Path to the CA certificate file used for peer communication over TLS.
 * `--cluster.tls-cert-path`: Path to the certificate file used for peer communication over TLS.
 * `--cluster.tls-key-path`: Path to the key file used for peer communication over TLS.
 * `--cluster.tls-server-name`: Server name used for peer communication over TLS.
@@ -102,20 +100,20 @@ Refer to [Release life cycle for Grafana Labs](https://grafana.com/docs/release-
 
 The `--cluster.enabled` command-line argument starts {{< param "PRODUCT_NAME" >}} in [clustering][] mode.
 The rest of the `--cluster.*` command-line flags can be used to configure how nodes discover and connect to one another.
 
-Each cluster member’s name must be unique within the cluster.
-Nodes which try to join with a conflicting name are rejected and will fall back to bootstrapping a new cluster of their own.
+Each cluster member's name must be unique within the cluster.
+Nodes which try to join with a conflicting name are rejected and fall back to bootstrapping a new cluster of their own.
 
 Peers communicate over HTTP/2 on the built-in HTTP server.
 Each node must be configured to accept connections on `--server.http.listen-addr` and the address defined or inferred in `--cluster.advertise-address`.
 
 If the `--cluster.advertise-address` flag isn't explicitly set, {{< param "PRODUCT_NAME" >}} tries to infer a suitable one from `--cluster.advertise-interfaces`.
-If `--cluster.advertise-interfaces` isn't explicitly set, {{< param "PRODUCT_NAME" >}} will infer one from the `eth0` and `en0` local network interfaces.
+If `--cluster.advertise-interfaces` isn't explicitly set, {{< param "PRODUCT_NAME" >}} infers one from the `eth0` and `en0` local network interfaces.
 {{< param "PRODUCT_NAME" >}} will fail to start if it can't determine the advertised address.
 Since Windows doesn't use the interface names `eth0` or `en0`, Windows users must explicitly pass at least one valid network interface for `--cluster.advertise-interfaces` or a value for `--cluster.advertise-address`.
 
-The comma-separated list of addresses provided in `--cluster.join-addresses` can either be IP addresses or DNS names to lookup (supports SRV and A/AAAA records).
-In both cases, the port number can be specified with a `:` suffix. If ports are not provided, default of the port used for the HTTP listener is used.
-If you do not provide the port number explicitly, you must ensure that all instances use the same port for the HTTP listener.
+The comma-separated list of addresses provided in `--cluster.join-addresses` can either be IP addresses or DNS names to look up (supports SRV and A/AAAA records).
+In both cases, the port number can be specified with a `:` suffix. If ports aren't provided, the port used for the HTTP listener is used by default.
+If you don't provide the port number explicitly, you must ensure that all instances use the same port for the HTTP listener.
 
 The `--cluster.enable-tls` flag can be set to enable TLS for peer-to-peer communications.
 Additional arguments are required to configure the TLS client, including the CA certificate, the TLS certificate, the key, and the server name.
@@ -134,17 +132,17 @@ To disable this behavior, set the `--cluster.rejoin-interval` flag to `"0s"`.
 
 Discovering peers using the `--cluster.join-addresses` and `--cluster.discover-peers` flags only happens on startup.
 After that, cluster nodes depend on gossiping messages with each other to converge on the cluster's state.
 
-The first node that is used to bootstrap a new cluster (also known as the "seed node") can either omit the flags that specify peers to join or can try to connect to itself.
+The first node that's used to bootstrap a new cluster (also known as the "seed node") can either omit the flags that specify peers to join or can try to connect to itself.
 
-To join or rejoin a cluster, {{< param "PRODUCT_NAME" >}} will try to connect to a certain number of peers limited by the `--cluster.max-join-peers` flag.
+To join or rejoin a cluster, {{< param "PRODUCT_NAME" >}} tries to connect to a certain number of peers limited by the `--cluster.max-join-peers` flag.
 This flag can be useful for clusters of significant sizes because connecting to a high number of peers can be an expensive operation.
To disable this behavior, set the `--cluster.max-join-peers` flag to 0. -If the value of `--cluster.max-join-peers` is higher than the number of peers discovered, {{< param "PRODUCT_NAME" >}} will connect to all of them. +If the value of `--cluster.max-join-peers` is higher than the number of peers discovered, {{< param "PRODUCT_NAME" >}} connects to all of them. The `--cluster.name` flag can be used to prevent clusters from accidentally merging. -When `--cluster.name` is provided, nodes will only join peers who share the same cluster name value. +When `--cluster.name` is provided, nodes only join peers who share the same cluster name value. By default, the cluster name is empty, and any node that doesn't set the flag can join. -Attempting to join a cluster with a wrong `--cluster.name` will result in a "failed to join memberlist" error. +Attempting to join a cluster with a wrong `--cluster.name` results in a "failed to join memberlist" error. ### Clustering states @@ -152,7 +150,7 @@ Clustered {{< param "PRODUCT_NAME" >}}s are in one of three states: * **Viewer**: {{< param "PRODUCT_NAME" >}} has a read-only view of the cluster and isn't participating in workload distribution. * **Participant**: {{< param "PRODUCT_NAME" >}} is participating in workload distribution for components that have clustering enabled. -* **Terminating**: {{< param "PRODUCT_NAME" >}} is shutting down and will no longer assign new work to itself. +* **Terminating**: {{< param "PRODUCT_NAME" >}} is shutting down and no longer assigning new work to itself. Each {{< param "PRODUCT_NAME" >}} initially joins the cluster in the viewer state and then transitions to the participant state after the process startup completes. Each {{< param "PRODUCT_NAME" >}} then transitions to the terminating state when shutting down. @@ -166,10 +164,8 @@ The current state of a clustered {{< param "PRODUCT_NAME" >}} is shown on the cl When you use the `--config.format` command-line argument with a value other than `alloy`, {{< param "PRODUCT_NAME" >}} converts the configuration file from the source format to {{< param "PRODUCT_NAME" >}} and immediately starts running with the new configuration. This conversion uses the converter API described in the [alloy convert][] docs. -If you include the `--config.bypass-conversion-errors` command-line argument, -{{< param "PRODUCT_NAME" >}} will ignore any errors from the converter. Use this argument -with caution because the resulting conversion may not be equivalent to the -original configuration. +If you include the `--config.bypass-conversion-errors` command-line argument, {{< param "PRODUCT_NAME" >}} ignores errors from the converter. +Use this argument with caution because the resulting conversion may not be equivalent to the original configuration. Include `--config.extra-args` to pass additional command line flags from the original format to the converter. Refer to [alloy convert][] for more details on how `extra-args` work. @@ -179,7 +175,6 @@ Refer to [alloy convert][] for more details on how `extra-args` work. 
[go-discover]: https://github.com/hashicorp/go-discover
[in-memory HTTP traffic]: ../../../get-started/component_controller/#in-memory-traffic
[data collection]: ../../../data-collection/
-[support bundle]: ../../../troubleshoot/support_bundle
-[components]: ../../get-started/components/
+[support bundle]: ../../../troubleshoot/support_bundle/
[component controller]: ../../../get-started/component_controller/
[UI]: ../../../troubleshoot/debug/#clustering-page
diff --git a/docs/sources/reference/cli/tools.md b/docs/sources/reference/cli/tools.md
index 1fd402af18..8c62607dac 100644
--- a/docs/sources/reference/cli/tools.md
+++ b/docs/sources/reference/cli/tools.md
@@ -6,7 +6,7 @@ title: The tools command
weight: 400
---

-# The tools command
+# The `tools` command

The `tools` command contains command line tooling grouped by {{< param "PRODUCT_NAME" >}} component.
@@ -18,16 +18,14 @@ Utilities in this command have no backward compatibility guarantees and may chan

### prometheus.remote_write sample-stats

-Usage:
-
```shell
alloy tools prometheus.remote_write sample-stats [ ...]
```

- Replace the following:
+Replace the following:

- * _``_: One or more flags that define the input and output of the command.
- * _``_: The WAL directory.
+* _``_: One or more flags that define the input and output of the command.
+* _``_: The WAL directory.

The `sample-stats` command reads the Write-Ahead Log (WAL) specified by _``_ and collects information on metric samples within it.
@@ -37,7 +35,7 @@ For each metric discovered, `sample-stats` emits:
* The timestamp of the newest sample received for that metric.
* The total number of samples discovered for that metric.

-By default, `sample-stats` will return information for every metric in the WAL.
+By default, `sample-stats` returns information for every metric in the WAL.
You can pass the `--selector` flag to filter the reported metrics to a smaller set.

The following flag is supported:
@@ -46,8 +44,6 @@ The following flag is supported:

### prometheus.remote_write target-stats

-Usage:
-
```shell
alloy tools prometheus.remote_write target-stats --job JOB --instance INSTANCE WAL_DIRECTORY
```
@@ -65,15 +61,13 @@ The `--job` and `--instance` labels are required.

### prometheus.remote_write wal-stats

-Usage:
-
```shell
alloy tools prometheus.remote_write wal-stats 
```

- Replace the following:
+Replace the following:

- * _``_: The WAL directory.
+* _``_: The WAL directory.

The `wal-stats` command reads the Write-Ahead Log (WAL) specified by _``_ and collects general information about it.

@@ -92,4 +86,4 @@ The following information is reported:
Additionally, `wal-stats` reports per-target information, where a target is defined as a unique combination of the `job` and `instance` label values.
For each target, `wal-stats` reports the number of series and the number of metric samples associated with that target.

-The `wal-stats` command does not support any flags.
+The `wal-stats` command doesn't support any flags.
diff --git a/docs/sources/reference/compatibility/_index.md b/docs/sources/reference/compatibility/_index.md
index 432f2b8dd6..da5471fbfb 100644
--- a/docs/sources/reference/compatibility/_index.md
+++ b/docs/sources/reference/compatibility/_index.md
@@ -17,6 +17,7 @@ The value of an attribute may matter as well as its type.
Refer to each component's documentation for more details on what values are acceptable.

For example:
+
* A Prometheus component may always expect an `"__address__"` label inside a list of targets.
* A `string` argument may only accept certain values like "traceID" or "spanID".
{{< /admonition >}}

@@ -117,9 +118,9 @@ The following components, grouped by namespace, _export_ Targets.

-
### Targets Consumers
+

The following components, grouped by namespace, _consume_ Targets.

@@ -155,7 +156,6 @@ The following components, grouped by namespace, _consume_ Targets.

-
## Prometheus `MetricsReceiver`

The Prometheus metrics are sent between components using `MetricsReceiver`s.
@@ -362,6 +362,7 @@ The following components, grouped by namespace, _consume_ OpenTelemetry `otelcol
- [otelcol.processor.transform](../components/otelcol/otelcol.processor.transform)
- [otelcol.receiver.datadog](../components/otelcol/otelcol.receiver.datadog)
- [otelcol.receiver.file_stats](../components/otelcol/otelcol.receiver.file_stats)
+- [otelcol.receiver.influxdb](../components/otelcol/otelcol.receiver.influxdb)
- [otelcol.receiver.jaeger](../components/otelcol/otelcol.receiver.jaeger)
- [otelcol.receiver.kafka](../components/otelcol/otelcol.receiver.kafka)
- [otelcol.receiver.loki](../components/otelcol/otelcol.receiver.loki)
diff --git a/docs/sources/reference/components/discovery/discovery.process.md b/docs/sources/reference/components/discovery/discovery.process.md
index 1ca36e73dd..2ab527316e 100644
--- a/docs/sources/reference/components/discovery/discovery.process.md
+++ b/docs/sources/reference/components/discovery/discovery.process.md
@@ -111,6 +111,7 @@ The following arguments are supported:
| `commandline` | `bool` | A flag to enable discovering `__meta_process_commandline` label. | true | no |
| `uid` | `bool` | A flag to enable discovering `__meta_process_uid` label. | true | no |
| `username` | `bool` | A flag to enable discovering `__meta_process_username` label. | true | no |
+| `cgroup_path` | `bool` | A flag to enable discovering `__meta_cgroup_path` label. | false | no |
| `container_id` | `bool` | A flag to enable discovering `__container_id__` label. | true | no |

## Exported fields
@@ -129,6 +130,7 @@ Each target includes the following labels:
* `__meta_process_commandline`: The process command line. Taken from `/proc//cmdline`.
* `__meta_process_uid`: The process UID. Taken from `/proc//status`.
* `__meta_process_username`: The process username. Taken from `__meta_process_uid` and `os/user/LookupID`.
+* `__meta_cgroup_path`: The cgroup path under which the process is running. In the case of cgroups v1, this label includes all the controller paths delimited by `|`.
* `__container_id__`: The container ID. Taken from `/proc//cgroup`. If the process is not running in a container, this label is not set.

## Component health
@@ -157,6 +159,7 @@ discovery.process "all" {
    commandline = true
    username = true
    uid = true
+   cgroup_path = true
    container_id = true
  }
}
@@ -187,6 +190,34 @@ discovery.process "all" {
}
}

+### Example discovering processes on the local host based on `cgroups` path
+
+The following example configuration shows you how to discover processes running under systemd services on the local host.
+
+```alloy
+discovery.process "all" {
+  refresh_interval = "60s"
+  discover_config {
+    cwd = true
+    exe = true
+    commandline = true
+    username = true
+    uid = true
+    cgroup_path = true
+    container_id = true
+  }
+}
+
+discovery.relabel "systemd_services" {
+  targets = discovery.process.all.targets
+  // Only keep the targets that correspond to systemd services
+  rule {
+    action        = "keep"
+    regex         = "^.*/([a-zA-Z0-9-_]+).service(?:.*$)"
+    source_labels = ["__meta_cgroup_path"]
+  }
+}
+
 ```
diff --git a/docs/sources/reference/components/otelcol/otelcol.processor.tail_sampling.md b/docs/sources/reference/components/otelcol/otelcol.processor.tail_sampling.md
index de0e94fc04..3561e807d6 100644
--- a/docs/sources/reference/components/otelcol/otelcol.processor.tail_sampling.md
+++ b/docs/sources/reference/components/otelcol/otelcol.processor.tail_sampling.md
@@ -12,17 +12,10 @@ title: otelcol.processor.tail_sampling
policies. All spans for a given trace _must_ be received by the same collector
instance for effective sampling decisions.

-The `tail_sampling` component uses both soft and hard limits, where the hard limit
-is always equal or larger than the soft limit. When memory usage goes above the
-soft limit, the processor component drops data and returns errors to the
-preceding components in the pipeline. When usage exceeds the hard
-limit, the processor forces a garbage collection in order to try and free
-memory. When usage is below the soft limit, no data is dropped and no forced
-garbage collection is performed.
-
-> **Note**: `otelcol.processor.tail_sampling` is a wrapper over the upstream
-> OpenTelemetry Collector Contrib `tail_sampling` processor. Bug reports or feature
-> requests will be redirected to the upstream repository, if necessary.
+{{< admonition type="note" >}}
+`otelcol.processor.tail_sampling` is a wrapper over the upstream OpenTelemetry Collector Contrib `tail_sampling` processor.
+Bug reports or feature requests will be redirected to the upstream repository, if necessary.
+{{< /admonition >}}

Multiple `otelcol.processor.tail_sampling` components can be specified by giving them different labels.
diff --git a/docs/sources/reference/components/otelcol/otelcol.receiver.influxdb.md b/docs/sources/reference/components/otelcol/otelcol.receiver.influxdb.md
new file mode 100644
index 0000000000..465eb67e20
--- /dev/null
+++ b/docs/sources/reference/components/otelcol/otelcol.receiver.influxdb.md
@@ -0,0 +1,167 @@
+---
+canonical: https://grafana.com/docs/alloy/latest/reference/components/otelcol/otelcol.receiver.influxdb/
+description: Learn about otelcol.receiver.influxdb
+title: otelcol.receiver.influxdb
+---
+
+# otelcol.receiver.influxdb
+
+`otelcol.receiver.influxdb` receives InfluxDB metrics, converts them into OpenTelemetry (OTEL) format, and forwards them to other `otelcol.*` components over the network.
+
+You can specify multiple `otelcol.receiver.influxdb` components by giving them different labels.
+
+## Usage
+
+```alloy
+otelcol.receiver.influxdb "influxdb_metrics" {
+  endpoint = "localhost:8086" // InfluxDB metrics ingestion endpoint
+
+  output {
+    metrics = [...]
+  }
+}
+```
+
+## Arguments
+
+`otelcol.receiver.influxdb` supports the following arguments:
+
+| Name | Type | Description | Default | Required |
+| ------------------------ | -------------- | --------------------------------------------------------------- | ---------------------------------------------------------- | -------- |
+| `endpoint` | `string` | `host:port` to listen for traffic on. | `"localhost:8086"` | no |
| `"localhost:8086"` | no | +| `max_request_body_size` | `string` | Maximum request body size the server will allow. | `20MiB` | no | +| `include_metadata` | `boolean` | Propagate incoming connection metadata to downstream consumers. | | no | +| `compression_algorithms` | `list(string)` | A list of compression algorithms the server can accept. | `["", "gzip", "zstd", "zlib", "snappy", "deflate", "lz4"]` | no | + +By default, `otelcol.receiver.influxdb` listens for HTTP connections on `localhost`. +To expose the HTTP server to other machines on your network, configure `endpoint` with the IP address to listen on, or `0.0.0.0:8086` to listen on all network interfaces. + +## Blocks + +The following blocks are supported inside the definition of `otelcol.receiver.influxdb`: + +| Hierarchy | Block | Description | Required | +| ------------- | ----------------- | ----------------------------------------------------- | -------- | +| tls | [tls][] | Configures TLS for the HTTP server. | no | +| cors | [cors][] | Configures CORS for the HTTP server. | no | +| debug_metrics | [debug_metrics][] | Configures the metrics that this component generates. | no | +| output | [output][] | Configures where to send received metrics. | yes | + +[tls]: #tls-block +[cors]: #cors-block +[debug_metrics]: #debug_metrics-block +[output]: #output-block + +### tls block + +The `tls` block configures TLS settings used for a server. If the `tls` block +isn't provided, TLS won't be used for connections to the server. + +{{< docs/shared lookup="reference/components/otelcol-tls-server-block.md" source="alloy" version="" >}} + +### cors block + +The `cors` block configures CORS settings for an HTTP server. + +The following arguments are supported: + +| Name | Type | Description | Default | Required | +| ----------------- | -------------- | ---------------------------------------- | ---------------------- | -------- | +| `allowed_origins` | `list(string)` | Allowed values for the `Origin` header. | | no | +| `allowed_headers` | `list(string)` | Accepted headers from CORS requests. | `["X-Requested-With"]` | no | +| `max_age` | `number` | Configures the `Access-Control-Max-Age`. | | no | + +The `allowed_headers` argument specifies which headers are acceptable from a +CORS request. The following headers are always implicitly allowed: + +* `Accept` +* `Accept-Language` +* `Content-Type` +* `Content-Language` + +If `allowed_headers` includes `"*"`, all headers are permitted. + +### debug_metrics block + +{{< docs/shared lookup="reference/components/otelcol-debug-metrics-block.md" source="alloy" version="" >}} + +### output block + +{{< docs/shared lookup="reference/components/output-block.md" source="alloy" version="" >}} + +## Exported fields + +`otelcol.receiver.influxdb` doesn't export any fields. + +## Component health + +`otelcol.receiver.influxdb` is only reported as unhealthy if given an invalid configuration. + +## Debug information + +`otelcol.receiver.influxdb` doesn't expose any component-specific debug information. 
+ +## Example + +This example forwards received telemetry through a batch processor before finally sending it to an OTLP-capable endpoint: + +```alloy +otelcol.receiver.influxdb "influxdb_metrics" { + output { + metrics = [otelcol.processor.batch.default.input] + } +} + +otelcol.processor.batch "default" { + output { + metrics = [otelcol.exporter.otlp.default.input] + } +} + +otelcol.exporter.otlp "default" { + client { + endpoint = sys.env("OTLP_ENDPOINT") + } +} +``` + +This example forwards received telemetry to Prometheus Remote Write (Mimir): + +```alloy +otelcol.receiver.influxdb "influxdb_metrics" { + output { + metrics = [otelcol.exporter.prometheus.influx_output.input] // Forward metrics to Prometheus exporter + } +} + +otelcol.exporter.prometheus "influx_output" { + forward_to = [prometheus.remote_write.mimir.receiver] // Forward metrics to Prometheus remote write (Mimir) +} + +prometheus.remote_write "mimir" { + endpoint { + url = "https://prometheus-xxx.grafana.net/api/prom/push" + + basic_auth { + username = "xxxxx" + password = "xxxx==" + } + } +} +``` + + + +## Compatible components + +`otelcol.receiver.influxdb` can accept arguments from the following components: + +- Components that export [OpenTelemetry `otelcol.Consumer`](../../../compatibility/#opentelemetry-otelcolconsumer-exporters) + + +{{< admonition type="note" >}} +Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. +Refer to the linked documentation for more details. +{{< /admonition >}} + + diff --git a/docs/sources/reference/components/prometheus/prometheus.exporter.windows.md b/docs/sources/reference/components/prometheus/prometheus.exporter.windows.md index 9e75737c34..816b15828f 100644 --- a/docs/sources/reference/components/prometheus/prometheus.exporter.windows.md +++ b/docs/sources/reference/components/prometheus/prometheus.exporter.windows.md @@ -245,8 +245,11 @@ By default, `text_file_directory` is set to the `textfile_inputs` directory in t For example, if {{< param "PRODUCT_NAME" >}} is installed in `C:\Program Files\GrafanaLabs\Alloy\`, the default will be `C:\Program Files\GrafanaLabs\Alloy\textfile_inputs`. -When `text_file_directory` is set, only files with the extension `.prom` inside the specified directory are read. Each `.prom` file found must end with an empty line feed to work properly. +When `text_file_directory` is set, only files with the extension `.prom` inside the specified directory are read. +{{< admonition type="note" >}} +The `.prom` files must end with an empty line feed for the component to recognize and read them. 
+{{< /admonition >}} ## Exported fields diff --git a/docs/sources/reference/config-blocks/declare.md b/docs/sources/reference/config-blocks/declare.md index 56e7638610..82c993d82e 100644 --- a/docs/sources/reference/config-blocks/declare.md +++ b/docs/sources/reference/config-blocks/declare.md @@ -71,5 +71,5 @@ prometheus.remote_write "example" { [argument]: ../argument/ [export]: ../export/ [declare]: ../declare/ -[import]: ../../../get-started/modules/#importing-modules +[import]: ../../../get-started/modules/#import-modules [custom component]: ../../../get-started/custom_components/ diff --git a/docs/sources/reference/config-blocks/http.md b/docs/sources/reference/config-blocks/http.md index fcf6a986cf..f8ffb9acf8 100644 --- a/docs/sources/reference/config-blocks/http.md +++ b/docs/sources/reference/config-blocks/http.md @@ -41,8 +41,8 @@ tls > windows_certificate_filter > server | [server][] | Con The `tls` block configures TLS settings for the HTTP server. {{< admonition type="warning" >}} -If you add the `tls` block and reload the configuration when {{< param "PRODUCT_NAME" >}} is running, existing connections will continue communicating over plaintext. -Similarly, if you remove the `tls` block and reload the configuration when {{< param "PRODUCT_NAME" >}} is running, existing connections will continue communicating over TLS. +If you add the `tls` block and reload the configuration when {{< param "PRODUCT_NAME" >}} is running, existing connections continue communicating over plaintext. +Similarly, if you remove the `tls` block and reload the configuration when {{< param "PRODUCT_NAME" >}} is running, existing connections continue communicating over TLS. To ensure all connections use TLS, configure the `tls` block before you start {{< param "PRODUCT_NAME" >}}. {{< /admonition >}} @@ -70,54 +70,54 @@ The following pairs of arguments are mutually exclusive, and only one may be con * `client_ca_pem` and `client_ca_file` The `client_auth_type` argument determines whether to validate client certificates. -The default value, `NoClientCert`, indicates that the client certificate is not validated. +The default value, `NoClientCert`, indicates that the client certificate isn't validated. The `client_ca_pem` and `client_ca_file` arguments may only be configured when `client_auth_type` is not `NoClientCert`. The following values are accepted for `client_auth_type`: * `NoClientCert`: client certificates are neither requested nor validated. -* `RequestClientCert`: requests clients to send an optional certificate. Certificates provided by clients are not validated. -* `RequireAnyClientCert`: requires at least one certificate from clients. Certificates provided by clients are not validated. +* `RequestClientCert`: requests clients to send an optional certificate. Certificates provided by clients aren't validated. +* `RequireAnyClientCert`: requires at least one certificate from clients. Certificates provided by clients aren't validated. * `VerifyClientCertIfGiven`: requests clients to send an optional certificate. If a certificate is sent, it must be valid. * `RequireAndVerifyClientCert`: requires clients to send a valid certificate. The `client_ca_pem` or `client_ca_file` arguments may be used to perform client certificate validation. -These arguments may only be provided when `client_auth_type` is not set to `NoClientCert`. +These arguments may only be provided when `client_auth_type` isn't set to `NoClientCert`. The `cipher_suites` argument determines what cipher suites to use. 
If you don't provide cipher suites, a default list is used.
The set of cipher suites specified may be from the following:

-| Cipher | Allowed in BoringCrypto builds |
-| ----------------------------------------------- | ------------------------------ |
-| `TLS_RSA_WITH_AES_128_CBC_SHA` | no |
-| `TLS_RSA_WITH_AES_256_CBC_SHA` | no |
-| `TLS_RSA_WITH_AES_128_GCM_SHA256` | yes |
-| `TLS_RSA_WITH_AES_256_GCM_SHA384` | yes |
-| `TLS_AES_128_GCM_SHA256` | no |
-| `TLS_AES_256_GCM_SHA384` | no |
-| `TLS_CHACHA20_POLY1305_SHA256` | no |
-| `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA` | no |
-| `TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA` | no |
-| `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA` | no |
-| `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA` | no |
-| `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256` | yes |
-| `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384` | yes |
-| `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256` | yes |
-| `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384` | yes |
-| `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256` | no |
-| `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256` | no |
+Cipher | Allowed in BoringCrypto builds
+------------------------------------------------|-------------------------------
+`TLS_RSA_WITH_AES_128_CBC_SHA` | no
+`TLS_RSA_WITH_AES_256_CBC_SHA` | no
+`TLS_RSA_WITH_AES_128_GCM_SHA256` | yes
+`TLS_RSA_WITH_AES_256_GCM_SHA384` | yes
+`TLS_AES_128_GCM_SHA256` | no
+`TLS_AES_256_GCM_SHA384` | no
+`TLS_CHACHA20_POLY1305_SHA256` | no
+`TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA` | no
+`TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA` | no
+`TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA` | no
+`TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA` | no
+`TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256` | yes
+`TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384` | yes
+`TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256` | yes
+`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384` | yes
+`TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256` | no
+`TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256` | no

The `curve_preferences` argument determines the set of elliptic curves to prefer during a handshake in preference order.
If not provided, a default list is used.
The set of elliptic curves specified may be from the following:

-| Curve | Allowed in BoringCrypto builds |
-| ----------- | ------------------------------ |
-| `CurveP256` | yes |
-| `CurveP384` | yes |
-| `CurveP521` | yes |
-| `X25519` | no |
+Curve | Allowed in BoringCrypto builds
+------------|-------------------------------
+`CurveP256` | yes
+`CurveP384` | yes
+`CurveP521` | yes
+`X25519` | no

The `min_version` and `max_version` arguments determine the oldest and newest TLS version that's acceptable from clients.
If you don't provide the min and max TLS version, a default value is used.
@@ -129,7 +129,6 @@ The following versions are recognized:
* `TLS11` for TLS 1.1
* `TLS10` for TLS 1.0

-
### windows certificate filter block

The `windows_certificate_filter` block is used to configure retrieving certificates from the built-in Windows certificate store.
@@ -149,7 +148,6 @@ TLS min and max may not be compatible with the certificate stored in the Windows
The `windows_certificate_filter` serves the certificate even if it isn't compatible with the specified TLS version.
{{< /admonition >}}

-
### server block

The `server` block is used to find the certificate to check the signer.
@@ -163,8 +161,6 @@ Name | Type | Description
`template_id` | `string` | Server Template ID to match in ASN1 format, for example, "1.2.3". | `""` | no
`refresh_interval` | `string` | How often to check for a new server certificate. | `"5m"` | no
| `"5m"` | no - - ### client block The `client` block is used to check the certificate presented to the server. diff --git a/docs/sources/reference/config-blocks/import.file.md b/docs/sources/reference/config-blocks/import.file.md index a0accf8dff..f08674e954 100644 --- a/docs/sources/reference/config-blocks/import.file.md +++ b/docs/sources/reference/config-blocks/import.file.md @@ -43,6 +43,7 @@ The following arguments are supported: This example imports a module from a file and instantiates a custom component from the import that adds two numbers: main.alloy + ```alloy import.file "math" { filename = "module.alloy" @@ -55,6 +56,7 @@ math.add "default" { ``` module.alloy + ```alloy declare "add" { argument "a" {} @@ -66,11 +68,12 @@ declare "add" { } ``` -### Import a module in a module imported via import.git +### Import a module in a module imported via import.git -This example imports a module from a file inside of a module that is imported via [import.git][]: +This example imports a module from a file inside of a module that's imported via [import.git][]: main.alloy + ```alloy import.git "math" { repository = "https://github.com/wildum/module.git" @@ -84,8 +87,8 @@ math.add "default" { } ``` - relative_math.alloy + ```alloy import.file "lib" { filename = file.path_join(module_path, "lib.alloy") @@ -107,6 +110,7 @@ declare "add" { ``` lib.alloy + ```alloy declare "plus" { argument "a" {} @@ -118,9 +122,9 @@ declare "plus" { } ``` -### Import a module in a module imported via import.file +### Import a module in a module imported via import.file -This example imports a module from a file inside of a module that is imported via another `import.file`: +This example imports a module from a file inside of a module that's imported via another `import.file`: main.alloy @@ -136,6 +140,7 @@ math.add "default" { ``` relative_math.alloy + ```alloy import.file "lib" { filename = file.path_join(module_path, "lib.alloy") @@ -157,6 +162,7 @@ declare "add" { ``` lib.alloy + ```alloy declare "plus" { argument "a" {} @@ -168,7 +174,5 @@ declare "plus" { } ``` - - [file.path_join]: ../../stdlib/file/ -[import.git]: ../import.git/ \ No newline at end of file +[import.git]: ../import.git/ diff --git a/docs/sources/reference/config-blocks/import.git.md b/docs/sources/reference/config-blocks/import.git.md index 6aad5cd069..0145c1adbf 100644 --- a/docs/sources/reference/config-blocks/import.git.md +++ b/docs/sources/reference/config-blocks/import.git.md @@ -40,8 +40,7 @@ When provided, the `revision` attribute must be set to a valid branch, tag, or c You must set the `path` attribute to a path accessible from the repository's root. It can either be an {{< param "PRODUCT_NAME" >}} configuration file such as `FILE_NAME.alloy` or `DIR_NAME/FILE_NAME.alloy` or -a directory containing {{< param "PRODUCT_NAME" >}} configuration files such as `DIR_NAME` or `.` if the {{< param "PRODUCT_NAME" >}} configuration files are stored at the root -of the repository. +a directory containing {{< param "PRODUCT_NAME" >}} configuration files such as `DIR_NAME` or `.` if the {{< param "PRODUCT_NAME" >}} configuration files are stored at the root of the repository. If `pull_frequency` isn't `"0s"`, the Git repository is pulled for updates at the frequency specified. If it's set to `"0s"`, the Git repository is pulled once on init. 
diff --git a/docs/sources/reference/config-blocks/import.http.md b/docs/sources/reference/config-blocks/import.http.md index 552581851b..c03d2986f5 100644 --- a/docs/sources/reference/config-blocks/import.http.md +++ b/docs/sources/reference/config-blocks/import.http.md @@ -79,6 +79,7 @@ The `tls_config` block configures TLS settings for connecting to HTTPS servers. This example imports custom components from an HTTP response and instantiates a custom component for adding two numbers: module.alloy + ```alloy declare "add" { argument "a" {} @@ -91,6 +92,7 @@ declare "add" { ``` main.alloy + ```alloy import.http "math" { url = SERVER_URL @@ -102,9 +104,8 @@ math.add "default" { } ``` - [client]: #client-block [basic_auth]: #basic_auth-block [authorization]: #authorization-block [oauth2]: #oauth2-block -[tls_config]: #tls_config-block \ No newline at end of file +[tls_config]: #tls_config-block diff --git a/docs/sources/reference/config-blocks/remotecfg.md b/docs/sources/reference/config-blocks/remotecfg.md index 9807b4ec52..fba4534700 100644 --- a/docs/sources/reference/config-blocks/remotecfg.md +++ b/docs/sources/reference/config-blocks/remotecfg.md @@ -36,23 +36,23 @@ remotecfg { The following arguments are supported: -Name | Type | Description | Default | Required ------------------|----------------------|---------------------------------------------------|-------------|--------- -`url` | `string` | The address of the API to poll for configuration. | `""` | no -`id` | `string` | A self-reported ID. | `see below` | no -`attributes` | `map(string)` | A set of self-reported attributes. | `{}` | no -`poll_frequency` | `duration` | How often to poll the API for new configuration. | `"1m"` | no -`name` | `string` | A human-readable name for the collector. | `""` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no - -If the `url` is not set, then the service block is a no-op. +Name | Type | Description | Default | Required +-------------------------|---------------------|--------------------------------------------------------------------------------------------------|-------------|--------- +`url` | `string` | The address of the API to poll for configuration. | `""` | no +`id` | `string` | A self-reported ID. | `see below` | no +`attributes` | `map(string)` | A set of self-reported attributes. | `{}` | no +`poll_frequency` | `duration` | How often to poll the API for new configuration. | `"1m"` | no +`name` | `string` | A human-readable name for the collector. | `""` | no +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. 
| `true` | no
+`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no
+`proxy_url` | `string` | HTTP proxy to send requests through. | | no
+`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no
+`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no
+`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no
+
+If the `url` isn't set, then the service block is a no-op.

If not set, the self-reported `id` that {{< param "PRODUCT_NAME" >}} uses is a randomly generated, anonymous unique ID (UUID) that is stored as an `alloy_seed.json` file in the {{< param "PRODUCT_NAME" >}} storage path so that it can persist across restarts.
You can use the `name` field to set another human-friendly identifier for the specific {{< param "PRODUCT_NAME" >}} instance.
@@ -69,12 +69,13 @@ You can't override this prefix.

The `poll_frequency` must be set to at least `"10s"`.

-  At most, one of the following can be provided:
-  - [`bearer_token` argument][arguments].
-  - [`bearer_token_file` argument][arguments].
-  - [`basic_auth` block][basic_auth].
-  - [`authorization` block][authorization].
-  - [`oauth2` block][oauth2].
+At most, one of the following can be provided:
+
+* [`bearer_token` argument][arguments].
+* [`bearer_token_file` argument][arguments].
+* [`basic_auth` block][basic_auth].
+* [`authorization` block][authorization].
+* [`oauth2` block][oauth2].

{{< docs/shared lookup="reference/components/http-client-proxy-config-description.md" source="alloy" version="" >}}
diff --git a/docs/sources/reference/config-blocks/tracing.md b/docs/sources/reference/config-blocks/tracing.md
index 53a7c80b09..b81fe1afdd 100644
--- a/docs/sources/reference/config-blocks/tracing.md
+++ b/docs/sources/reference/config-blocks/tracing.md
@@ -40,10 +40,9 @@ Name | Type | Description
`sampling_fraction` | `number` | Fraction of traces to keep. | `0.1` | no
`write_to` | `list(otelcol.Consumer)` | Inputs from `otelcol` components to send traces to. | `[]` | no

-The `write_to` argument controls which components to send traces to for
-processing. The elements in the array can be any `otelcol` component that
-accept traces, including processors and exporters. When `write_to` is set
-to an empty array `[]`, all traces are dropped.
+The `write_to` argument controls which components to send traces to for processing.
+The elements in the array can be any `otelcol` component that accepts traces, including processors and exporters.
+When `write_to` is set to an empty array `[]`, all traces are dropped.

{{< admonition type="note" >}}
Any traces generated before the `tracing` block has been evaluated, such as early in the process's lifetime, are dropped.
@@ -84,7 +83,7 @@ The remote sampling strategies are retrieved from the URL specified by the `url`
Requests to the remote sampling strategies server are made through an HTTP `GET` request to the configured `url` argument.
A `service=alloy` query parameter is always added to the URL to allow the server to respond with service-specific strategies.

-The HTTP response body is read as JSON matching the schema specified by Jaeger's [`strategies.json` file][Jaeger sampling strategies].
+The HTTP response body is read as JSON matching the schema specified in the Jaeger [`strategies.json` file][Jaeger sampling strategies].
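
For orientation, a sketch of a `tracing` block that enables remote sampling. The attribute and block names follow the arguments described in this reference; the OTLP exporter label `default` and the sampling server URL are assumptions for illustration, not values from this changeset:

```alloy
tracing {
  sampling_fraction = 0.1
  write_to          = [otelcol.exporter.otlp.default.input]

  sampler {
    jaeger_remote {
      // Assumed local Jaeger sampling endpoint; replace with your server.
      url = "http://127.0.0.1:5778/sampling"
    }
  }
}
```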
The `max_operations` argument limits the number of custom span names that can have custom sampling rules.
If the remote sampling strategy exceeds the limit, sampling decisions fall back to the default sampler.
diff --git a/docs/sources/reference/stdlib/array.md b/docs/sources/reference/stdlib/array.md
index b9fb947bdc..69cb90bd61 100644
--- a/docs/sources/reference/stdlib/array.md
+++ b/docs/sources/reference/stdlib/array.md
@@ -19,7 +19,7 @@ Elements within the list can be any type.

### Examples

-```
+```alloy
> array.concat([])
[]

@@ -35,22 +35,22 @@ Elements within the list can be any type.

## array.combine_maps

-> **EXPERIMENTAL**: This is an [experimental][] feature. Experimental
-> features are subject to frequent breaking changes, and may be removed with
-> no equivalent replacement. The `stability.level` flag must be set to `experimental`
-> to use the feature.
+> **EXPERIMENTAL**: This is an [experimental][] feature.
+> Experimental features are subject to frequent breaking changes, and may be removed with no equivalent replacement.
+> The `stability.level` flag must be set to `experimental` to use the feature.

The `array.combine_maps` function allows you to join two arrays of maps if certain keys have matching values in both maps.
It's particularly useful when combining labels of targets coming from different `prometheus.discovery.*` or `prometheus.exporter.*` components.
It takes three arguments:

-* The first two arguments are a of type `list(map(string))`. The keys of the map are strings.
+* The first two arguments are of type `list(map(string))`. The keys of the map are strings.
The value for each key could be of any Alloy type such as a `string`, `integer`, `map`, or a `capsule`.
* The third input is an `array` containing strings. The strings are the keys whose value has to match for maps to be combined.

The maps that don't contain all the keys provided in the third argument will be discarded.
When maps are combined and both contain the same keys, the last value from the second argument will be used.

Pseudo function code:
-```
+
+```text
for every map in arg1:
  for every map in arg2:
    if the condition key matches in both:
@@ -73,6 +73,7 @@ for every map in arg1:
```

Examples using discovery and exporter components:
+
```alloy
> array.combine_maps(discovery.kubernetes.k8s_pods.targets, prometheus.exporter.postgres, ["instance"])

@@ -82,4 +83,4 @@
You can find more examples in the [tests][].

[tests]: https://github.com/grafana/alloy/blob/main/syntax/vm/vm_stdlib_test.go
-[experimental]: https://grafana.com/docs/release-life-cycle/ \ No newline at end of file
+[experimental]: https://grafana.com/docs/release-life-cycle/
diff --git a/docs/sources/reference/stdlib/coalesce.md b/docs/sources/reference/stdlib/coalesce.md
index 071c6ad43a..18ce4f294c 100644
--- a/docs/sources/reference/stdlib/coalesce.md
+++ b/docs/sources/reference/stdlib/coalesce.md
@@ -12,7 +12,7 @@ If no argument is non-empty or non-zero, the last argument is returned.
## Examples

-```
+```alloy
> coalesce("a", "b")
a
> coalesce("", "b")
diff --git a/docs/sources/reference/stdlib/constants.md b/docs/sources/reference/stdlib/constants.md
index 94cbf42bba..36cf5e2dfa 100644
--- a/docs/sources/reference/stdlib/constants.md
+++ b/docs/sources/reference/stdlib/constants.md
@@ -14,7 +14,7 @@ The `constants` object exposes a list of constant values about the system {{< pa

## Examples

-```
+```alloy
> constants.hostname
"my-hostname"

diff --git a/docs/sources/reference/stdlib/convert.md b/docs/sources/reference/stdlib/convert.md
index 32458a5483..97e6b5d07e 100644
--- a/docs/sources/reference/stdlib/convert.md
+++ b/docs/sources/reference/stdlib/convert.md
@@ -23,7 +23,7 @@ Strings resulting from calls to `convert.nonsensitive` are displayed in plain te

### Examples

-```
+```alloy
// Assuming `sensitive_value` is a secret:

> sensitive_value
diff --git a/docs/sources/reference/stdlib/encoding.md b/docs/sources/reference/stdlib/encoding.md
index 0dbd68a197..7e3aa84d16 100644
--- a/docs/sources/reference/stdlib/encoding.md
+++ b/docs/sources/reference/stdlib/encoding.md
@@ -15,18 +15,52 @@ The `encoding` namespace contains encoding and decoding functions.

## encoding.from_base64

-The `encoding.from_base64` function decodes a RFC4648-compliant Base64-encoded string
-into the original string.
+The `encoding.from_base64` function decodes an RFC4648-compliant Base64-encoded string into the original string.

-`encoding.from_base64` fails if the provided string argument contains invalid Base64 data.
+`encoding.from_base64` fails if the provided string argument contains invalid Base64 data.

-### Examples
+### Example

-```
+```text
> encoding.from_base64("dGFuZ2VyaW5l")
tangerine
```

+## encoding.from_URLbase64
+
+The `encoding.from_URLbase64` function decodes an RFC4648-compliant, URL-safe Base64-encoded string into the original string.
+
+`encoding.from_URLbase64` fails if the provided string argument contains invalid Base64 data.
+
+### Example
+
+```text
+> encoding.from_URLbase64("c3RyaW5nMTIzIT8kKiYoKSctPUB-")
+string123!?$*&()'-=@~
+```
+
+## encoding.to_base64
+
+The `encoding.to_base64` function encodes the original string into an RFC4648-compliant Base64-encoded string.
+
+### Example
+
+```text
+> encoding.to_base64("string123!?$*&()'-=@~")
+c3RyaW5nMTIzIT8kKiYoKSctPUB+
+```
+
+## encoding.to_URLbase64
+
+The `encoding.to_URLbase64` function encodes the original string into an RFC4648-compliant, URL-safe Base64-encoded string.
+
+### Example
+
+```text
+> encoding.to_URLbase64("string123!?$*&()'-=@~")
+c3RyaW5nMTIzIT8kKiYoKSctPUB-
+```
+
## encoding.from_json

The `encoding.from_json` function decodes a string representing JSON into an {{< param "PRODUCT_NAME" >}} value.
@@ -42,7 +76,7 @@ For example, the JSON value `{"key": "value"}` is properly represented by the st

### Examples

-```
+```alloy
> encoding.from_json("15")
15

@@ -63,12 +97,10 @@ null

## encoding.from_yaml

-The `encoding.from_yaml` function decodes a string representing YAML into an {{< param "PRODUCT_NAME" >}}
-value. `encoding.from_yaml` fails if the string argument provided cannot be parsed as
-YAML.
+The `encoding.from_yaml` function decodes a string representing YAML into an {{< param "PRODUCT_NAME" >}} value.
+`encoding.from_yaml` fails if the string argument provided can't be parsed as YAML.

-A common use case of `encoding.from_yaml` is to decode the output of a
-[`local.file`][] component to an {{< param "PRODUCT_NAME" >}} value.
+A common use case of `encoding.from_yaml` is to decode the output of a [`local.file`][] component to an {{< param "PRODUCT_NAME" >}} value.

{{< admonition type="note" >}}
Remember to escape double quotes when passing YAML string literals to `encoding.from_yaml`.
@@ -78,7 +110,7 @@ For example, the YAML value `key: "value"` is properly represented by the string

### Examples

-```
+```alloy
> encoding.from_yaml("15")
15
> encoding.from_yaml("[1, 2, 3]")
diff --git a/docs/sources/reference/stdlib/file.md b/docs/sources/reference/stdlib/file.md
index 5a8ae0efb0..57b8e90b77 100644
--- a/docs/sources/reference/stdlib/file.md
+++ b/docs/sources/reference/stdlib/file.md
@@ -15,7 +15,7 @@ The `file.path_join` function joins any number of path elements into a single pa

### Examples

-```
+```alloy
> file.path_join()
""

diff --git a/docs/sources/reference/stdlib/json_path.md b/docs/sources/reference/stdlib/json_path.md
index eeeaf96798..e642506c05 100644
--- a/docs/sources/reference/stdlib/json_path.md
+++ b/docs/sources/reference/stdlib/json_path.md
@@ -6,7 +6,7 @@ title: json_path

# json_path

-The `json_path` function lookup values using [jsonpath][] syntax.
+The `json_path` function looks up values using [`jsonpath`][jsonpath] syntax.

The function expects two strings. The first string is the JSON string used to look up values. The second string is the JSONPath expression.

@@ -20,7 +20,7 @@ A common use case of `json_path` is to decode and filter the output of a [`local

## Examples

-```
+```alloy
> json_path("{\"key\": \"value\"}", ".key")
["value"]

diff --git a/docs/sources/reference/stdlib/string.md b/docs/sources/reference/stdlib/string.md
index 1ea8ee3d60..0d1671299c 100644
--- a/docs/sources/reference/stdlib/string.md
+++ b/docs/sources/reference/stdlib/string.md
@@ -53,7 +53,7 @@ Subsequent calls without an explicit index will then proceed with `n`+1, `n`+2,
The function produces an error if the format string requests an impossible conversion or accesses more arguments than are given.
An error is also produced for an unsupported format verb.

-##### Verbs
+#### Verbs

The specification may contain the following verbs.

@@ -194,13 +194,13 @@ If the string doesn't start with the prefix, the string is returned unchanged.
"hello"
```

-## strings.trim_space
+## string.trim_space

-`strings.trim_space` removes any whitespace characters from the start and end of a string.
+`string.trim_space` removes any whitespace characters from the start and end of a string.

### Examples

```alloy
-> strings.trim_space(" hello\n\n")
+> string.trim_space(" hello\n\n")
"hello"
```
\ No newline at end of file
diff --git a/docs/sources/reference/stdlib/sys.md b/docs/sources/reference/stdlib/sys.md
index 76043af372..f9b4226a34 100644
--- a/docs/sources/reference/stdlib/sys.md
+++ b/docs/sources/reference/stdlib/sys.md
@@ -14,11 +14,11 @@ The `sys` namespace contains functions related to the system.
## sys.env

The `sys.env` function gets the value of an environment variable from the system {{< param "PRODUCT_NAME" >}} is running on.
-If the environment variable does not exist, `sys.env` returns an empty string.
+If the environment variable doesn't exist, `sys.env` returns an empty string.
### Examples -``` +```alloy > sys.env("HOME") "/home/alloy" diff --git a/docs/sources/set-up/install/binary.md b/docs/sources/set-up/install/binary.md index 077c874fd1..e6e479a53b 100644 --- a/docs/sources/set-up/install/binary.md +++ b/docs/sources/set-up/install/binary.md @@ -14,7 +14,7 @@ weight: 600 * Linux: AMD64, ARM64 * Windows: AMD64 -* macOS: AMD64 (Intel), ARM64 (Apple Silicon) +* macOS: AMD64 on Intel, ARM64 on Apple Silicon * FreeBSD: AMD64 ## Download {{% param "PRODUCT_NAME" %}} diff --git a/docs/sources/set-up/install/chef.md b/docs/sources/set-up/install/chef.md index 1c9ebbd94e..43b361a3d5 100644 --- a/docs/sources/set-up/install/chef.md +++ b/docs/sources/set-up/install/chef.md @@ -18,10 +18,9 @@ You can use Chef to install and manage {{< param "PRODUCT_NAME" >}}. - You can add the following resources to any recipe. - These tasks install {{< param "PRODUCT_NAME" >}} from the package repositories. The tasks target Linux systems from the following families: - - Debian (including Ubuntu) - - RedHat Enterprise Linux + - Debian, including Ubuntu + - RedHat Enterprise Linux, including Fedora - Amazon Linux - - Fedora ## Steps diff --git a/docs/sources/set-up/install/linux.md b/docs/sources/set-up/install/linux.md index c09fef53c5..6c7c9e0096 100644 --- a/docs/sources/set-up/install/linux.md +++ b/docs/sources/set-up/install/linux.md @@ -38,7 +38,7 @@ To install {{< param "PRODUCT_NAME" >}} on Linux, run the following commands in ```rhel-fedora wget -q -O gpg.key https://rpm.grafana.com/gpg.key sudo rpm --import gpg.key - echo -e '[grafana]\nname=grafana\nbaseurl=https://rpm.grafana.com\nrepo_gpgcheck=1\nenabled=1\ngpgcheck=1\ngpgkey=https://rpm.grafana.com/gpg.key\nsslverify=1 sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana.repo + echo -e '[grafana]\nname=grafana\nbaseurl=https://rpm.grafana.com\nrepo_gpgcheck=1\nenabled=1\ngpgcheck=1\ngpgkey=https://rpm.grafana.com/gpg.key\nsslverify=1\nsslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee /etc/yum.repos.d/grafana.repo ``` ```suse-opensuse diff --git a/docs/sources/set-up/install/windows.md b/docs/sources/set-up/install/windows.md index 3a89417666..154922816c 100644 --- a/docs/sources/set-up/install/windows.md +++ b/docs/sources/set-up/install/windows.md @@ -74,8 +74,8 @@ This includes any configuration files in the installation directory. ## Next steps -- [Run {{< param "PRODUCT_NAME" >}}][Run] -- [Configure {{< param "PRODUCT_NAME" >}}][Configure] +* [Run {{< param "PRODUCT_NAME" >}}][Run] +* [Configure {{< param "PRODUCT_NAME" >}}][Configure] [latest]: https://github.com/grafana/alloy/releases/latest [data collection]: ../../../data-collection/ diff --git a/docs/sources/set-up/migrate/from-flow.md b/docs/sources/set-up/migrate/from-flow.md index 2ac443d271..bfd6bbcd3b 100644 --- a/docs/sources/set-up/migrate/from-flow.md +++ b/docs/sources/set-up/migrate/from-flow.md @@ -48,11 +48,11 @@ You can enable functionality in _Experimental_ and _Public preview_ by setting t Before migrating, modify your Grafana Agent Flow configuration to remove or replace any unsupported components: * The "classic modules" in Grafana Agent Flow have been removed in favor of the modules introduced in v0.40: - * `module.file` is replaced by the [import.file] configuration block. - * `module.git` is replaced by the [import.git] configuration block. - * `module.http` is replaced by the [import.http] configuration block. - * `module.string` is replaced by the [import.string] configuration block. 
-* `prometheus.exporter.vsphere` is replaced by the [otelcol.receiver.vcenter] component.
+  * `module.file` is replaced by the [`import.file`][import.file] configuration block.
+  * `module.git` is replaced by the [`import.git`][import.git] configuration block.
+  * `module.http` is replaced by the [`import.http`][import.http] configuration block.
+  * `module.string` is replaced by the [`import.string`][import.string] configuration block.
+* `prometheus.exporter.vsphere` is replaced by the [`otelcol.receiver.vcenter`][otelcol.receiver.vcenter] component.

[import.file]: ../../../reference/config-blocks/import.file/
[import.git]: ../../../reference/config-blocks/import.git/
@@ -101,7 +101,7 @@ Telemetry pipelines which receive data over the network (for example, pipelines
Migrate remaining pipelines from Grafana Agent Flow to {{% param "PRODUCT_NAME" %}}:

1. Disable remaining pipelines in Grafana Agent Flow to prevent Flow and {{< param "PRODUCT_NAME" >}} from processing the same data.
-2. Configure {{< param "PRODUCT_NAME" >}} with the remaining pipelines.
+1. Configure {{< param "PRODUCT_NAME" >}} with the remaining pipelines.

{{< admonition type="note" >}}
This process results in minimal downtime as remaining pipelines are moved from Grafana Agent Flow to {{< param "PRODUCT_NAME" >}}.
diff --git a/docs/sources/set-up/migrate/from-operator.md b/docs/sources/set-up/migrate/from-operator.md
index a4227a2b8a..ddb6476e45 100644
--- a/docs/sources/set-up/migrate/from-operator.md
+++ b/docs/sources/set-up/migrate/from-operator.md
@@ -114,19 +114,19 @@ It then scrapes metrics from the targets and forwards them to your remote write e

You may need to customize this configuration further if you use additional features in your `MetricsInstance` resources.
Refer to the documentation for the relevant components for additional information:

-- [remote.kubernetes.secret][]
-- [prometheus.remote_write][]
-- [prometheus.operator.podmonitors][]
-- [prometheus.operator.servicemonitors][]
-- [prometheus.operator.probes][]
-- [prometheus.scrape][]
+- [`remote.kubernetes.secret`][remote.kubernetes.secret]
+- [`prometheus.remote_write`][prometheus.remote_write]
+- [`prometheus.operator.podmonitors`][prometheus.operator.podmonitors]
+- [`prometheus.operator.servicemonitors`][prometheus.operator.servicemonitors]
+- [`prometheus.operator.probes`][prometheus.operator.probes]
+- [`prometheus.scrape`][prometheus.scrape]

## Collect logs

The current recommendation is to create an additional DaemonSet deployment of {{< param "PRODUCT_NAME" >}} to scrape logs.

> {{< param "PRODUCT_NAME" >}} has components that can scrape Pod logs directly from the Kubernetes API without needing a DaemonSet deployment.
-> These are still considered experimental, but if you would like to try them, see the documentation for [loki.source.kubernetes][] and [loki.source.podlogs][].
+> These are still considered experimental, but if you would like to try them, see the documentation for [`loki.source.kubernetes`][loki.source.kubernetes] and [`loki.source.podlogs`][loki.source.podlogs].
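>
> A minimal sketch of that experimental approach, assuming a `loki.write` component labeled `default` (the labels are placeholders, not values from this changeset):
>
> ```alloy
> loki.source.podlogs "example" {
>   forward_to = [loki.write.default.receiver]
> }
> ```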
These values are close to what Grafana Agent Operator deploys for logs: diff --git a/docs/sources/set-up/migrate/from-otelcol.md b/docs/sources/set-up/migrate/from-otelcol.md index 66144d885b..1d55789c6d 100644 --- a/docs/sources/set-up/migrate/from-otelcol.md +++ b/docs/sources/set-up/migrate/from-otelcol.md @@ -19,9 +19,9 @@ This topic describes how to: ## Components used in this topic -* [otelcol.receiver.otlp][] -* [otelcol.processor.memory_limiter][] -* [otelcol.exporter.otlp][] +* [`otelcol.receiver.otlp`][otelcol.receiver.otlp] +* [`otelcol.processor.memory_limiter`][otelcol.processor.memory_limiter] +* [`otelcol.exporter.otlp`][otelcol.exporter.otlp] ## Before you begin @@ -47,7 +47,7 @@ In this task, you use the [convert][] CLI command to output a {{< param "PRODUCT * _``_: The full path to the OpenTelemetry Collector configuration. * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. -1. [Run][] {{< param "PRODUCT_NAME" >}} using the new {{< param "PRODUCT_NAME" >}} configuration from _``_: +1. [Run][run_cli] {{< param "PRODUCT_NAME" >}} using the new {{< param "PRODUCT_NAME" >}} configuration from _``_: ### Debugging @@ -98,7 +98,7 @@ This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your OpenT In this task, you use the [run][run_cli] CLI command to run {{< param "PRODUCT_NAME" >}} using an OpenTelemetry Collector configuration. -[Run][] {{< param "PRODUCT_NAME" >}} and include the command line flag `--config.format=otelcol`. +[Run][run_cli] {{< param "PRODUCT_NAME" >}} and include the command line flag `--config.format=otelcol`. Your configuration file must be a valid OpenTelemetry Collector configuration file rather than a {{< param "PRODUCT_NAME" >}} configuration file. ### Debug @@ -225,6 +225,5 @@ The following list is specific to the convert command and not {{< param "PRODUCT [Component Reference]: ../../../reference/components/ [convert]: ../../../reference/cli/convert/ [run_cli]: ../../../reference/cli/run/ -[Run]: ../../../get-started/run/ [DebuggingUI]: ../../../troubleshoot/debug/ [UI]: ../../../troubleshoot/debug/#alloy-ui diff --git a/docs/sources/set-up/migrate/from-prometheus.md b/docs/sources/set-up/migrate/from-prometheus.md index 718d4065d2..f06f5a1d29 100644 --- a/docs/sources/set-up/migrate/from-prometheus.md +++ b/docs/sources/set-up/migrate/from-prometheus.md @@ -19,8 +19,8 @@ This topic describes how to: ## Components used in this topic -* [prometheus.scrape][] -* [prometheus.remote_write][] +* [`prometheus.scrape`][prometheus.scrape] +* [`prometheus.remote_write`][prometheus.remote_write] ## Before you begin @@ -96,7 +96,7 @@ This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your Prome In this task, you use the [run][] CLI command to run {{< param "PRODUCT_NAME" >}} using a Prometheus configuration. -[Run][run alloy] {{< param "PRODUCT_NAME" >}} and include the command line flag `--config.format=prometheus`. +[Run][run] {{< param "PRODUCT_NAME" >}} and include the command line flag `--config.format=prometheus`. Your configuration file must be a valid Prometheus configuration file rather than an {{< param "PRODUCT_NAME" >}} configuration file. 
### Debug

@@ -206,7 +206,6 @@ The following list is specific to the convert command and not {{< param "PRODUCT
[Components]: ../../../get-started/components/
[convert]: ../../../reference/cli/convert/
[run]: ../../../reference/cli/run/
-[run alloy]: ../../../set-up/run/
[DebuggingUI]: ../../../troubleshoot/debug/
[configuration]: ../../../get-started/configuration-syntax/
[UI]: ../../../troubleshoot/debug/#alloy-ui
diff --git a/docs/sources/set-up/migrate/from-promtail.md b/docs/sources/set-up/migrate/from-promtail.md
index 5119baf69e..67769f9cac 100644
--- a/docs/sources/set-up/migrate/from-promtail.md
+++ b/docs/sources/set-up/migrate/from-promtail.md
@@ -19,9 +19,9 @@ This topic describes how to:

## Components used in this topic

-* [local.file_match][]
-* [loki.source.file][]
-* [loki.write][]
+* [`local.file_match`][local.file_match]
+* [`loki.source.file`][loki.source.file]
+* [`loki.write`][loki.write]

## Before you begin

@@ -30,7 +30,7 @@ This topic describes how to:

## Convert a Promtail configuration

-To fully migrate from [Promtail] to {{< param "PRODUCT_NAME" >}}, you must convert your Promtail configuration into an {{< param "PRODUCT_NAME" >}} configuration.
+To fully migrate from [Promtail][] to {{< param "PRODUCT_NAME" >}}, you must convert your Promtail configuration into an {{< param "PRODUCT_NAME" >}} configuration.
This conversion allows you to take full advantage of the many additional features available in {{< param "PRODUCT_NAME" >}}.

> In this task, you use the [convert][] CLI command to output an {{< param "PRODUCT_NAME" >}}
@@ -46,7 +46,7 @@ This conversion allows you to take full advantage of the many additional feature
* _``_: The full path to the Promtail configuration.
* _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration.

-1. [Run][run alloy] {{< param "PRODUCT_NAME" >}} using the new configuration from _``_:
+1. [Run][run] {{< param "PRODUCT_NAME" >}} using the new configuration from _``_:

### Debugging

@@ -93,7 +93,7 @@ This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your Promt

> In this task, you use the [run][] CLI command to run {{< param "PRODUCT_NAME" >}} using a Promtail configuration.

-[Run][run alloy] {{< param "PRODUCT_NAME" >}} and include the command line flag `--config.format=promtail`.
+[Run][run] {{< param "PRODUCT_NAME" >}} and include the command line flag `--config.format=promtail`.
Your configuration file must be a valid Promtail configuration file rather than an {{< param "PRODUCT_NAME" >}} configuration file.

### Debug

@@ -173,7 +173,7 @@ The following list is specific to the convert command and not {{< param "PRODUCT
* Check if you are using any extra command line arguments with Promtail that aren't present in your configuration file. For example, `-max-line-size`.
* Check if you are setting any environment variables, whether [expanded in the configuration file][] itself or consumed directly by Promtail, such as `JAEGER_AGENT_HOST`.
* In {{< param "PRODUCT_NAME" >}}, the positions file is saved at a different location.
-  Refer to the [loki.source.file][] documentation for more details.
+  Refer to the [`loki.source.file`][loki.source.file] documentation for more details.
  Check if you have any setup, for example, a Kubernetes Persistent Volume, that you must update to use the new positions path.
* Meta-monitoring metrics exposed by {{< param "PRODUCT_NAME" >}} usually match Promtail meta-monitoring metrics but use different names.
Make sure that you use the new metric names, for example, in your alerts and dashboards queries. @@ -189,7 +189,6 @@ The following list is specific to the convert command and not {{< param "PRODUCT [Components]: ../../../get-started/components/ [convert]: ../../../reference/cli/convert/ [run]: ../../../reference/cli/run/ -[run alloy]: ../../../set-up/run/ [DebuggingUI]: ../../../troubleshoot/debug/ [configuration]: ../../../get-started/configuration-syntax/ [UI]: ../../../troubleshoot/debug/#alloy-ui diff --git a/docs/sources/set-up/migrate/from-static.md b/docs/sources/set-up/migrate/from-static.md index 4315412797..231c14e0d0 100644 --- a/docs/sources/set-up/migrate/from-static.md +++ b/docs/sources/set-up/migrate/from-static.md @@ -19,15 +19,15 @@ This topic describes how to: ## Components used in this topic -* [prometheus.scrape][] -* [prometheus.remote_write][] -* [local.file_match][] -* [loki.process][] -* [loki.source.file][] -* [loki.write][] -* [otelcol.receiver.otlp][] -* [otelcol.processor.batch][] -* [otelcol.exporter.otlp][] +* [`prometheus.scrape`][prometheus.scrape] +* [`prometheus.remote_write`][prometheus.remote_write] +* [`local.file_match`][local.file_match] +* [`loki.process`][loki.process] +* [`loki.source.file`][loki.source.file] +* [`loki.write`][loki.write] +* [`otelcol.receiver.otlp`][otelcol.receiver.otlp] +* [`otelcol.processor.batch`][otelcol.processor.batch] +* [`otelcol.exporter.otlp`][otelcol.exporter.otlp] ## Before you begin diff --git a/docs/sources/shared/reference/components/azuread-block.md b/docs/sources/shared/reference/components/azuread-block.md index 461402a5c9..dcb9a70188 100644 --- a/docs/sources/shared/reference/components/azuread-block.md +++ b/docs/sources/shared/reference/components/azuread-block.md @@ -9,6 +9,7 @@ Name | Type | Description | Default | Required `cloud` | `string` | The Azure Cloud. | `"AzurePublic"` | no The supported values for `cloud` are: + * `"AzurePublic"` * `"AzureChina"` * `"AzureGovernment"` diff --git a/docs/sources/shared/reference/components/extract-field-block.md b/docs/sources/shared/reference/components/extract-field-block.md index 98f1c03ad6..9789402e3e 100644 --- a/docs/sources/shared/reference/components/extract-field-block.md +++ b/docs/sources/shared/reference/components/extract-field-block.md @@ -15,6 +15,7 @@ Name | Type | Description `tag_name` | `string` | The name of the resource attribute added to logs, metrics, or spans. | `""` | no When you don't specify the `tag_name`, a default tag name is used with the format: + * `k8s.pod.annotations.` * `k8s.pod.labels.