diff --git a/docs/sources/_index.md b/docs/sources/_index.md index b2fa7218c4..9720396191 100644 --- a/docs/sources/_index.md +++ b/docs/sources/_index.md @@ -1,75 +1,63 @@ --- aliases: -- /docs/grafana-cloud/agent/ -- /docs/grafana-cloud/monitor-infrastructure/agent/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/ -- /docs/grafana-cloud/send-data/agent/ -canonical: https://grafana.com/docs/agent/latest/ -title: Grafana Agent -description: Grafana Agent is a flexible, performant, vendor-neutral, telemetry collector +- /docs/alloy/ +canonical: https://grafana.com/docs/alloy/latest/ +title: Grafana Alloy +description: Grafana Alloy is a flexible, performant, vendor-neutral, telemetry collector weight: 350 cascade: - AGENT_RELEASE: v0.40.0 + ALLOY_RELEASE: v1.0.0 OTEL_VERSION: v0.87.0 -PRODUCT_NAME: Grafana Alloy - PRODUCT_ROOT_NAME: Grafana Alloy + PRODUCT_NAME: Grafana Alloy + PRODUCT_ROOT_NAME: Alloy --- -# Grafana Agent - -Grafana Agent is a vendor-neutral, batteries-included telemetry collector with -configuration inspired by [Terraform][]. It is designed to be flexible, -performant, and compatible with multiple ecosystems such as Prometheus and -OpenTelemetry. - -Grafana Agent is based around **components**. Components are wired together to -form programmable observability **pipelines** for telemetry collection, -processing, and delivery. +# {{% param "PRODUCT_NAME" %}} -{{< admonition type="note" >}} -This page focuses mainly on [Flow mode](https://grafana.com/docs/agent//flow/), the Terraform-inspired variant of Grafana Agent. +{{< param "PRODUCT_NAME" >}} is a vendor-neutral, batteries-included telemetry collector with configuration inspired by [Terraform][]. +It is designed to be flexible, performant, and compatible with multiple ecosystems such as Prometheus and OpenTelemetry. -For information on other variants of Grafana Agent, refer to [Introduction to Grafana Agent]({{< relref "./about.md" >}}). 
-{{< /admonition >}} +{{< param "PRODUCT_NAME" >}} is based around **components**. Components are wired together to form programmable observability **pipelines** for telemetry collection, processing, and delivery. -Grafana Agent can collect, transform, and send data to: +{{< param "PRODUCT_NAME" >}} can collect, transform, and send data to: * The [Prometheus][] ecosystem * The [OpenTelemetry][] ecosystem * The Grafana open source ecosystem ([Loki][], [Grafana][], [Tempo][], [Mimir][], [Pyroscope][]) -[Terraform]: https://terraform.io -[Prometheus]: https://prometheus.io -[OpenTelemetry]: https://opentelemetry.io -[Loki]: https://github.com/grafana/loki -[Grafana]: https://github.com/grafana/grafana -[Tempo]: https://github.com/grafana/tempo -[Mimir]: https://github.com/grafana/mimir -[Pyroscope]: https://github.com/grafana/pyroscope +## Why use {{< param "PRODUCT_NAME" >}}? -## Why use Grafana Agent? - -* **Vendor-neutral**: Fully compatible with the Prometheus, OpenTelemetry, and - Grafana open source ecosystems. -* **Every signal**: Collect telemetry data for metrics, logs, traces, and - continuous profiles. -* **Scalable**: Deploy on any number of machines to collect millions of active - series and terabytes of logs. -* **Battle-tested**: Grafana Agent extends the existing battle-tested code from - the Prometheus and OpenTelemetry Collector projects. -* **Powerful**: Write programmable pipelines with ease, and debug them using a - [built-in UI][UI]. -* **Batteries included**: Integrate with systems like MySQL, Kubernetes, and - Apache to get telemetry that's immediately useful. +* **Vendor-neutral**: Fully compatible with the Prometheus, OpenTelemetry, and Grafana open source ecosystems. +* **Every signal**: Collect telemetry data for metrics, logs, traces, and continuous profiles. +* **Scalable**: Deploy on any number of machines to collect millions of active series and terabytes of logs. 
+* **Battle-tested**: {{< param "PRODUCT_NAME" >}} extends the existing battle-tested code from the Prometheus and OpenTelemetry Collector projects.
+* **Powerful**: Write programmable pipelines with ease, and debug them using a [built-in UI][UI].
+* **Batteries included**: Integrate with systems like MySQL, Kubernetes, and Apache to get telemetry that's immediately useful.
+
+
 ## Supported platforms
 
 * Linux
@@ -94,112 +82,19 @@ Grafana Agent can collect, transform, and send data to:
 
 ## Release cadence
 
-A new minor release is planned every six weeks for the entire Grafana Agent
-project, including Static mode, the Static mode Kubernetes operator, and Flow
-mode.
+A new minor release is planned every six weeks for {{< param "PRODUCT_NAME" >}}.
 
-The release cadence is best-effort: releases may be moved forwards or backwards
-if needed. The planned release dates for future minor releases do not change if
-one minor release is moved.
+The release cadence is best-effort: releases may be moved forwards or backwards if needed.
+The planned release dates for future minor releases do not change if one minor release is moved.
 
 Patch and security releases may be created at any time.
-{{% docs/reference %}} -[variants]: "/docs/agent/ -> /docs/agent//about" -[variants]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/about" - -[Static mode]: "/docs/agent/ -> /docs/agent//static" -[Static mode]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/static" - -[Static mode Kubernetes operator]: "/docs/agent/ -> /docs/agent//operator" -[Static mode Kubernetes operator]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/operator" - -[Flow mode]: "/docs/agent/ -> /docs/agent//flow" -[Flow mode]: "/docs/grafana-cloud/ -> /docs/agent//flow" - -[UI]: "/docs/agent/ -> /docs/agent//flow/tasks/debug.md#grafana-agent-flow-ui" -[UI]: "/docs/grafana-cloud/ -> /docs/agent//flow/tasks/debug.md#grafana-agent-flow-ui" -{{% /docs/reference %}} - -# {{% param "PRODUCT_NAME" %}} - -{{< param "PRODUCT_NAME" >}} is a _component-based_ revision of {{< param "PRODUCT_ROOT_NAME" >}} with a focus on ease-of-use, -debuggability, and ability to adapt to the needs of power users. - -Components allow for reusability, composability, and focus on a single task. - -* **Reusability** allows for the output of components to be reused as the input for multiple other components. -* **Composability** allows for components to be chained together to form a pipeline. -* **Single task** means the scope of a component is limited to one narrow task and thus has fewer side effects. - -## Features - -* Write declarative configurations with a Terraform-inspired configuration - language. -* Declare components to configure parts of a pipeline. -* Use expressions to bind components together to build a programmable pipeline. -* Includes a UI for debugging the state of a pipeline. - -## Example - -```river -// Discover Kubernetes pods to collect metrics from -discovery.kubernetes "pods" { - role = "pod" -} - -// Scrape metrics from Kubernetes pods and send to a prometheus.remote_write -// component. 
-prometheus.scrape "default" { - targets = discovery.kubernetes.pods.targets - forward_to = [prometheus.remote_write.default.receiver] -} - -// Get an API key from disk. -local.file "apikey" { - filename = "/var/data/my-api-key.txt" - is_secret = true -} - -// Collect and send metrics to a Prometheus remote_write endpoint. -prometheus.remote_write "default" { - endpoint { - url = "http://localhost:9009/api/prom/push" - - basic_auth { - username = "MY_USERNAME" - password = local.file.apikey.content - } - } -} -``` - - -## {{% param "PRODUCT_NAME" %}} configuration generator - -The {{< param "PRODUCT_NAME" >}} [configuration generator](https://grafana.github.io/agent-configurator/) helps you get a head start on creating flow code. - -{{< admonition type="note" >}} -This feature is experimental, and it doesn't support all River components. -{{< /admonition >}} - -## Next steps - -* [Install][] {{< param "PRODUCT_NAME" >}}. -* Learn about the core [Concepts][] of {{< param "PRODUCT_NAME" >}}. -* Follow the [Tutorials][] for hands-on learning of {{< param "PRODUCT_NAME" >}}. -* Consult the [Tasks][] instructions to accomplish common objectives with {{< param "PRODUCT_NAME" >}}. -* Check out the [Reference][] documentation to find specific information you might be looking for. 
- -{{% docs/reference %}} -[Install]: "/docs/agent/ -> /docs/agent//flow/get-started/install/" -[Install]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/get-started/install/" -[Concepts]: "/docs/agent/ -> /docs/agent//flow/concepts/" -[Concepts]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/concepts/" -[Tasks]: "/docs/agent/ -> /docs/agent//flow/tasks/" -[Tasks]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/tasks/" -[Tutorials]: "/docs/agent/ -> /docs/agent//flow/tutorials/" -[Tutorials]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/tutorials/ -[Reference]: "/docs/agent/ -> /docs/agent//flow/reference/" -[Reference]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/reference/ -{{% /docs/reference %}} +[Terraform]: https://terraform.io +[Prometheus]: https://prometheus.io +[OpenTelemetry]: https://opentelemetry.io +[Loki]: https://github.com/grafana/loki +[Grafana]: https://github.com/grafana/grafana +[Tempo]: https://github.com/grafana/tempo +[Mimir]: https://github.com/grafana/mimir +[Pyroscope]: https://github.com/grafana/pyroscope +[UI]: ./tasks/debug/#grafana-agent-flow-ui diff --git a/docs/sources/_index.md.t b/docs/sources/_index.md.t index f54a9becae..12db322ad7 100644 --- a/docs/sources/_index.md.t +++ b/docs/sources/_index.md.t @@ -1,36 +1,35 @@ --- aliases: -- /docs/grafana-cloud/agent/ -- /docs/grafana-cloud/monitor-infrastructure/agent/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/ -- /docs/grafana-cloud/send-data/agent/ -canonical: https://grafana.com/docs/agent/latest/ -title: Grafana Agent -description: Grafana Agent is a flexible, performant, vendor-neutral, telemetry collector +- /docs/alloy/ +canonical: https://grafana.com/docs/alloy/latest/ +title: Grafana Alloy +description: Grafana Alloy is a flexible, performant, vendor-neutral, telemetry collector weight: 350 cascade: - AGENT_RELEASE: $AGENT_VERSION + ALLOY_RELEASE: 
$ALLOY_VERSION OTEL_VERSION: v0.87.0 + PRODUCT_NAME: Grafana Alloy + PRODUCT_ROOT_NAME: Alloy --- -# Grafana Agent +# {{% param "PRODUCT_NAME" %}} -Grafana Agent is a vendor-neutral, batteries-included telemetry collector with +{{< param "PRODUCT_NAME" >}} is a vendor-neutral, batteries-included telemetry collector with configuration inspired by [Terraform][]. It is designed to be flexible, performant, and compatible with multiple ecosystems such as Prometheus and OpenTelemetry. -Grafana Agent is based around **components**. Components are wired together to +{{< param "PRODUCT_NAME" >}} is based around **components**. Components are wired together to form programmable observability **pipelines** for telemetry collection, processing, and delivery. {{< admonition type="note" >}} -This page focuses mainly on [Flow mode](https://grafana.com/docs/agent//flow/), the Terraform-inspired variant of Grafana Agent. +This page focuses mainly on [Flow mode](https://grafana.com/docs/alloy//), the Terraform-inspired variant of {{< param "PRODUCT_NAME" >}}. -For information on other variants of Grafana Agent, refer to [Introduction to Grafana Agent]({{< relref "./about.md" >}}). +For information on other variants of {{< param "PRODUCT_NAME" >}}, refer to [Introduction to {{< param "PRODUCT_NAME" >}}]({{< relref "./about.md" >}}). {{< /admonition >}} -Grafana Agent can collect, transform, and send data to: +{{< param "PRODUCT_NAME" >}} can collect, transform, and send data to: * The [Prometheus][] ecosystem * The [OpenTelemetry][] ecosystem @@ -45,7 +44,7 @@ Grafana Agent can collect, transform, and send data to: [Mimir]: https://github.com/grafana/mimir [Pyroscope]: https://github.com/grafana/pyroscope -## Why use Grafana Agent? +## Why use {{% param "PRODUCT_NAME" %}}? * **Vendor-neutral**: Fully compatible with the Prometheus, OpenTelemetry, and Grafana open source ecosystems. @@ -53,7 +52,7 @@ Grafana Agent can collect, transform, and send data to: continuous profiles. 
* **Scalable**: Deploy on any number of machines to collect millions of active series and terabytes of logs. -* **Battle-tested**: Grafana Agent extends the existing battle-tested code from +* **Battle-tested**: {{< param "PRODUCT_NAME" >}} extends the existing battle-tested code from the Prometheus and OpenTelemetry Collector projects. * **Powerful**: Write programmable pipelines with ease, and debug them using a [built-in UI][UI]. @@ -62,7 +61,7 @@ Grafana Agent can collect, transform, and send data to: ## Getting started -* Choose a [variant][variants] of Grafana Agent to run. +* Choose a [variant][variants] of {{< param "PRODUCT_NAME" >}} to run. * Refer to the documentation for the variant to use: * [Static mode][] * [Static mode Kubernetes operator][] @@ -92,7 +91,7 @@ Grafana Agent can collect, transform, and send data to: ## Release cadence -A new minor release is planned every six weeks for the entire Grafana Agent +A new minor release is planned every six weeks for the entire {{< param "PRODUCT_NAME" >}} project, including Static mode, the Static mode Kubernetes operator, and Flow mode. @@ -103,20 +102,20 @@ one minor release is moved. Patch and security releases may be created at any time. 
{{% docs/reference %}} -[variants]: "/docs/agent/ -> /docs/agent//about" +[variants]: "/docs/alloy/ -> /docs/alloy//about" [variants]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/about" -[Static mode]: "/docs/agent/ -> /docs/agent//static" +[Static mode]: "/docs/alloy/ -> /docs/alloy//static" [Static mode]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/static" -[Static mode Kubernetes operator]: "/docs/agent/ -> /docs/agent//operator" +[Static mode Kubernetes operator]: "/docs/alloy/ -> /docs/alloy//operator" [Static mode Kubernetes operator]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/operator" -[Flow mode]: "/docs/agent/ -> /docs/agent//flow" -[Flow mode]: "/docs/grafana-cloud/ -> /docs/agent//flow" +[Flow mode]: "/docs/alloy/ -> /docs/alloy//flow" +[Flow mode]: "/docs/grafana-cloud/ -> /docs/alloy//flow" -[UI]: "/docs/agent/ -> /docs/agent//flow/tasks/debug.md#grafana-agent-flow-ui" -[UI]: "/docs/grafana-cloud/ -> /docs/agent//flow/tasks/debug.md#grafana-agent-flow-ui" +[UI]: "/docs/alloy/ -> /docs/alloy//tasks/debug.md#grafana-agent-flow-ui" +[UI]: "/docs/grafana-cloud/ -> /docs/alloy//tasks/debug.md#grafana-agent-flow-ui" {{% /docs/reference %}} # {{% param "PRODUCT_NAME" %}} @@ -190,14 +189,14 @@ This feature is experimental, and it doesn't support all River components. * Check out the [Reference][] documentation to find specific information you might be looking for. 
{{% docs/reference %}} -[Install]: "/docs/agent/ -> /docs/agent//flow/get-started/install/" -[Install]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/get-started/install/" -[Concepts]: "/docs/agent/ -> /docs/agent//flow/concepts/" -[Concepts]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/concepts/" -[Tasks]: "/docs/agent/ -> /docs/agent//flow/tasks/" -[Tasks]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/tasks/" -[Tutorials]: "/docs/agent/ -> /docs/agent//flow/tutorials/" -[Tutorials]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/tutorials/ -[Reference]: "/docs/agent/ -> /docs/agent//flow/reference/" -[Reference]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/flow/reference/ +[Install]: "/docs/alloy/ -> /docs/alloy//get-started/install/" +[Install]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/get-started/install/" +[Concepts]: "/docs/alloy/ -> /docs/alloy//concepts/" +[Concepts]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/concepts/" +[Tasks]: "/docs/alloy/ -> /docs/alloy//tasks/" +[Tasks]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/tasks/" +[Tutorials]: "/docs/alloy/ -> /docs/alloy//tutorials/" +[Tutorials]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/tutorials/ +[Reference]: "/docs/alloy/ -> /docs/alloy//reference/" +[Reference]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/reference/ {{% /docs/reference %}} diff --git a/docs/sources/about.md b/docs/sources/about.md index eca262408d..1501223dd3 100644 --- a/docs/sources/about.md +++ b/docs/sources/about.md @@ -1,53 +1,68 @@ --- aliases: -- ./about-agent/ -- /docs/grafana-cloud/agent/about/ -- /docs/grafana-cloud/monitor-infrastructure/agent/about/ -- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/about/ -- /docs/grafana-cloud/send-data/agent/about/ -canonical: https://grafana.com/docs/agent/latest/about/ -description: 
Grafana Agent is a flexible, performant, vendor-neutral, telemetry collector
+- ./about/
+canonical: https://grafana.com/docs/alloy/latest/about/
+description: Grafana Alloy is a flexible, performant, vendor-neutral telemetry collector
 menuTitle: Introduction
-title: Introduction to Grafana Agent
-weight: 100
+title: Introduction to Grafana Alloy
+weight: 10
 ---
 
-# Introduction to Grafana Agent
+# Introduction to {{% param "PRODUCT_NAME" %}}
 
-Grafana Agent is a flexible, high performance, vendor-neutral telemetry collector. It's fully compatible with the most popular open source observability standards such as OpenTelemetry (OTel) and Prometheus.
+{{< param "PRODUCT_NAME" >}} is a flexible, high-performance, vendor-neutral telemetry collector. It's fully compatible with the most popular open source observability standards such as OpenTelemetry (OTel) and Prometheus.
 
-Grafana Agent is available in three different variants:
+{{< param "PRODUCT_NAME" >}} is a _component-based_ revision of {{< param "PRODUCT_ROOT_NAME" >}} with a focus on ease-of-use,
+debuggability, and the ability to adapt to the needs of power users.
 
-- [Static mode][]: The original Grafana Agent.
-- [Static mode Kubernetes operator][]: The Kubernetes operator for Static mode.
-- [Flow mode][]: The new, component-based Grafana Agent.
+Components allow for reusability, composability, and focus on a single task.
-{{% docs/reference %}} -[Static mode]: "/docs/agent/ -> /docs/agent//static" -[Static mode]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/static" -[Static mode Kubernetes operator]: "/docs/agent/ -> /docs/agent//operator" -[Static mode Kubernetes operator]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/operator" -[Flow mode]: "/docs/agent/ -> /docs/agent//flow" -[Flow mode]: "/docs/grafana-cloud/ -> /docs/agent//flow" -[Prometheus]: "/docs/agent/ -> /docs/agent//flow/tasks/collect-prometheus-metrics.md" -[Prometheus]: "/docs/grafana-cloud/ -> /docs/agent//flow/tasks/collect-prometheus-metrics.md" -[OTel]: "/docs/agent/ -> /docs/agent//flow/tasks/collect-opentelemetry-data.md" -[OTel]: "/docs/grafana-cloud/ -> /docs/agent//flow/tasks/collect-opentelemetry-data.md" -[Loki]: "/docs/agent/ -> /docs/agent//flow/tasks/migrate/from-promtail.md" -[Loki]: "/docs/grafana-cloud/ -> /docs/agent//flow/tasks/migrate/from-promtail.md" -[clustering]: "/docs/agent/ -> /docs/agent//flow/concepts/clustering/_index.md" -[clustering]: "/docs/grafana-cloud/ -> /docs/agent//flow/concepts/clustering/_index.md" -[rules]: "/docs/agent/ -> /docs/agent/latest/flow/reference/components/mimir.rules.kubernetes.md" -[rules]: "/docs/grafana-cloud/ -> /docs/agent/latest/flow/reference/components/mimir.rules.kubernetes.md" -[vault]: "/docs/agent/ -> /docs/agent//flow/reference/components/remote.vault.md" -[vault]: "/docs/grafana-cloud/ -> /docs/agent//flow/reference/components/remote.vault.md" -{{% /docs/reference %}} +* **Reusability** allows for the output of components to be reused as the input for multiple other components. +* **Composability** allows for components to be chained together to form a pipeline. +* **Single task** means the scope of a component is limited to one narrow task and thus has fewer side effects. 
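The three properties above can be sketched in one small pipeline. The fragment below is illustrative only (it mirrors the component names used in the example later in this PR, not new API surface): one discovery component's output is reused by two scrape components, and both compose into a single delivery component.

```river
// A single-task component: discover Kubernetes pods.
discovery.kubernetes "pods" {
  role = "pod"
}

// Reusability: two consumers share the same discovery output.
prometheus.scrape "app" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.scrape "infra" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

// Composability: both scrape components chain into one delivery component.
prometheus.remote_write "default" {
  endpoint {
    url = "http://localhost:9009/api/prom/push"
  }
}
```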
-[Pyroscope]: https://grafana.com/docs/pyroscope/latest/configure-client/grafana-agent/go_pull -[helm chart]: https://grafana.com/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/configuration/config-k8s-helmchart -[sla]: https://grafana.com/legal/grafana-cloud-sla -[observability]: https://grafana.com/docs/grafana-cloud/monitor-applications/application-observability/setup#send-telemetry +## Features + +* Write declarative configurations with a Terraform-inspired configuration language. +* Declare components to configure parts of a pipeline. +* Use expressions to bind components together to build a programmable pipeline. +* Includes a UI for debugging the state of a pipeline. + +## Example +```river +// Discover Kubernetes pods to collect metrics from +discovery.kubernetes "pods" { + role = "pod" +} + +// Scrape metrics from Kubernetes pods and send to a prometheus.remote_write +// component. +prometheus.scrape "default" { + targets = discovery.kubernetes.pods.targets + forward_to = [prometheus.remote_write.default.receiver] +} + +// Get an API key from disk. +local.file "apikey" { + filename = "/var/data/my-api-key.txt" + is_secret = true +} + +// Collect and send metrics to a Prometheus remote_write endpoint. +prometheus.remote_write "default" { + endpoint { + url = "http://localhost:9009/api/prom/push" + + basic_auth { + username = "MY_USERNAME" + password = local.file.apikey.content + } + } +} +``` + + + +## {{% param "PRODUCT_NAME" %}} configuration generator + +The {{< param "PRODUCT_NAME" >}} [configuration generator][] helps you get a head start on creating flow code. + +{{< admonition type="note" >}} +This feature is experimental, and it doesn't support all River components. +{{< /admonition >}} + +## Next steps + +* [Install][] {{< param "PRODUCT_NAME" >}}. +* Learn about the core [Concepts][] of {{< param "PRODUCT_NAME" >}}. +* Follow the [Tutorials][] for hands-on learning of {{< param "PRODUCT_NAME" >}}. 
+* Consult the [Tasks][] instructions to accomplish common objectives with {{< param "PRODUCT_NAME" >}}.
+* Check out the [Reference][] documentation to find specific information you might be looking for.
 
-## Choose which variant of Grafana Agent to run
+[configuration generator]: https://grafana.github.io/agent-configurator/
+[Install]: ../get-started/install/
+[Concepts]: ../concepts/
+[Tasks]: ../tasks/
+[Tutorials]: ../tutorials/
+[Reference]: ../reference/
+
+
+## BoringCrypto
+
+[BoringCrypto][] is an **EXPERIMENTAL** feature for building {{< param "PRODUCT_NAME" >}}
+binaries and images with BoringCrypto enabled. Builds and Docker images for Linux arm64/amd64 are made available.
+
+[BoringCrypto]: https://pkg.go.dev/crypto/internal/boring
diff --git a/docs/sources/data-collection.md b/docs/sources/data-collection.md
index 80fbd874cd..e90d9e63c0 100644
--- a/docs/sources/data-collection.md
+++ b/docs/sources/data-collection.md
@@ -1,36 +1,30 @@
 ---
 aliases:
 - ./data-collection/
-- /docs/grafana-cloud/agent/data-collection/
-- /docs/grafana-cloud/monitor-infrastructure/agent/data-collection/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/data-collection/
-- /docs/grafana-cloud/send-data/agent/data-collection/
-canonical: https://grafana.com/docs/agent/latest/data-collection/
-description: Grafana Agent data collection
+canonical: https://grafana.com/docs/alloy/latest/data-collection/
+description: Grafana Alloy data collection
 menuTitle: Data collection
-title: Grafana Agent data collection
-weight: 500
+title: Grafana Alloy data collection
+weight: 900
 ---
 
-# Grafana Agent Data collection
+# {{% param "PRODUCT_NAME" %}} data collection
 
-By default, Grafana Agent sends anonymous but uniquely identifiable usage information from
-your Grafana Agent instance to Grafana Labs. These statistics are sent to `stats.grafana.org`.
+By default, {{< param "PRODUCT_NAME" >}} sends anonymous but uniquely identifiable usage information from your {{< param "PRODUCT_NAME" >}} instance to Grafana Labs.
+These statistics are sent to `stats.grafana.org`.
 
-Statistics help us better understand how Grafana Agent is used. This helps us prioritize features and documentation.
+Statistics help us better understand how {{< param "PRODUCT_NAME" >}} is used. This helps us prioritize features and documentation.
 
 The usage information includes the following details:
 
 * A randomly generated, anonymous unique ID (UUID).
 * Timestamp of when the UID was first generated.
 * Timestamp of when the report was created (by default, every four hours).
-* Version of running Grafana Agent.
-* Operating system Grafana Agent is running on.
-* System architecture Grafana Agent is running on.
-* List of enabled feature flags ([Static] mode only).
-* List of enabled integrations ([Static] mode only).
-* List of enabled [components][] ([Flow] mode only).
-* Method used to deploy Grafana Agent, for example Docker, Helm, RPM, or Operator.
+* Version of running {{< param "PRODUCT_NAME" >}}.
+* Operating system {{< param "PRODUCT_NAME" >}} is running on.
+* System architecture {{< param "PRODUCT_NAME" >}} is running on.
+* List of enabled [components][].
+* Method used to deploy {{< param "PRODUCT_NAME" >}}, for example Docker, Helm, RPM, or Operator.
 
 This list may change over time. All newly reported data is documented in the CHANGELOG.
@@ -38,13 +32,5 @@ This list may change over time. All newly reported data is documented in the CHA
 
 You can use the `-disable-reporting` [command line flag][] to disable the reporting and opt-out of the data collection.
-{{% docs/reference %}}
-[command line flag]: "/docs/agent/ -> /docs/agent//flow/reference/cli/run.md"
-[command line flag]: "/docs/grafana-cloud/ -> /docs/agent//flow/reference/cli/run.md"
-[components]: "/docs/agent/ -> /docs/agent//flow/concepts/components.md"
-[components]: "/docs/grafana-cloud/ -> /docs/agent//flow/reference/cli/run.md"
-[Static]: "/docs/agent/ -> /docs/agent//static"
-[Static]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/send-data/agent/static
-[Flow]: "/docs/agent/ -> /docs/agent//flow"
-[Flow]: "/docs/grafana-cloud/ -> /docs/agent//flow"
-{{% /docs/reference %}}
\ No newline at end of file
+[components]: ../concepts/components
+[command line flag]: ../reference/cli/run
diff --git a/docs/sources/release-notes.md b/docs/sources/release-notes.md
index 12d1578685..6491ec2e47 100644
--- a/docs/sources/release-notes.md
+++ b/docs/sources/release-notes.md
@@ -1,14 +1,10 @@
 ---
 aliases:
-- ./upgrade-guide/
-- /docs/grafana-cloud/agent/flow/release-notes/
-- /docs/grafana-cloud/monitor-infrastructure/agent/flow/release-notes/
-- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/release-notes/
-- /docs/grafana-cloud/send-data/agent/flow/release-notes/
-canonical: https://grafana.com/docs/agent/latest/flow/release-notes/
-description: Release notes for Grafana Agent Flow
+- ./release-notes/
+canonical: https://grafana.com/docs/alloy/latest/release-notes/
+description: Release notes for Grafana Alloy
 menuTitle: Release notes
-title: Release notes for Grafana Agent Flow
+title: Release notes for Grafana Alloy
 weight: 999
 ---
 
@@ -16,619 +12,6 @@ weight: 999
 
 The release notes provide information about deprecations and breaking changes in {{< param "PRODUCT_NAME" >}}.
 
-For a complete list of changes to {{< param "PRODUCT_ROOT_NAME" >}}, with links to pull requests and related issues when available, refer to the [Changelog](https://github.com/grafana/agent/blob/main/CHANGELOG.md).
+For a complete list of changes to {{< param "PRODUCT_ROOT_NAME" >}}, with links to pull requests and related issues when available, refer to the [Changelog](https://github.com/grafana/agent/blob/main/CHANGELOG.md).
-{{< admonition type="note" >}}
-These release notes are specific to {{< param "PRODUCT_NAME" >}}.
-Other release notes for the different {{< param "PRODUCT_ROOT_NAME" >}} variants are contained on separate pages:
-
-* [Static mode release notes][release-notes-static]
-* [Static mode Kubernetes operator release notes][release-notes-operator]
-
-[release-notes-static]: {{< relref "../static/release-notes.md" >}}
-[release-notes-operator]: {{< relref "../operator/release-notes.md" >}}
-{{< /admonition >}}
-
-## v0.40
-
-### Breaking change: Prohibit the configuration of services within modules.
-
-Previously it was possible to configure the HTTP service via the [HTTP config block](https://grafana.com/docs/agent/v0.39/flow/reference/config-blocks/http/) inside of a module.
-This functionality is now only available in the main configuration.
-
-### Breaking change: Change the default value of `disable_high_cardinality_metrics` to `true`.
-
-The `disable_high_cardinality_metrics` configuration argument is used by `otelcol.exporter` components such as `otelcol.exporter.otlp`.
-If you need to see high cardinality metrics containing labels such as IP addresses and port numbers, you now have to explicitly set `disable_high_cardinality_metrics` to `false`.
-
-### Breaking change: Rename component `prometheus.exporter.agent` to `prometheus.exporter.self`
-
-The name `prometheus.exporter.agent` is potentially ambiguous and can be misinterpreted as an exporter for Prometheus Agent.
-The new name reflects the component's true purpose as an exporter of the process's own metrics.
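The `prometheus.exporter.agent` rename above is mechanical. As an illustrative sketch (not taken from the changelog; the empty block body assumes the component needs no arguments):

```river
// Old component name:
// prometheus.exporter.agent "example" { }

// New component name; configuration is otherwise unchanged:
prometheus.exporter.self "example" { }
```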
- -## v0.39 - -### Breaking change: `otelcol.receiver.prometheus` will drop all `otel_scope_info` metrics when converting them to OTLP - -* If the `otel_scope_info` metric has the `otel_scope_name` and `otel_scope_version` labels, - their values are used to set the OTLP Instrumentation Scope name and version, respectively. -* Labels for `otel_scope_info` metrics other than `otel_scope_name` and `otel_scope_version` - are added as scope attributes with the matching name and version. - -### Breaking change: label for `target` block in `prometheus.exporter.blackbox` is removed - -Previously in `prometheus.exporter.blackbox`, the `target` block requires a label which is used in job's name. -In this version, user needs to be specify `name` attribute instead, which allow less restrictive naming. - -Old configuration example: - -```river -prometheus.exporter.blackbox "example" { - config_file = "blackbox_modules.yml" - - target "grafana" { - address = "http://grafana.com" - module = "http_2xx" - labels = { - "env": "dev", - } - } -} -``` - -New configuration example: - -```river -prometheus.exporter.blackbox "example" { - config_file = "blackbox_modules.yml" - - target { - name = "grafana" - address = "http://grafana.com" - module = "http_2xx" - labels = { - "env": "dev", - } - } -} -``` - -## v0.38 - -### Breaking change: `otelcol.exporter.jaeger` component removed - -The deprecated `otelcol.exporter.jaeger` component has been removed. To send -traces to Jaeger, use `otelcol.exporter.otlp` and a version of Jaeger that -supports OTLP. - -## v0.37 - -### Breaking change: Renamed `non_indexed_labels` Loki processing stage to `structured_metadata`. - -If you use the Loki processing stage in your {{< param "PRODUCT_NAME" >}} configuration, you must rename the `non_indexed_labels` pipeline stage definition to `structured_metadata`. 
- -Old configuration example: - -```river -stage.non_indexed_labels { - values = {"app" = ""} -} -``` - -New configuration example: -```river -stage.structured_metadata { - values = {"app" = ""} -} -``` - -### Breaking change: `otelcol.exporter.prometheus` scope labels updated - -There are 2 changes to the way scope labels work for this component. - -* Previously, the `include_scope_info` argument would trigger including -`otel_scope_name` and `otel_scope_version` in metrics. This is now defaulted -to `true` and controlled via the `include_scope_labels` argument. - -* A bugfix was made to rename `otel_scope_info` metric labels from -`name` to `otel_scope_name` and `version` to `otel_scope_version`. This is -now correct with the OTLP Instrumentation Scope specification. - -### Breaking change: `prometheus.exporter.unix` now requires a label. - -Previously the exporter was a singleton and did not require a label. The exporter now can be used multiple times and -needs a label. - -Old configuration example: - -```river -prometheus.exporter.unix { /* ... */ } -``` - -New configuration example: - -```river -prometheus.exporter.unix "example" { /* ... */ } -``` - -## v0.36 - -### Breaking change: The default value of `retry_on_http_429` is changed to `true` for the `queue_config` in `prometheus.remote_write` - -The default value of `retry_on_http_429` is changed from `false` to `true` for the `queue_config` block in `prometheus.remote_write` -so that {{< param "PRODUCT_ROOT_NAME" >}} can retry sending and avoid data being lost for metric pipelines by default. - -* If you set the `retry_on_http_429` explicitly - no action is required. -* If you do not set `retry_on_http_429` explicitly and you do *not* want to retry on HTTP 429, make sure you set it to `false` as you upgrade to this new version. 
-
-### Breaking change: `loki.source.file` no longer automatically extracts logs from compressed files
-
-The `loki.source.file` component no longer automatically detects and decompresses logs from compressed files.
-This was an undocumented behavior.
-
-The file-extension-based detection of compressed files has been replaced by a new configuration block that explicitly enables decompression and specifies the compression format.
-By default, decompression of files is disabled entirely.
-
-How to migrate:
-
-* If {{< param "PRODUCT_NAME" >}} never reads logs from files with the extensions `.gz`, `.tar.gz`, `.z`, or `.bz2`, then no action is required.
-  > You can check which file extensions {{< param "PRODUCT_NAME" >}} reads from by looking at the `path` label on the `loki_source_file_file_bytes_total` metric.
-
-* If {{< param "PRODUCT_NAME" >}} extracts data from compressed files, add the following configuration block to your `loki.source.file` component:
-
-  ```river
-  loki.source.file "example" {
-    ...
-    decompression {
-      enabled = true
-      format = ""
-    }
-  }
-  ```
-
-  where `format` is set to the appropriate compression format; refer to the [`loki.source.file` documentation][loki-source-file-docs] for details.
-
-  [loki-source-file-docs]: {{< relref "./reference/components/loki.source.file.md" >}}
-
-## v0.35
-
-### Breaking change: `auth` and `version` attributes from the `walk_params` block of `prometheus.exporter.snmp` have been removed
-
-The `prometheus.exporter.snmp` flow component wraps a new version of the SNMP exporter, which introduces a new configuration file format.
-This new format separates the walk and metric mappings from the connection and authentication settings, which allows for easier configuration of different auth parameters without duplicating the full walk and metric mapping.
-
-Old configuration example:
-
-```river
-prometheus.exporter.snmp "example" {
-  config_file = "snmp_modules.yml"
-
-  target "network_switch_1" {
-    address = "192.168.1.2"
-    module = "if_mib"
-    walk_params = "public"
-  }
-
-  walk_param "public" {
-    retries = "2"
-    version = "2"
-    auth {
-      community = "public"
-    }
-  }
-}
-```
-
-New configuration example:
-
-```river
-prometheus.exporter.snmp "example" {
-  config_file = "snmp_modules.yml"
-
-  target "network_switch_1" {
-    address = "192.168.1.2"
-    module = "if_mib"
-    walk_params = "public"
-    auth = "public_v2"
-  }
-
-  walk_param "public" {
-    retries = "2"
-  }
-}
-```
-
-See [Module and Auth Split Migration](https://github.com/prometheus/snmp_exporter/blob/main/auth-split-migration.md) for more details.
-
-### Breaking change: `discovery.file` has been renamed to `local.file_match`
-
-The `discovery.file` component has been renamed to `local.file_match` to make its purpose clearer: finding files on the local filesystem that match a pattern.
-
-Renaming `discovery.file` to `local.file_match` also resolves a point of confusion, where `discovery.file` was thought to implement Prometheus' file-based service discovery.
-
-Old configuration example:
-
-```river
-discovery.kubernetes "k8s" {
-  role = "pod"
-}
-
-discovery.relabel "k8s" {
-  targets = discovery.kubernetes.k8s.targets
-
-  rule {
-    source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_label_name"]
-    target_label = "job"
-    separator = "/"
-  }
-
-  rule {
-    source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
-    target_label = "__path__"
-    separator = "/"
-    replacement = "/var/log/pods/*$1/*.log"
-  }
-}
-
-discovery.file "pods" {
-  path_targets = discovery.relabel.k8s.output
-}
-```
-
-New configuration example:
-
-```river
-discovery.kubernetes "k8s" {
-  role = "pod"
-}
-
-discovery.relabel "k8s" {
-  targets = discovery.kubernetes.k8s.targets
-
-  rule {
-    source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_label_name"]
-    target_label = "job"
-    separator = "/"
-  }
-
-  rule {
-    source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
-    target_label = "__path__"
-    separator = "/"
-    replacement = "/var/log/pods/*$1/*.log"
-  }
-}
-
-local.file_match "pods" {
-  path_targets = discovery.relabel.k8s.output
-}
-```
-
-### Breaking change: `discovery_target_decode` has been removed from the River standard library
-
-The `discovery_target_decode` function was initially added to the River standard library as an equivalent to Prometheus' file-based and HTTP-based discovery methods.
-
-However, the Prometheus discovery mechanisms have more functionality than `discovery_target_decode`:
-
-* Prometheus' `file_sd_configs` can use many files based on pattern matching.
-* Prometheus' `http_sd_configs` also support YAML files.
-
-Additionally, it is no longer an accepted pattern to have component-specific functions in the River standard library.
-
-As a result, `discovery_target_decode` has been removed in favor of using components.
-
-Old configuration example:
-
-```river
-remote.http "example" {
-  url = URL_CONTAINING_TARGETS
-}
-
-prometheus.scrape "example" {
-  targets = discovery_target_decode(remote.http.example.content)
-  forward_to = FORWARD_LIST
-}
-```
-
-New configuration example:
-
-```river
-discovery.http "example" {
-  url = URL_CONTAINING_TARGETS
-}
-
-prometheus.scrape "example" {
-  targets = discovery.http.example.targets
-  forward_to = FORWARD_LIST
-}
-```
-
-### Breaking change: The algorithm for the "hash" action of `otelcol.processor.attributes` has changed
-
-The hash produced when using `action = "hash"` in the `otelcol.processor.attributes` flow component now uses the more secure SHA-256 algorithm.
-The change was made in PR [#22831](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/22831) of opentelemetry-collector-contrib.
-
-### Breaking change: `otelcol.exporter.loki` now includes instrumentation scope in its output
-
-Additional `instrumentation_scope` information is added to the OTLP log signal, like this:
-
-```json
-{
-  "body": "Example log",
-  "traceid": "01020304000000000000000000000000",
-  "spanid": "0506070800000000",
-  "severity": "error",
-  "attributes": {
-    "attr1": "1",
-    "attr2": "2"
-  },
-  "resources": {
-    "host.name": "something"
-  },
-  "instrumentation_scope": {
-    "name": "example-logger-name",
-    "version": "v1"
-  }
-}
-```
-
-### Breaking change: `otelcol.extension.jaeger_remote_sampling` removes the `/` HTTP endpoint
-
-The `/` HTTP endpoint was the same as the `/sampling` endpoint. The `/sampling` endpoint is still functional.
-The change was made in PR [#18070](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/18070) of opentelemetry-collector-contrib.
-
-### Breaking change: The `remote_sampling` block has been removed from `otelcol.receiver.jaeger`
-
-The `remote_sampling` block in `otelcol.receiver.jaeger` has been an undocumented no-op configuration for some time, and has now been removed.
-Use `otelcol.extension.jaeger_remote_sampling` instead.
-
-### Deprecation: `otelcol.exporter.jaeger` has been deprecated and will be removed in {{% param "PRODUCT_NAME" %}} v0.38.0
-
-This is because Jaeger supports OTLP directly, and the OpenTelemetry Collector is also removing its
-[Jaeger exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/jaegerexporter).
-
-## v0.34
-
-### Breaking change: `phlare.scrape` and `phlare.write` have been renamed to `pyroscope.scrape` and `pyroscope.write`
-
-Old configuration example:
-
-```river
-phlare.write "staging" {
-  endpoint {
-    url = "http://phlare:4100"
-  }
-}
-
-phlare.scrape "default" {
-  targets = [
-    {"__address__" = "agent:12345", "app" = "agent"},
-  ]
-  forward_to = [phlare.write.staging.receiver]
-}
-```
-
-New configuration example:
-
-```river
-pyroscope.write "staging" {
-  endpoint {
-    url = "http://pyroscope:4100"
-  }
-}
-
-pyroscope.scrape "default" {
-  targets = [
-    {"__address__" = "agent:12345", "app" = "agent"},
-  ]
-  forward_to = [pyroscope.write.staging.receiver]
-}
-```
-
-## v0.33
-
-### Symbolic links in Docker containers removed
-
-We've removed the deprecated symbolic links to `/bin/agent*` in Docker containers, as first announced in v0.31.
-If you set a custom entrypoint, use the new binaries that are prefixed with `/bin/grafana*`.
-
-## v0.32
-
-### Breaking change: `http_client_config` Flow blocks merged with parent blocks
-
-To reduce the amount of typing required to write Flow components, the arguments and subblocks found in `http_client_config` have been merged with their parent blocks:
-
-- `discovery.docker > http_client_config` is merged into the `discovery.docker` block.
-- `discovery.kubernetes > http_client_config` is merged into the `discovery.kubernetes` block.
-- `loki.source.kubernetes > client > http_client_config` is merged into the `client` block.
-- `loki.source.podlogs > client > http_client_config` is merged into the `client` block. -- `loki.write > endpoint > http_client_config` is merged into the `endpoint` block. -- `mimir.rules.kubernetes > http_client_config` is merged into the `mimir.rules.kubernetes` block. -- `otelcol.receiver.opencensus > grpc` is merged into the `otelcol.receiver.opencensus` block. -- `otelcol.receiver.zipkin > http` is merged into the `otelcol.receiver.zipkin` block. -- `phlare.scrape > http_client_config` is merged into the `phlare.scrape` block. -- `phlare.write > endpoint > http_client_config` is merged into the `endpoint` block. -- `prometheus.remote_write > endpoint > http_client_config` is merged into the `endpoint` block. -- `prometheus.scrape > http_client_config` is merged into the `prometheus.scrape` block. - -Old configuration example: - -```river -prometheus.remote_write "example" { - endpoint { - url = URL - - http_client_config { - basic_auth { - username = BASIC_AUTH_USERNAME - password = BASIC_AUTH_PASSWORD - } - } - } -} -``` - -New configuration example: - -```river -prometheus.remote_write "example" { - endpoint { - url = URL - - basic_auth { - username = BASIC_AUTH_USERNAME - password = BASIC_AUTH_PASSWORD - } - } -} -``` - -### Breaking change: `loki.process` stage blocks combined into new blocks - -Previously, to add a stage to `loki.process`, two blocks were needed: a block -called `stage`, then an inner block for the stage being written. Stage blocks -are now a single block called `stage.STAGENAME`. 
- -Old configuration example: - -```river -loki.process "example" { - forward_to = RECEIVER_LIST - - stage { - docker {} - } - - stage { - json { - expressions = { output = "log", extra = "" } - } - } -} -``` - -New configuration example: - -```river -loki.process "example" { - forward_to = RECEIVER_LIST - - stage.docker {} - - stage.json { - expressions = { output = "log", extra = "" } - } -} -``` - -### Breaking change: `client_options` block renamed in `remote.s3` component - -To synchronize naming conventions between `remote.s3` and `remote.http`, the -`client_options` block has been renamed `client`. - -Old configuration example: - -```river -remote.s3 "example" { - path = S3_PATH - - client_options { - key = ACCESS_KEY - secret = KEY_SECRET - } -} -``` - -New configuration example: - -```river -remote.s3 "example" { - path = S3_PATH - - client { - key = ACCESS_KEY - secret = KEY_SECRET - } -} -``` - -### Breaking change: `prometheus.integration.node_exporter` component name changed - -The `prometheus.integration.node_exporter` component has been renamed to -`prometheus.exporter.unix`. `unix` was chosen as a name to approximate the -\*nix-like systems the exporter supports. - -Old configuration example: - -```river -prometheus.integration.node_exporter { } -``` - -New configuration example: - -```river -prometheus.exporter.unix { } -``` - -### Breaking change: support for `EXPERIMENTAL_ENABLE_FLOW` environment variable removed - -As first announced in v0.30.0, support for using the `EXPERIMENTAL_ENABLE_FLOW` -environment variable to enable Flow mode has been removed. - -To enable {{< param "PRODUCT_NAME" >}}, set the `AGENT_MODE` environment variable to `flow`. - -## v0.31 - -### Breaking change: binary names are now prefixed with `grafana-` - -As first announced in v0.29, the `agent` release binary name is now prefixed -with `grafana-`: - -- `agent` is now `grafana-agent`. 
-
-For the `grafana/agent` Docker container, the entrypoint is now `/bin/grafana-agent`.
-A symbolic link from `/bin/agent` to the new binary has been added.
-
-The symbolic links will be removed in v0.33.
-Update custom entrypoints to use the new binaries before then.
-
-## v0.30
-
-### Deprecation: `EXPERIMENTAL_ENABLE_FLOW` environment variable changed
-
-As part of graduating {{< param "PRODUCT_NAME" >}} to beta, the `EXPERIMENTAL_ENABLE_FLOW` environment variable is replaced by setting `AGENT_MODE` to `flow`.
-
-Setting `EXPERIMENTAL_ENABLE_FLOW` to `1` or `true` is now deprecated, and support for it will be removed in the v0.32 release.
-
-## v0.29
-
-### Deprecation: binary names will be prefixed with `grafana-` in v0.31.0
-
-The binary name `agent` has been deprecated and will be renamed to `grafana-agent` in the v0.31.0 release.
-
-As part of this change, the Docker containers for the v0.31.0 release will include symbolic links from the old binary names to the new binary names.
-
-There is no action to take at this time.
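-As a combined illustration of the renames described above, a hypothetical Docker invocation migrates like this (the image tags, config file path, and `run` argument are illustrative, not prescriptive):
-
-```shell
-# Before v0.31: deprecated binary name and experimental environment variable
-docker run -e EXPERIMENTAL_ENABLE_FLOW=1 --entrypoint /bin/agent \
-  grafana/agent:v0.30.0 run /etc/agent/config.river
-
-# v0.32 and later: grafana- prefixed binary and AGENT_MODE
-docker run -e AGENT_MODE=flow --entrypoint /bin/grafana-agent \
-  grafana/agent:v0.32.0 run /etc/agent/config.river
-```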
+[Changelog]: https://github.com/grafana/alloy/blob/main/CHANGELOG.md \ No newline at end of file diff --git a/docs/sources/stability.md b/docs/sources/stability.md index c21d549aeb..a038ea0eba 100644 --- a/docs/sources/stability.md +++ b/docs/sources/stability.md @@ -1,28 +1,24 @@ --- aliases: -- /docs/grafana-cloud/monitor-infrastructure/agent/stability/ -- /docs/grafana-cloud/send-data/agent/stability/ -canonical: https://grafana.com/docs/agent/latest/stability/ -description: Grafana Agent features fall into one of three stability categories, experimental, - beta, or stable +- /stability/ +canonical: https://grafana.com/docs/alloy/latest/stability/ +description: Grafana Alloy features fall into one of three stability categories, experimental, beta, or stable title: Stability weight: 600 --- # Stability -Stability of functionality usually refers to the stability of a _use case,_ -such as collecting and forwarding OpenTelemetry metrics. +Stability of functionality usually refers to the stability of a _use case,_ such as collecting and forwarding OpenTelemetry metrics. -Features within the Grafana Agent project will fall into one of three stability -categories: +Features within the {{< param "PRODUCT_NAME" >}} project will fall into one of three stability categories: * **Experimental**: A new use case is being explored. * **Beta**: Functionality covering a use case is being matured. * **Stable**: Functionality covering a use case is believed to be stable. -The default stability is stable; features will be explicitly marked as -experimental or beta if they are not stable. +The default stability is stable. +Features are explicitly marked as experimental or beta if they aren't stable. ## Experimental @@ -37,22 +33,18 @@ Unless removed, experimental features eventually graduate to beta. ## Beta -The **beta** stability category is used to denote a feature which is being -matured. +The **beta** stability category is used to denote a feature which is being matured. 
 * Beta features are subject to occasional breaking changes.
-* Beta features can be replaced by equivalent functionality that covers the
-  same use case.
+* Beta features can be replaced by equivalent functionality that covers the same use case.
 * Beta features can be used without enabling feature flags.

-Unless replaced with equivalent functionality, beta features eventually
-graduate to stable.
+Unless replaced with equivalent functionality, beta features eventually graduate to stable.

 ## Stable

 The **stable** stability category is used to denote a feature as stable.

 * Breaking changes to stable features are rare, and will be well-documented.
-* If new functionality is introduced to replace existing stable functionality,
-  deprecation and removal timeline will be well-documented.
+* If new functionality is introduced to replace existing stable functionality, the deprecation and removal timeline will be well-documented.
 * Stable features can be used without enabling feature flags.