Fix changelog
aidanleuck committed Dec 10, 2024
2 parents ddbca7b + b88ed5e commit 592e535
Showing 145 changed files with 1,831 additions and 579 deletions.
55 changes: 43 additions & 12 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -22,7 +22,9 @@ Main (unreleased)

- Add `otelcol.exporter.syslog` component to export logs in syslog format (@dehaansa)

- (_Experimental_) Add a `database_observability.mysql` component to collect mysql performance data.
- (_Experimental_) Add a `database_observability.mysql` component to collect mysql performance data. (@cristiangreco & @matthewnolf)

- Add `otelcol.receiver.influxdb` to convert influx metric into OTEL. (@EHSchmitt4395)

### Enhancements

@@ -32,10 +32,13 @@ Main (unreleased)

- Add relevant golang environment variables to the support bundle (@dehaansa)

- Add support for server authentication to otelcol components. (@aidaleuc)

- Logs from underlying clustering library `memberlist` are now surfaced with correct level (@thampiotr)

- Update mysqld_exporter from v0.15.0 to v0.16.0 (including 2ef168bf6), most notable changes: (@cristiangreco)
- Support MySQL 8.4 replicas syntax
- Fetch lock time and cpu time from performance schema
@@ -46,33 +51,57 @@ Main (unreleased)
- Change processlist query to support ONLY_FULL_GROUP_BY sql_mode
- Add perf_schema quantile columns to collector

- For sharding targets during clustering, `loki.source.podlogs` now only takes into account some labels. (@ptodev)
- Add three new stdlib functions to_base64, from_URLbase64 and to_URLbase64 (@ravishankar15)
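The new stdlib functions above can be sketched in use roughly as follows. This is a hypothetical illustration: the component wiring, the file path, and the exact casing of the function names are assumptions and should be checked against the stdlib reference.

```alloy
// Hypothetical sketch of the new base64 helpers. The function name is taken
// from the changelog entry above; everything else is illustrative.
local.file "token" {
  filename = "/etc/alloy/token"
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"

    basic_auth {
      username = "user"
      // Encode the secret as URL-safe base64 (assumed signature: string -> string).
      password = to_URLbase64(local.file.token.content)
    }
  }
}
```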

### Bugfixes
- Fixed an issue in the `pyroscope.write` component to allow slashes in application names in the same way it is done in the Pyroscope push API (@marcsanmi)
- Fixed an issue in the `prometheus.exporter.postgres` component that would leak goroutines when the target was not reachable (@dehaansa)

- Fixed an issue in the `otelcol.exporter.prometheus` component that would set series value incorrectly for stale metrics (@YusifAghalar)

- Fixed issue with reloading configuration and prometheus metrics duplication in `prometheus.write.queue`. (@mattdurham)

- Fixed an issue in the `otelcol.processor.attribute` component where the actions `delete` and `hash` could not be used with the `pattern` argument. (@wildum)
- Updated `prometheus.write.queue` to fix issue with TTL comparing different scales of time. (@mattdurham)

- Fixed an issue in the `prometheus.operator.servicemonitors`, `prometheus.operator.podmonitors` and `prometheus.operator.probes` to support capitalized actions. (@QuentinBisson)

- Fixed an issue where the `otelcol.processor.interval` could not be used because the debug metrics were not set to default. (@wildum)

### Other changes

- Change the stability of the `livedebugging` feature from "experimental" to "generally available". (@wildum)

- Use Go 1.23.3 for builds. (@mattdurham)

v1.5.1
-----------------

### Enhancements

- Logs from underlying clustering library `memberlist` are now surfaced with correct level (@thampiotr)

- Allow setting `informer_sync_timeout` in prometheus.operator.* components. (@captncraig)

- For sharding targets during clustering, `loki.source.podlogs` now only takes into account some labels. (@ptodev)

### Bugfixes

- Fixed an issue in the `pyroscope.write` component to prevent TLS connection churn to Pyroscope when the `pyroscope.receive_http` clients don't request keepalive (@madaraszg-tulip)

- Fixed an issue in the `pyroscope.write` component with multiple endpoints not working correctly for forwarding profiles from `pyroscope.receive_http` (@madaraszg-tulip)

- Fixed a few race conditions that could lead to a deadlock when using `import` statements, which could lead to a memory leak on `/metrics` endpoint of an Alloy instance. (@thampiotr)

- Fix a race condition where the ui service was dependent on starting after the remotecfg service, which is not guaranteed. (@dehaansa & @erikbaranowski)

- Fixed an issue in the `otelcol.exporter.prometheus` component that would set series value incorrectly for stale metrics (@YusifAghalar)

- `loki.source.podlogs`: Fixed a bug which prevented clustering from working and caused duplicate logs to be sent.
The bug only happened when no `selector` or `namespace_selector` blocks were specified in the Alloy configuration. (@ptodev)

- Updated `prometheus.write.queue` to fix issue with TTL comparing different scales of time. (@mattdurham)

### Other changes
- Fixed an issue in the `pyroscope.write` component to allow slashes in application names in the same way it is done in the Pyroscope push API (@marcsanmi)

- Change the stability of the `livedebugging` feature from "experimental" to "generally available". (@wildum)
- Fixed a crash when updating the configuration of `remote.http`. (@kinolaev)

- Use Go 1.23.3 for builds. (@mattdurham)
- Fixed an issue in the `otelcol.processor.attribute` component where the actions `delete` and `hash` could not be used with the `pattern` argument. (@wildum)

- Fixed an issue in the `prometheus.exporter.postgres` component that would leak goroutines when the target was not reachable (@dehaansa)

v1.5.0
-----------------
@@ -270,6 +299,8 @@ v1.4.0

- Add the label `alloy_cluster` in the metric `alloy_config_hash` when the flag `cluster.name` is set to help differentiate between
configs from the same alloy cluster or different alloy clusters. (@wildum)

- Add support for discovering the cgroup path(s) of a process in `process.discovery`. (@mahendrapaipuri)

### Bugfixes

5 changes: 3 additions & 2 deletions CODEOWNERS
@@ -20,5 +20,6 @@
/docs/sources/ @clayton-cornell

# Components:
/internal/component/pyroscope/ @grafana/grafana-alloy-profiling-maintainers
/internal/component/beyla/ @marctc
/internal/component/pyroscope/ @grafana/grafana-alloy-profiling-maintainers
/internal/component/beyla/ @marctc
/internal/component/database_observability/ @cristiangreco @matthewnolf
2 changes: 1 addition & 1 deletion docs/sources/_index.md
@@ -57,7 +57,7 @@ cards:
In addition, you can use {{< param "PRODUCT_NAME" >}} pipelines to do different tasks, such as configure alert rules in Loki and [Mimir][].
{{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and [Promtail][].
You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combine it into a hybrid system of multiple collectors and agents.
You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
{{< param "PRODUCT_NAME" >}} is flexible, and you can easily configure it to fit your needs in on-prem, cloud-only, or a mix of both.

{{< admonition type="tip" >}}
2 changes: 1 addition & 1 deletion docs/sources/_index.md.t
@@ -57,7 +57,7 @@ cards:
In addition, you can use {{< param "PRODUCT_NAME" >}} pipelines to do different tasks, such as configure alert rules in Loki and [Mimir][].
{{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and [Promtail][].
You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combine it into a hybrid system of multiple collectors and agents.
You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and pair it with your Grafana stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
{{< param "PRODUCT_NAME" >}} is flexible, and you can easily configure it to fit your needs in on-prem, cloud-only, or a mix of both.

{{< admonition type="tip" >}}
4 changes: 2 additions & 2 deletions docs/sources/collect/ecs-opentelemetry-data.md
@@ -84,13 +84,13 @@ To create an ECS Task Definition for AWS Fargate with an ADOT collector, complet
* Use `--config=/etc/ecs/container-insights/otel-task-metrics-config.yaml` to use StatsD, OTLP, AWS X-Ray, and Container Resource utilization metrics.
1. Follow the ECS Fargate setup instructions to [create a task definition][task] using the template.

## Run Alloy directly in your instance, or as a Kubernetes sidecar
## Run {{% param "PRODUCT_NAME" %}} directly in your instance, or as a Kubernetes sidecar

SSH or connect to the Amazon ECS or AWS Fargate-managed container. Refer to [9 steps to SSH into an AWS Fargate managed container][steps] for more information about using SSH with Amazon ECS or AWS Fargate.

You can also use your own method to connect to the Amazon ECS or AWS Fargate-managed container as long as you can pass the parameters needed to install and configure {{< param "PRODUCT_NAME" >}}.

### Install Grafana Alloy
### Install {{% param "PRODUCT_NAME" %}}

After connecting to your instance, follow the {{< param "PRODUCT_NAME" >}} [installation][install], [configuration][configure] and [deployment][deploy] instructions.

12 changes: 6 additions & 6 deletions docs/sources/collect/opentelemetry-data.md
@@ -38,7 +38,7 @@ This topic describes how to:
Before components can receive OpenTelemetry data, you must have a component responsible for exporting the OpenTelemetry data.
An OpenTelemetry _exporter component_ is responsible for writing (exporting) OpenTelemetry data to an external system.

In this task, you use the [otelcol.exporter.otlp][] component to send OpenTelemetry data to a server using the OpenTelemetry Protocol (OTLP).
In this task, you use the [`otelcol.exporter.otlp`][otelcol.exporter.otlp] component to send OpenTelemetry data to a server using the OpenTelemetry Protocol (OTLP).
After an exporter component is defined, you can use other {{< param "PRODUCT_NAME" >}} components to forward data to it.

{{< admonition type="tip" >}}
@@ -137,7 +137,7 @@ otelcol.receiver.otlp "example" {
}
```
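As a point of reference, a minimal exporter block of the kind this task describes could look like the following. The endpoint address is a placeholder; verify the attribute names against the `otelcol.exporter.otlp` component reference.

```alloy
otelcol.exporter.otlp "default" {
  client {
    // Placeholder address; point this at your OTLP-capable backend.
    endpoint = "otel-collector.example.com:4317"
  }
}
```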

For more information on writing OpenTelemetry data using the OpenTelemetry Protocol, refer to [otelcol.exporter.otlp][].
For more information on writing OpenTelemetry data using the OpenTelemetry Protocol, refer to [`otelcol.exporter.otlp`][otelcol.exporter.otlp].

## Configure batching

@@ -146,7 +146,7 @@ Instead, data is usually sent to one or more _processor components_ that perform

Ensuring data is batched is a production-readiness step to improve data compression and reduce the number of outgoing network requests to external systems.

In this task, you configure an [otelcol.processor.batch][] component to batch data before sending it to the exporter.
In this task, you configure an [`otelcol.processor.batch`][otelcol.processor.batch] component to batch data before sending it to the exporter.

{{< admonition type="note" >}}
Refer to the list of available [Components][] for the full list of `otelcol.processor` components that you can use to process OpenTelemetry data.
Expand Down Expand Up @@ -210,14 +210,14 @@ otelcol.exporter.otlp "default" {
}
```
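A minimal sketch of the batching pipeline described here, assuming an exporter named `otelcol.exporter.otlp.default` as in the surrounding snippets. Verify block and attribute names against the component reference.

```alloy
otelcol.processor.batch "default" {
  // Forward batched telemetry to the exporter defined elsewhere in the file.
  output {
    metrics = [otelcol.exporter.otlp.default.input]
    logs    = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}
```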

For more information on configuring OpenTelemetry data batching, refer to [otelcol.processor.batch][].
For more information on configuring OpenTelemetry data batching, refer to [`otelcol.processor.batch`][otelcol.processor.batch].

## Configure an OpenTelemetry Protocol receiver

You can configure {{< param "PRODUCT_NAME" >}} to receive OpenTelemetry metrics, logs, and traces.
An OpenTelemetry _receiver_ component is responsible for receiving OpenTelemetry data from an external system.

In this task, you use the [otelcol.receiver.otlp][] component to receive OpenTelemetry data over the network using the OpenTelemetry Protocol (OTLP).
In this task, you use the [`otelcol.receiver.otlp`][otelcol.receiver.otlp] component to receive OpenTelemetry data over the network using the OpenTelemetry Protocol (OTLP).
You can configure a receiver component to forward received data to other {{< param "PRODUCT_NAME" >}} components.

> Refer to the list of available [Components][] for the full list of
@@ -312,7 +312,7 @@ otelcol.exporter.otlp "default" {
}
```

For more information on receiving OpenTelemetry data using the OpenTelemetry Protocol, refer to [otelcol.receiver.otlp][].
For more information on receiving OpenTelemetry data using the OpenTelemetry Protocol, refer to [`otelcol.receiver.otlp`][otelcol.receiver.otlp].

[OpenTelemetry]: https://opentelemetry.io
[Configure an OpenTelemetry Protocol exporter]: #configure-an-opentelemetry-protocol-exporter
8 changes: 4 additions & 4 deletions docs/sources/configure/kubernetes.md
@@ -14,7 +14,7 @@ This page describes how to apply a new configuration to {{< param "PRODUCT_NAME"
It assumes that:

- You have [installed {{< param "PRODUCT_NAME" >}} on Kubernetes using the Helm chart][k8s-install].
- You already have a new {{< param "PRODUCT_NAME" >}} configuration that you want to apply to your Helm chart installation.
- You already have a {{< param "PRODUCT_NAME" >}} configuration that you want to apply to your Helm chart installation.

Refer to [Collect and forward data][collect] for information about configuring {{< param "PRODUCT_NAME" >}} to collect and forward data.

@@ -25,15 +25,15 @@ Refer to [Collect and forward data][collect] for information about configuring {

To modify {{< param "PRODUCT_NAME" >}}'s Helm chart configuration, perform the following steps:

1. Create a local `values.yaml` file with a new Helm chart configuration.
1. Create a local `values.yaml` file with a Helm chart configuration.

1. You can use your own copy of the values file or download a copy of the
default [values.yaml][].
default [`values.yaml`][values.yaml].

1. Make changes to your `values.yaml` to customize settings for the
Helm chart.

Refer to the inline documentation in the default [values.yaml][] for more
Refer to the inline documentation in the default [`values.yaml`][values.yaml] for more
information about each option.

1. Run the following command in a terminal to upgrade your {{< param "PRODUCT_NAME" >}} installation:
4 changes: 2 additions & 2 deletions docs/sources/configure/nonroot.md
@@ -24,7 +24,7 @@ You can configure a non-root user when you deploy {{< param "PRODUCT_NAME" >}} i
{{< admonition type="note" >}}
Running {{< param "PRODUCT_NAME" >}} as a non-root user won't work if you are using components like [beyla.ebpf][] that require root rights.

[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf
[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf/
{{< /admonition >}}

To run {{< param "PRODUCT_NAME" >}} as a non-root user, configure a [security context][] for the {{< param "PRODUCT_NAME" >}} container. If you are using the [Grafana Helm chart][] you can add the following snippet to `values.yaml`:
@@ -45,6 +45,6 @@ Not really. The Linux kernel prevents Docker containers from accessing host reso
However, if there was a bug in the Linux kernel that allowed Docker containers to break out of the virtual environment, it would likely be easier to exploit this bug with a root user than with a non-root user. It's worth noting that the attacker would not only need to find such a Linux kernel bug, but would also need to find a way to make {{< param "PRODUCT_NAME" >}} exploit that bug.

[image]: https://hub.docker.com/r/grafana/alloy
[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf
[beyla.ebpf]: ../../reference/components/beyla/beyla.ebpf/
[security context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
[Grafana Helm chart]: ../../configure/kubernetes/#configure-the-helm-chart
2 changes: 1 addition & 1 deletion docs/sources/configure/windows.md
@@ -116,4 +116,4 @@ To expose the UI to other machines, complete the following steps:
To listen on all interfaces, replace _`<LISTEN_ADDR>`_ with `0.0.0.0`.

[UI]: ../../troubleshoot/debug/#alloy-ui
[environment]: ../../reference/cli/environment-variables
[environment]: ../../reference/cli/environment-variables/
4 changes: 2 additions & 2 deletions docs/sources/data-collection.md
@@ -31,5 +31,5 @@ All newly reported data is documented in the CHANGELOG.

You can use the `--disable-reporting` [command line flag][] to disable the reporting and opt-out of the data collection.

[components]: ../get-started/components
[command line flag]: ../reference/cli/run
[components]: ../get-started/components/
[command line flag]: ../reference/cli/run/
16 changes: 8 additions & 8 deletions docs/sources/get-started/clustering.md
@@ -40,8 +40,8 @@ prometheus.scrape "default" {
}
```

A cluster state change is detected when a new node joins or an existing node leaves.
All participating components locally recalculate target ownership and re-balance the number of targets they’re scraping without explicitly communicating ownership over the network.
A cluster state change is detected when a node joins or a node leaves.
All participating components locally recalculate target ownership and re-balance the number of targets they're scraping without explicitly communicating ownership over the network.

Target auto-distribution allows you to dynamically scale the number of {{< param "PRODUCT_NAME" >}} deployments to distribute workload during peaks.
It also provides resiliency because targets are automatically picked up by one of the node peers if a node leaves.
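A compact sketch of a clustered scrape, assuming targets come from a discovery component named `discovery.kubernetes.pods` (illustrative; the `clustering` block is the relevant part):

```alloy
prometheus.scrape "default" {
  // Opt this component into cluster-aware target distribution.
  clustering {
    enabled = true
  }

  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}
```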
@@ -50,20 +50,20 @@ It also provides resiliency because targets are automatically picked up by one o

Refer to component reference documentation to discover whether it supports clustering, such as:

- [prometheus.scrape][]
- [pyroscope.scrape][]
- [prometheus.operator.podmonitors][]
- [prometheus.operator.servicemonitors][]
- [`prometheus.scrape`][prometheus.scrape]
- [`pyroscope.scrape`][pyroscope.scrape]
- [`prometheus.operator.podmonitors`][prometheus.operator.podmonitors]
- [`prometheus.operator.servicemonitors`][prometheus.operator.servicemonitors]

## Cluster monitoring and troubleshooting

You can use the {{< param "PRODUCT_NAME" >}} UI [clustering page][] to monitor your cluster status.
Refer to [Debugging clustering issues][debugging] for additional troubleshooting information.
Refer to [Debug clustering issues][debugging] for additional troubleshooting information.

[run]: ../../reference/cli/run/#clustering
[prometheus.scrape]: ../../reference/components/prometheus/prometheus.scrape/#clustering-block
[pyroscope.scrape]: ../../reference/components/pyroscope/pyroscope.scrape/#clustering-block
[prometheus.operator.podmonitors]: ../../reference/components/prometheus/prometheus.operator.podmonitors/#clustering-block
[prometheus.operator.servicemonitors]: ../../reference/components/prometheus/prometheus.operator.servicemonitors/#clustering-block
[clustering page]: ../../troubleshoot/debug/#clustering-page
[debugging]: ../../troubleshoot/debug/#debugging-clustering-issues
[debugging]: ../../troubleshoot/debug/#debug-clustering-issues
6 changes: 3 additions & 3 deletions docs/sources/get-started/community_components.md
@@ -9,15 +9,15 @@ weight: 100

__Community components__ are [components][Components] implemented and maintained by the community.

While Grafana does not offer commercial support for these components, they still undergo acceptance and review by the {{< param "PRODUCT_NAME" >}} development team before being added to the repository.
While Grafana doesn't offer commercial support for these components, they still undergo acceptance and review by the {{< param "PRODUCT_NAME" >}} development team before being added to the repository.

To use these community components, you must explicitly pass the `--feature.community-components.enabled` flag to the `run` command.

__Community components__ don't have a stability level. They aren't covered by our [backward compatibility strategy][backward-compatibility].
__Community components__ don't have a stability level. They aren't covered by the [backward compatibility strategy][backward-compatibility].

{{< admonition type="warning" >}}
__Community components__ without a maintainer may be disabled or removed if the components prevent or block the development of {{< param "PRODUCT_NAME" >}}.
{{< /admonition >}}

[Components]: ../components/
[backward-compatibility]: ../../introduction/backward-compatibility/
[backward-compatibility]: ../../introduction/backward-compatibility/