Releases: fluent/fluent-bit
Fluent Bit 2.0.9
Fluent Bit 2.0.8
Fluent Bit 1.9.10
Fluent Bit v1.9.10 - A maintenance release for the 1.9 series.
NOTE: no packages are published. Summary of changes:
- 760956f out_s3: add store_dir_limit_size to limit S3 disk usage
- ec090c0 cloudwatch_logs: enable synchronous scheduler and async networking on cloudwatch
- 9f569b0 core: minimal synchronous scheduler
- cae9a8f parser: fix time zone offsets being dropped on Windows (#6368)
- d3a0697 out_es: fix es bulk buffer overrun
- 0f46cf3 filter_ecs: new filter for AWS ECS Metadata (#5898)
- 300206a out_datadog: fix/add error handling for all flb_sds calls (#5929)
- c5d57cc in_tcp: user friendly warn message for skipping records (#6061)
- 8e6a5be windows: enable ECS filter (#6269)
- c04c04a aws: different user agent on windows vs linux (#6325)
- a601857 multiline: do not permanently set ml group time to the first log time (backported from 2.0). (#6381)
- 27b9f6b in_tail: fix multiline + Path_Key emitting empty logs; fixes the incorrect buffer being appended to when both multiline and Path_Key are specified, which led to the records being ignored
- d7d6c56 out_forward: release buf when no connection available and time_as_integer is true (#6082)
- 82c9d0e packaging: lint fixes
- 79ede20 packaging: update raspbian container and pin cmake
Fluent Bit 2.0.6
Fluent Bit 2.0.5
Fluent Bit 2.0.4
Fluent Bit 2.0.3
This is a hotfix release for the v2 series that addresses an issue in the Forward protocol and includes an enhancement for Windows event log collection. Notes:
Fluent Bit 2.0.2
https://fluentbit.io/announcements/v2.0.2/
This is a hotfix release that solves a corner case in metrics processing.
Fluent Bit 2.0.0
News
Fluent Bit v2.0.0 is the stable release! New changes in this version:
Logs, Metrics, and Traces
Fluent Bit has always been agnostic to the data it processes and moves around; one of the major use cases has been Log Processing. Recently we introduced functionality to integrate Metrics, and now with Fluent Bit v2 we are formally announcing support for Traces.
As a vendor-agnostic solution, Fluent Bit natively integrates with protocols and other ecosystems, so having Metrics and Traces support means that we fully integrate with systems like Prometheus and OpenTelemetry.
Metrics
The metrics collectors (inputs) and outputs supported for Metrics allow us to collect and deliver metrics smoothly. Common examples of these components are:
Input | Description |
---|---|
node_exporter_metrics | Implements a subset of host metrics collection, similar to the well-known external tool Prometheus Node Exporter. |
nginx_metrics | Scrape Nginx metrics endpoints. It supports the OSS and Nginx+ enterprise editions. |
windows_exporter_metrics | Collect system metrics from a Windows system; currently it supports CPU metrics collection. |
fluentbit_metrics | Collect internal Fluent Bit metrics and ingest them into the pipeline. |
Output | Description |
---|---|
prometheus_exporter | Expose metrics on an HTTP endpoint in Prometheus text format. This mechanism is commonly used when you want metrics made available by Fluent Bit to be scraped by a third-party solution like Prometheus Server. |
prometheus_remote_write | Deliver metrics to a remote endpoint by using the Prometheus Remote Write protocol. |
splunk | New support for Splunk Metrics (HEC). |
influxdb | Send metrics to an InfluxDB database. |
One of the biggest additions in this area is support for OpenTelemetry metrics, on both the input and output sides; more details below.
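As an illustrative sketch of the components above (the plugin names come from the tables; the scrape interval and exporter port are assumed values, adjust them to your environment), a minimal pipeline that collects host metrics and re-exposes them in Prometheus text format could look like:

```
[INPUT]
    name            node_exporter_metrics
    tag             node_metrics
    scrape_interval 2

[OUTPUT]
    name   prometheus_exporter
    match  node_metrics
    host   0.0.0.0
    port   2021
```

With this running, a Prometheus Server could scrape the metrics from port 2021.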
OpenTelemetry
OpenTelemetry is a framework, specification, and protocol to unify the collection and delivery of telemetry. One of its major production use cases is Traces; Metrics and Logs support has been added recently.
In Fluent Bit, we are announcing full support for OpenTelemetry, where now we can receive and send telemetry data by using OpenTelemetry protocol:
OpenTelemetry input plugin
This new plugin can receive OpenTelemetry Logs, Metrics, and Traces, all supported through a new implementation that allows easy integration with applications instrumented with OpenTelemetry SDKs.
OpenTelemetry output plugin
This new plugin delivers any collected Log, Metric, or Trace to a remote endpoint that supports OpenTelemetry; it can be any vendor platform or the OpenTelemetry Collector.
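A hedged sketch of an OpenTelemetry pass-through pipeline (the listening port and URI properties follow common OTLP/HTTP conventions, and `collector.example.com` is a placeholder; check the plugin documentation for the exact property names in your version):

```
[INPUT]
    name   opentelemetry
    listen 0.0.0.0
    port   4318

[OUTPUT]
    name        opentelemetry
    match       *
    host        collector.example.com
    port        4318
    metrics_uri /v1/metrics
    logs_uri    /v1/logs
    traces_uri  /v1/traces
```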
Performance
The new threaded mechanism allows input plugins to run in a separate thread, which helps to desaturate the main pipeline and move certain loads to a different CPU core. This feature must be enabled manually in the configuration of each input plugin by adding the threaded property, e.g.:

```
[INPUT]
    name     tail
    path     /var/log/containers/*.log
    tag      kube.*
    threaded on
```
Full OpenSSL support / deprecate mbedTLS
With this new version, every network transport layer that needs security enabled uses OpenSSL instead of mbedTLS, for both networking and crypto functions. From now on, mbedTLS is officially deprecated.
All interfaces and plugins that were using mbedTLS have been ported to the new crypto and networking layer we have implemented.
TLS for input plugins
Using Fluent Bit as an aggregator is a common use case, but one component was missing: TLS support for ingestion (Transport Layer Security), which was a big blocker for many users; the same problem has been reported by many Fluentd users looking to migrate to Fluent Bit.
In this new version, we are announcing full-native TLS capabilities for input plugins.
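As a minimal sketch (the certificate and key paths are placeholders for your own files), a forward input acting as an aggregator endpoint with TLS enabled could be configured like this:

```
[INPUT]
    name         forward
    listen       0.0.0.0
    port         24224
    tls          on
    tls.crt_file /etc/fluent-bit/tls/server.crt
    tls.key_file /etc/fluent-bit/tls/server.key
```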
Plugins Changes
Loki (output)
In previous versions of Loki, the project did not support out-of-order records. For performance reasons, Fluent Bit delivers messages in parallel, which could cause transmission errors since some messages might not arrive in the order Loki expected. To solve this problem, we used to force our Loki connector to send only one Chunk at a time so the order was preserved.
Since newer versions of Loki no longer have this restriction, we are removing it from our side as well and allowing concurrency again, which is a boost in performance. No configuration changes are needed.
Note: this improvement might be a breaking change if you are using an older version of Loki. If you are using **Loki >= 2.4** you are good to go.
To learn more about this Loki restriction and the enhancements, you can refer to the following blog post from the Grafana team:
https://grafana.com/blog/2021/12/03/new-feature-in-loki-2.4-no-more-ordering-constraint/
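A minimal Loki output sketch that now benefits from the restored concurrency (the host and label values are placeholders for illustration):

```
[OUTPUT]
    name   loki
    match  *
    host   loki.example.com
    port   3100
    labels job=fluent-bit
```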
Splunk + Metrics (output)
The Splunk output connector now supports the delivery of Metrics. No changes to your configuration are needed: if you send metrics records to a Splunk output plugin, the data is converted and delivered as expected by the Splunk HEC endpoint.
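For example, a hedged sketch that sends Fluent Bit's own internal metrics to a Splunk HEC endpoint (the host, port, and token are placeholders; the scrape interval is an arbitrary example):

```
[INPUT]
    name            fluentbit_metrics
    tag             internal_metrics
    scrape_interval 2

[OUTPUT]
    name         splunk
    match        internal_metrics
    host         splunk.example.com
    port         8088
    splunk_token YOUR-HEC-TOKEN
    tls          on
```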
Developers Experience
There are many cases where a user would like to extend the functionality that Fluent Bit provides, usually for specific business reasons.
For customizations, we currently support plugins written in C, filters in Lua, and output plugins in Golang. But understanding our users' needs and the restrictions of this limited functionality, we are going beyond that.
Fluent Bit v2 comes with support to integrate more plugin types with Golang and WebAssembly. The following table describes the supported languages by plugin type.
Language | Input / Source | Filter | Output / Destination |
---|---|---|---|
C | Yes | Yes | Yes |
Golang | Yes | — | Yes |
WebAssembly | Yes | Yes | — |
TAP
One of the common questions from our users has been "how can I see the data flowing through a pipeline?" Usually this comes up when there is a need to troubleshoot without stopping the agent.
TAP is an advanced functionality that allows inspecting the data flowing through a pipeline; it can be enabled at runtime with a simple HTTP call.
More details about this functionality can be found in the official documentation:
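As a hedged illustration, the prerequisite for calling TAP at runtime is enabling the built-in HTTP server; the exact trace endpoint path shown in the comment is an assumption based on the v2 HTTP API and should be verified against the TAP documentation:

```
[SERVICE]
    http_server on
    http_listen 0.0.0.0
    http_port   2020

# With the server enabled, a TAP trace for an input instance can be
# started via the HTTP API, e.g. (endpoint path is an assumption):
#   curl http://127.0.0.1:2020/api/v1/trace/tail.0
```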
Internal Metrics
Historically, a common way to extract internal metrics from Fluent Bit has been to enable the built-in HTTP service, which exposes the following endpoints in the REST API:
Endpoint | Description |
---|---|
/api/v1/metrics | JSON metrics for records processed and their associated byte sizes. |
/api/v1/metrics/prometheus | Same metrics as /api/v1/metrics but formatted in Prometheus text format. |
/api/v1/storage | JSON metrics for the storage component, which exposes metrics for Chunks. |
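The endpoints above become reachable once the built-in HTTP service is enabled; a minimal sketch (the listen address and port shown are the usual defaults, adjust as needed):

```
[SERVICE]
    http_server on
    http_listen 0.0.0.0
    http_port   2020

# Then, for example:
#   curl http://127.0.0.1:2020/api/v1/metrics
#   curl http://127.0.0.1:2020/api/v1/metrics/prometheus
#   curl http://127.0.0.1:2020/api/v1/storage
```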
One of the restrictions of this interface is that those metrics are not part of the pipeline, meaning you can only access them remotely. But what if you want to send your metrics to a destination like Prometheus, InfluxDB, or others?
During the release cycle of the previous Fluent Bit series, we introduced a new mechanism to collect and process internal metrics through a new input plugin called fluentbit_metrics. This new mechanism allows exposing or sending the metrics to a destination in many ways, such as:
- Prometheus Exporter
- Prometheus Remote Write
- InfluxDB
- OpenTelemetry
- Standard output interface (stdout)
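A minimal sketch that ingests the internal metrics into the pipeline and prints them on the standard output interface (the tag and scrape interval are arbitrary example values):

```
[INPUT]
    name            fluentbit_metrics
    tag             internal_metrics
    scrape_interval 2

[OUTPUT]
    name  stdout
    match internal_metrics
```

Replacing the stdout output with prometheus_exporter, prometheus_remote_write, influxdb, or opentelemetry covers the other destinations listed above.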
New Storage Metrics
As mentioned, the storage metrics were originally exposed on a separate endpoint, which forced the user to scrape one more endpoint and carried the additional restriction that storage metrics were only available in JSON.
Starting from Fluent Bit v2, fluentbit_metrics exposes seven new metrics for the Storage layer:
Metrics name | Description |
---|---|
fluentbit_input_storage_overlimit | Takes the value 1 if the input plugin instance is over the limit imposed by its mem_buf_limit configuration; otherwise it is set to 0. |
`fluentbit_input_storage_m... |