Releases: confluentinc/confluent-kafka-go
v1.4.0
confluent-kafka-go v1.4.0
- Added Transactional Producer API and full Exactly-Once-Semantics (EOS) support.
- A prebuilt version of the latest librdkafka is now bundled with the confluent-kafka-go client. A separate installation of librdkafka is NO LONGER REQUIRED or used.
- Added support for sending client (librdkafka) logs to the Logs() channel.
- Added Consumer.Position() to retrieve the current consumer offsets.
- The Error type now has additional attributes, such as IsRetriable(), to determine whether the failed operation can be retried. This is currently only exposed for the Transactional API.
- Removed support for Go < 1.9.
Transactional API
librdkafka and confluent-kafka-go now have complete Exactly-Once-Semantics (EOS) functionality: the idempotent producer (since v1.0.0), a transaction-aware consumer (since v1.2.0), and full producer transaction support (in this release).
This enables developers to create Exactly-Once applications with Apache Kafka.
See the Transactions in Apache Kafka page for an introduction and check the transactions example for a complete transactional application example.
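As an illustration, a minimal produce-and-commit cycle with the new Transactional API might look like the sketch below. The broker address, transactional.id, and topic name are assumptions for the example, not part of this release note, and error handling is reduced to log.Fatalf for brevity.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	// transactional.id must be stable and unique per producer instance;
	// bootstrap.servers is an assumption (local broker).
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092",
		"transactional.id":  "example-transactional-producer",
	})
	if err != nil {
		log.Fatalf("NewProducer: %v", err)
	}
	defer p.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Acquire the producer ID and fence off any older instances
	// with the same transactional.id.
	if err := p.InitTransactions(ctx); err != nil {
		log.Fatalf("InitTransactions: %v", err)
	}

	if err := p.BeginTransaction(); err != nil {
		log.Fatalf("BeginTransaction: %v", err)
	}

	topic := "mytopic" // hypothetical topic
	err = p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("produced inside a transaction"),
	}, nil)
	if err != nil {
		log.Fatalf("Produce: %v", err)
	}

	// Commit atomically; on failure an application would typically
	// inspect the error (e.g. IsRetriable()) or call p.AbortTransaction(ctx).
	if err := p.CommitTransaction(ctx); err != nil {
		log.Fatalf("CommitTransaction: %v", err)
	}
}
```

A consume-transform-produce application would additionally pass its input offsets to SendOffsetsToTransaction() before committing, so that offsets and output messages are committed in the same transaction.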
Bundled librdkafka
The confluent-kafka-go client now comes with batteries included, namely prebuilt versions of librdkafka for the most popular platforms. You thus no longer need to install or manage librdkafka separately.
Supported platforms are:
- Mac OSX
- glibc-based Linux x64 (e.g., RedHat, Debian, etc) - lacks Kerberos/GSSAPI support
- musl-based Linux x64 (Alpine) - lacks Kerberos/GSSAPI support
These prebuilt librdkafka builds have all features enabled (e.g., SSL, compression), except that the Linux builds lack Kerberos/GSSAPI support due to libsasl2 dependencies.
If you need Kerberos support, or you are running on a platform where prebuilt librdkafka builds are not available (see above), you will need to install librdkafka separately (preferably through the Confluent APT and RPM repositories) and build your application with -tags dynamic to disable the builtin librdkafka and instead link your application dynamically to librdkafka.
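As a sketch (assuming librdkafka has already been installed from the Confluent repositories and your application lives in the current module):

```shell
# Link dynamically against the system librdkafka instead of the bundled one.
go build -tags dynamic ./...
```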
librdkafka v1.4.0 changes
Full librdkafka v1.4.0 release notes.
Highlights:
- KIP-98: Transactional Producer API
- KIP-345: Static consumer group membership (by @rnpridgeon)
- KIP-511: Report client software name and version to broker
- SASL SCRAM security fixes.
v1.3.0
confluent-kafka-go v1.3.0
- Purge messages API (by @khorshuheng at GoJek).
- ClusterID and ControllerID APIs.
- Go Modules support.
- Fixed memory leak on calls to NewAdminClient() (discovered by @gabeysunda).
- Requires librdkafka v1.3.0 or later.
librdkafka v1.3.0 changes
Full librdkafka v1.3.0 release notes.
- KIP-392: Fetch messages from closest replica/follower (by @mhowlett).
- Experimental mock broker to make application and librdkafka development testing easier.
- Fixed consumer_lag in stats when consuming from broker versions <0.11.0.0 (regression in librdkafka v1.2.0).
v1.1.0
confluent-kafka-go v1.1.0
- OAUTHBEARER SASL authentication (KIP-255) by Ron Dagostini (@rondagostino) at StateStreet.
- Offset commit metadata (@damour, #353)
- Requires librdkafka v1.1.0 or later
Noteworthy librdkafka v1.1.0 changes
Full librdkafka v1.1.0 release notes.
- SASL OAUTHBEARER support (by @rondagostino at StateStreet)
- In-memory SSL certificates (PEM, DER, PKCS#12) support (by @noahdav at Microsoft)
- Pluggable broker SSL certificate verification callback (by @noahdav at Microsoft)
- Use Windows Root/CA SSL Certificate Store (by @noahdav at Microsoft)
- ssl.endpoint.identification.algorithm=https (off by default) to validate that the broker hostname matches the certificate. Requires OpenSSL >= 1.0.2.
- Improved GSSAPI/Kerberos ticket refresh
Upgrade considerations
- Windows SSL users will no longer need to specify a CA certificate file/directory (ssl.ca.location); librdkafka will load the CA certs by default from the Windows Root Certificate Store.
- SSL peer (broker) certificate verification is now enabled by default (disable with enable.ssl.certificate.verification=false).
- %{broker.name} is no longer supported in sasl.kerberos.kinit.cmd since kinit refresh is no longer executed per broker, but per client instance.
SSL
New configuration properties:
- ssl.key.pem - client's private key as a string in PEM format
- ssl.certificate.pem - client's public key as a string in PEM format
- enable.ssl.certificate.verification - enable (default) / disable OpenSSL's builtin broker certificate verification
- ssl.endpoint.identification.algorithm - to verify the broker's hostname against its certificate (disabled by default)
- The private key data is now securely cleared from memory after last use.
Enhancements
- Bump message.timeout.ms max value from 15 minutes to 24 days (@sarkanyi)
Fixes
- SASL GSSAPI/Kerberos: Don't run kinit refresh for each broker, just per client instance.
- SASL GSSAPI/Kerberos: Changed sasl.kerberos.kinit.cmd to first attempt ticket refresh, then acquire.
- SASL: Proper locking on broker name acquisition.
- Consumer: max.poll.interval.ms now correctly handles blocking poll calls, allowing a longer poll timeout than the max poll interval.
v1.0.0
confluent-kafka-go v1.0.0
This release adds support for librdkafka v1.0.0, featuring the EOS Idempotent Producer, sparse connections, KIP-62 (max.poll.interval.ms) support, zstd compression, and more.
See the librdkafka v1.0.0 release notes for more information and upgrade considerations.
Go client enhancements
- Now requires librdkafka v1.0.0.
- A new IsFatal() function has been added to KafkaError to help the application differentiate between temporary and fatal errors. Fatal errors are currently only triggered by the idempotent producer.
- Added kafka.NewError() to make it possible to create error objects from user code / unit tests (Artem Yarulin)
Go client fixes
- Deprecate the use of default.topic.config. Topic configuration should now be set on the standard ConfigMap.
- Reject delivery.report.only.error=true on producer creation (#306)
- Avoid use of "Deprecated: " prefix (#268)
- PartitionEOF must now be explicitly enabled through enable.partition.eof
Make sure to check out the Idempotent Producer example.
v0.11.6
Admin API
This release adds support for the Topic Admin API (KIP-4):
- Create and delete topics
- Increase topic partition count
- Read and modify broker and topic configuration
- Requires librdkafka >= v0.11.6
results, err := a.CreateTopics(
	ctx,
	// Multiple topics can be created simultaneously
	// by providing additional TopicSpecification structs here.
	[]kafka.TopicSpecification{{
		Topic:             "mynewtopic",
		NumPartitions:     20,
		ReplicationFactor: 3}})
More examples.
Fixes and enhancements
v0.11.4
Announcements
- This release drops support for Golang < 1.7
- Requires librdkafka v0.11.4 or later
Message header support
Support for Kafka message headers has been added (requires broker version >= v0.11.0).
When producing messages, pass a []kafka.Header list:
err = p.Produce(&kafka.Message{
	TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
	Value:          []byte(value),
	Headers:        []kafka.Header{{Key: "myTestHeader", Value: []byte("header values are binary")}},
}, deliveryChan)
Message headers are available to the consumer as Message.Headers:
for {
	msg, err := c.ReadMessage(-1)
	if err != nil {
		fmt.Printf("%% Consumer error: %v\n", err)
		continue
	}
	fmt.Printf("%% Message on %s:\n%s\n", msg.TopicPartition, string(msg.Value))
	if msg.Headers != nil {
		fmt.Printf("%% Headers: %v\n", msg.Headers)
	}
}
Enhancements
- Message Headers support
- Close event channel when consumer is closed (#123 by @czchen)
- Added ReadMessage() convenience method to Consumer
- producer: Make events channel size configurable (@agis)
- Added Consumer.StoreOffsets() (#72)
- Added ConfigMap.Get() (#26)
- Added Pause() and Resume() APIs
- Added Consumer.Committed() API
- Added OffsetsForTimes() API to Consumer and Producer
Fixes
- Static builds should now work on both OSX and Linux (#137, #99)
- Update error constants from librdkafka
- Enable produce.offset.report by default (unless overridden)
- Move test helpers that need the testing pkg to a _test.go file (@gwilym)
- Build and run-time checking of librdkafka version (#88)
- Remove gotos (@jadekler)
- Fix Producer Value&Key slice referencing to avoid cgo pointer checking failures (#24)
- Fix Go 1.10 build errors (drop pkg-config --static ..)