diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index eab6034..4768ded 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -3,17 +3,32 @@
 If you have never made an open source contribution before, here's a quick guide:
-1. Find an issue that you are interested in addressing or a feature that you would like to add.
-1. Fork the repository associated with the issue to your local GitHub organization. This means that you will have a copy of the repository under your-GitHub-username/repository-name.
-1. Clone the repository to your local machine using `git clone https://github.com/github-username/repository-name.git`.
+1. Find an issue that you are interested in addressing or a feature that you
+   would like to add.
+1. Fork the repository associated with the issue to your local GitHub
+   organization. This means that you will have a copy of the repository under
+   your-GitHub-username/repository-name.
+1. Clone the repository to your local machine using
+   `git clone https://github.com/github-username/repository-name.git`.
 1. Create a new branch for your fix using `git checkout -b branch-name-here`.
-1. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
-1. Use `git add insert-paths-of-changed-files-here` to add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index.
-1. Use `git commit -m "Insert a short message of the changes made here"` to store the contents of the index with a descriptive message.
-1. Push the changes to the remote repository using `git push origin branch-name-here`.
+1. Make the appropriate changes for the issue you are trying to address or the
+   feature that you want to add.
+1. Use `git add insert-paths-of-changed-files-here` to add the file contents of
+   the changed files to the "snapshot" git uses to manage the state of the
+   project, also known as the index.
+1. Use `git commit -m "Insert a short message of the changes made here"` to
+   store the contents of the index with a descriptive message.
+1. Push the changes to the remote repository using
+   `git push origin branch-name-here`.
 1. Submit a pull request to the upstream repository.
-1. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title an issue like so *"Added more log outputting to resolve #4352"*.
-1. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
+1. Title the pull request with a short description of the changes made and the
+   issue or bug number associated with your change. For example, you can title
+   a pull request like so: _"Added more log outputting to resolve #4352"_.
+1. In the description of the pull request, explain the changes that you made,
+   any issues you think exist with the pull request you made, and any questions
+   you have for the maintainer. It's OK if your pull request is not perfect (no
+   pull request is); the reviewer will be able to help you fix any problems and
+   improve it!
 1. Wait for the pull request to be reviewed by a maintainer.
 1. Make changes to the pull request if the reviewing maintainer recommends them.
 1. Celebrate your success after your pull request is merged!
diff --git a/README.md b/README.md
index d23695f..29da0f8 100644
--- a/README.md
+++ b/README.md
@@ -67,8 +67,7 @@
 go get -v github.com/zaphiro-technologies/protobuf/go@v0.0.7
 
 ## Examples
 
-For your convenience, in the [examples](examples) folder we provide
-Go code to:
+For your convenience, in the [examples](examples) folder we provide Go code to:
 
 - Produce and consume measurements (uses RabbitMQ streams).
 - Produce and consume faults (uses RabbitMQ exchanges).
@@ -133,8 +132,8 @@
 need to set up the dependencies listed in [Requirements](#requirements).
 
 Protocol buffers are versioned (current version is v1), and should be developed
 following best practices, as implemented by [Buf](https://buf.build) and defined
-in [Protobuf programming
-guides](https://protobuf.dev/programming-guides/dos-donts/).
+in
+[Protobuf programming guides](https://protobuf.dev/programming-guides/dos-donts/).
 
 In particular, it is important - even more within the same version - to
 preserve compatibility, to avoid breaking services.
diff --git a/examples/go/vendor/github.com/google/uuid/CHANGELOG.md b/examples/go/vendor/github.com/google/uuid/CHANGELOG.md
index 7ec5ac7..1c451bc 100644
--- a/examples/go/vendor/github.com/google/uuid/CHANGELOG.md
+++ b/examples/go/vendor/github.com/google/uuid/CHANGELOG.md
@@ -2,40 +2,46 @@
 ## [1.6.0](https://github.com/google/uuid/compare/v1.5.0...v1.6.0) (2024-01-16)
 
-
 ### Features
 
-* add Max UUID constant ([#149](https://github.com/google/uuid/issues/149)) ([c58770e](https://github.com/google/uuid/commit/c58770eb495f55fe2ced6284f93c5158a62e53e3))
-
+- add Max UUID constant ([#149](https://github.com/google/uuid/issues/149))
+  ([c58770e](https://github.com/google/uuid/commit/c58770eb495f55fe2ced6284f93c5158a62e53e3))
 
 ### Bug Fixes
 
-* fix typo in version 7 uuid documentation ([#153](https://github.com/google/uuid/issues/153)) ([016b199](https://github.com/google/uuid/commit/016b199544692f745ffc8867b914129ecb47ef06))
-* Monotonicity in UUIDv7 ([#150](https://github.com/google/uuid/issues/150)) ([a2b2b32](https://github.com/google/uuid/commit/a2b2b32373ff0b1a312b7fdf6d38a977099698a6))
+- fix typo in version 7 uuid documentation
+  ([#153](https://github.com/google/uuid/issues/153))
+  ([016b199](https://github.com/google/uuid/commit/016b199544692f745ffc8867b914129ecb47ef06))
+- Monotonicity in UUIDv7 ([#150](https://github.com/google/uuid/issues/150))
+  ([a2b2b32](https://github.com/google/uuid/commit/a2b2b32373ff0b1a312b7fdf6d38a977099698a6))
 
 ## [1.5.0](https://github.com/google/uuid/compare/v1.4.0...v1.5.0) (2023-12-12)
 
-
 ### Features
 
-* Validate UUID without creating new UUID ([#141](https://github.com/google/uuid/issues/141)) ([9ee7366](https://github.com/google/uuid/commit/9ee7366e66c9ad96bab89139418a713dc584ae29))
+- Validate UUID without creating new UUID
+  ([#141](https://github.com/google/uuid/issues/141))
+  ([9ee7366](https://github.com/google/uuid/commit/9ee7366e66c9ad96bab89139418a713dc584ae29))
 
 ## [1.4.0](https://github.com/google/uuid/compare/v1.3.1...v1.4.0) (2023-10-26)
 
-
 ### Features
 
-* UUIDs slice type with Strings() convenience method ([#133](https://github.com/google/uuid/issues/133)) ([cd5fbbd](https://github.com/google/uuid/commit/cd5fbbdd02f3e3467ac18940e07e062be1f864b4))
+- UUIDs slice type with Strings() convenience method
+  ([#133](https://github.com/google/uuid/issues/133))
+  ([cd5fbbd](https://github.com/google/uuid/commit/cd5fbbdd02f3e3467ac18940e07e062be1f864b4))
 
 ### Fixes
 
-* Clarify that Parse's job is to parse but not necessarily validate strings. (Documents current behavior)
+- Clarify that Parse's job is to parse but not necessarily validate strings.
+  (Documents current behavior)
 
 ## [1.3.1](https://github.com/google/uuid/compare/v1.3.0...v1.3.1) (2023-08-18)
 
-
 ### Bug Fixes
 
-* Use .EqualFold() to parse urn prefixed UUIDs ([#118](https://github.com/google/uuid/issues/118)) ([574e687](https://github.com/google/uuid/commit/574e6874943741fb99d41764c705173ada5293f0))
+- Use .EqualFold() to parse urn prefixed UUIDs
+  ([#118](https://github.com/google/uuid/issues/118))
+  ([574e687](https://github.com/google/uuid/commit/574e6874943741fb99d41764c705173ada5293f0))
 
 ## Changelog
diff --git a/examples/go/vendor/github.com/google/uuid/CONTRIBUTING.md b/examples/go/vendor/github.com/google/uuid/CONTRIBUTING.md
index a502fdc..3202300 100644
--- a/examples/go/vendor/github.com/google/uuid/CONTRIBUTING.md
+++ b/examples/go/vendor/github.com/google/uuid/CONTRIBUTING.md
@@ -4,7 +4,8 @@
 We definitely welcome patches and contributions to this project!
 
 ### Tips
 
-Commits must be formatted according to the [Conventional Commits Specification](https://www.conventionalcommits.org).
+Commits must be formatted according to the
+[Conventional Commits Specification](https://www.conventionalcommits.org).
 
 Always try to include a test case! If it is not possible or not necessary,
 please explain why in the pull request description.
@@ -12,7 +13,8 @@
 ### Releasing
 
 Commits that would precipitate a SemVer change, as described in the Conventional
-Commits Specification, will trigger [`release-please`](https://github.com/google-github-actions/release-please-action)
+Commits Specification, will trigger
+[`release-please`](https://github.com/google-github-actions/release-please-action)
 to create a release candidate pull request. Once submitted, `release-please`
 will create a release.
diff --git a/examples/go/vendor/github.com/google/uuid/README.md b/examples/go/vendor/github.com/google/uuid/README.md
index 3e9a618..586995d 100644
--- a/examples/go/vendor/github.com/google/uuid/README.md
+++ b/examples/go/vendor/github.com/google/uuid/README.md
@@ -1,21 +1,24 @@
 # uuid
+
 The uuid package generates and inspects UUIDs based on
-[RFC 4122](https://datatracker.ietf.org/doc/html/rfc4122)
-and DCE 1.1: Authentication and Security Services.
+[RFC 4122](https://datatracker.ietf.org/doc/html/rfc4122) and DCE 1.1:
+Authentication and Security Services.
 
 This package is based on the github.com/pborman/uuid package (previously named
-code.google.com/p/go-uuid). It differs from these earlier packages in that
-a UUID is a 16 byte array rather than a byte slice. One loss due to this
-change is the ability to represent an invalid UUID (vs a NIL UUID).
+code.google.com/p/go-uuid). It differs from these earlier packages in that a
+UUID is a 16 byte array rather than a byte slice. One loss due to this change is
+the ability to represent an invalid UUID (vs a NIL UUID).
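A minimal usage sketch (not part of the vendored README; it assumes only the package's documented `uuid.New` and `uuid.Parse` API):

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

func main() {
	// Generate a new random (version 4) UUID.
	id := uuid.New()
	fmt.Println("generated:", id)

	// Parse an existing textual UUID. Per the changelog above, Parse's
	// job is to parse, not necessarily to fully validate.
	parsed, err := uuid.Parse("f47ac10b-58cc-4372-a567-0e02b2c3d479")
	if err != nil {
		panic(err)
	}
	fmt.Println("version:", parsed.Version())
}
```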
 ###### Install
+
 ```sh
 go get github.com/google/uuid
 ```
 
-###### Documentation
+###### Documentation
+
 [![Go Reference](https://pkg.go.dev/badge/github.com/google/uuid.svg)](https://pkg.go.dev/github.com/google/uuid)
 
 Full `go doc` style documentation for the package can be viewed online without
-installing this package by using the GoDoc site here:
+installing this package by using the GoDoc site here:
 http://pkg.go.dev/github.com/google/uuid
diff --git a/examples/go/vendor/github.com/klauspost/compress/README.md b/examples/go/vendor/github.com/klauspost/compress/README.md
index 05c7359..94f5a7a 100644
--- a/examples/go/vendor/github.com/klauspost/compress/README.md
+++ b/examples/go/vendor/github.com/klauspost/compress/README.md
@@ -1,700 +1,1167 @@
-# compress
-
-This package provides various compression algorithms.
-
-* [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd) compression and decompression in pure Go.
-* [S2](https://github.com/klauspost/compress/tree/master/s2#s2-compression) is a high performance replacement for Snappy.
-* Optimized [deflate](https://godoc.org/github.com/klauspost/compress/flate) packages which can be used as a drop-in replacement for [gzip](https://godoc.org/github.com/klauspost/compress/gzip), [zip](https://godoc.org/github.com/klauspost/compress/zip) and [zlib](https://godoc.org/github.com/klauspost/compress/zlib).
-* [snappy](https://github.com/klauspost/compress/tree/master/snappy) is a drop-in replacement for `github.com/golang/snappy` offering better compression and concurrent streams.
-* [huff0](https://github.com/klauspost/compress/tree/master/huff0) and [FSE](https://github.com/klauspost/compress/tree/master/fse) implementations for raw entropy encoding.
-* [gzhttp](https://github.com/klauspost/compress/tree/master/gzhttp) provides client and server wrappers for handling gzipped requests efficiently.
-* [pgzip](https://github.com/klauspost/pgzip) is a separate package that provides a very fast parallel gzip implementation.
- -[![Go Reference](https://pkg.go.dev/badge/klauspost/compress.svg)](https://pkg.go.dev/github.com/klauspost/compress?tab=subdirectories) -[![Go](https://github.com/klauspost/compress/actions/workflows/go.yml/badge.svg)](https://github.com/klauspost/compress/actions/workflows/go.yml) -[![Sourcegraph Badge](https://sourcegraph.com/github.com/klauspost/compress/-/badge.svg)](https://sourcegraph.com/github.com/klauspost/compress?badge) - -# changelog - -* Feb 5th, 2024 - [1.17.6](https://github.com/klauspost/compress/releases/tag/v1.17.6) - * zstd: Fix incorrect repeat coding in best mode https://github.com/klauspost/compress/pull/923 - * s2: Fix DecodeConcurrent deadlock on errors https://github.com/klauspost/compress/pull/925 - -* Jan 26th, 2024 - [v1.17.5](https://github.com/klauspost/compress/releases/tag/v1.17.5) - * flate: Fix reset with dictionary on custom window encodes https://github.com/klauspost/compress/pull/912 - * zstd: Add Frame header encoding and stripping https://github.com/klauspost/compress/pull/908 - * zstd: Limit better/best default window to 8MB https://github.com/klauspost/compress/pull/913 - * zstd: Speed improvements by @greatroar in https://github.com/klauspost/compress/pull/896 https://github.com/klauspost/compress/pull/910 - * s2: Fix callbacks for skippable blocks and disallow 0xfe (Padding) by @Jille in https://github.com/klauspost/compress/pull/916 https://github.com/klauspost/compress/pull/917 -https://github.com/klauspost/compress/pull/919 https://github.com/klauspost/compress/pull/918 - -* Dec 1st, 2023 - [v1.17.4](https://github.com/klauspost/compress/releases/tag/v1.17.4) - * huff0: Speed up symbol counting by @greatroar in https://github.com/klauspost/compress/pull/887 - * huff0: Remove byteReader by @greatroar in https://github.com/klauspost/compress/pull/886 - * gzhttp: Allow overriding decompression on transport https://github.com/klauspost/compress/pull/892 - * gzhttp: Clamp compression level https://github.com/klauspost/compress/pull/890 - * gzip: Error out if reserved bits are set https://github.com/klauspost/compress/pull/891 - -* Nov 15th, 2023 - [v1.17.3](https://github.com/klauspost/compress/releases/tag/v1.17.3) - * fse: Fix max header size https://github.com/klauspost/compress/pull/881 - * zstd: Improve better/best compression https://github.com/klauspost/compress/pull/877 - * gzhttp: Fix missing content type on Close https://github.com/klauspost/compress/pull/883 - -* Oct 22nd, 2023 - [v1.17.2](https://github.com/klauspost/compress/releases/tag/v1.17.2) - * zstd: Fix rare *CORRUPTION* output in "best" mode. 
See https://github.com/klauspost/compress/pull/876 - -* Oct 14th, 2023 - [v1.17.1](https://github.com/klauspost/compress/releases/tag/v1.17.1) - * s2: Fix S2 "best" dictionary wrong encoding by @klauspost in https://github.com/klauspost/compress/pull/871 - * flate: Reduce allocations in decompressor and minor code improvements by @fakefloordiv in https://github.com/klauspost/compress/pull/869 - * s2: Fix EstimateBlockSize on 6&7 length input by @klauspost in https://github.com/klauspost/compress/pull/867 - -* Sept 19th, 2023 - [v1.17.0](https://github.com/klauspost/compress/releases/tag/v1.17.0) - * Add experimental dictionary builder https://github.com/klauspost/compress/pull/853 - * Add xerial snappy read/writer https://github.com/klauspost/compress/pull/838 - * flate: Add limited window compression https://github.com/klauspost/compress/pull/843 - * s2: Do 2 overlapping match checks https://github.com/klauspost/compress/pull/839 - * flate: Add amd64 assembly matchlen https://github.com/klauspost/compress/pull/837 - * gzip: Copy bufio.Reader on Reset by @thatguystone in https://github.com/klauspost/compress/pull/860 - -
-
-  See changes to v1.16.x
-
-* July 1st, 2023 - [v1.16.7](https://github.com/klauspost/compress/releases/tag/v1.16.7)
-  * zstd: Fix default level first dictionary encode https://github.com/klauspost/compress/pull/829
-  * s2: add GetBufferCapacity() method by @GiedriusS in https://github.com/klauspost/compress/pull/832
-
-* June 13, 2023 - [v1.16.6](https://github.com/klauspost/compress/releases/tag/v1.16.6)
-  * zstd: correctly ignore WithEncoderPadding(1) by @ianlancetaylor in https://github.com/klauspost/compress/pull/806
-  * zstd: Add amd64 match length assembly https://github.com/klauspost/compress/pull/824
-  * gzhttp: Handle informational headers by @rtribotte in https://github.com/klauspost/compress/pull/815
-  * s2: Improve Better compression slightly https://github.com/klauspost/compress/pull/663
-
-* Apr 16, 2023 - [v1.16.5](https://github.com/klauspost/compress/releases/tag/v1.16.5)
-  * zstd: readByte needs to use io.ReadFull by @jnoxon in https://github.com/klauspost/compress/pull/802
-  * gzip: Fix WriterTo after initial read https://github.com/klauspost/compress/pull/804
-
-* Apr 5, 2023 - [v1.16.4](https://github.com/klauspost/compress/releases/tag/v1.16.4)
-  * zstd: Improve zstd best efficiency by @greatroar and @klauspost in https://github.com/klauspost/compress/pull/784
-  * zstd: Respect WithAllLitEntropyCompression https://github.com/klauspost/compress/pull/792
-  * zstd: Fix amd64 not always detecting corrupt data https://github.com/klauspost/compress/pull/785
-  * zstd: Various minor improvements by @greatroar in https://github.com/klauspost/compress/pull/788 https://github.com/klauspost/compress/pull/794 https://github.com/klauspost/compress/pull/795
-  * s2: Fix huge block overflow https://github.com/klauspost/compress/pull/779
-  * s2: Allow CustomEncoder fallback https://github.com/klauspost/compress/pull/780
-  * gzhttp: Support ResponseWriter Unwrap() in gzhttp handler by @jgimenez in https://github.com/klauspost/compress/pull/799
-
-* Mar 13, 2023 - [v1.16.1](https://github.com/klauspost/compress/releases/tag/v1.16.1)
-  * zstd: Speed up + improve best encoder by @greatroar in https://github.com/klauspost/compress/pull/776
-  * gzhttp: Add optional [BREACH mitigation](https://github.com/klauspost/compress/tree/master/gzhttp#breach-mitigation). https://github.com/klauspost/compress/pull/762 https://github.com/klauspost/compress/pull/768 https://github.com/klauspost/compress/pull/769 https://github.com/klauspost/compress/pull/770 https://github.com/klauspost/compress/pull/767
-  * s2: Add Intel LZ4s converter https://github.com/klauspost/compress/pull/766
-  * zstd: Minor bug fixes https://github.com/klauspost/compress/pull/771 https://github.com/klauspost/compress/pull/772 https://github.com/klauspost/compress/pull/773
-  * huff0: Speed up compress1xDo by @greatroar in https://github.com/klauspost/compress/pull/774
-
-* Feb 26, 2023 - [v1.16.0](https://github.com/klauspost/compress/releases/tag/v1.16.0)
-  * s2: Add [Dictionary](https://github.com/klauspost/compress/tree/master/s2#dictionaries) support. https://github.com/klauspost/compress/pull/685
-  * s2: Add Compression Size Estimate. https://github.com/klauspost/compress/pull/752
-  * s2: Add support for custom stream encoder. https://github.com/klauspost/compress/pull/755
-  * s2: Add LZ4 block converter. https://github.com/klauspost/compress/pull/748
-  * s2: Support io.ReaderAt in ReadSeeker. https://github.com/klauspost/compress/pull/747
-  * s2c/s2sx: Use concurrent decoding.
https://github.com/klauspost/compress/pull/746 -
- -
-
-  See changes to v1.15.x
-
-* Jan 21st, 2023 (v1.15.15)
-  * deflate: Improve level 7-9 by @klauspost in https://github.com/klauspost/compress/pull/739
-  * zstd: Add delta encoding support by @greatroar in https://github.com/klauspost/compress/pull/728
-  * zstd: Various speed improvements by @greatroar https://github.com/klauspost/compress/pull/741 https://github.com/klauspost/compress/pull/734 https://github.com/klauspost/compress/pull/736 https://github.com/klauspost/compress/pull/744 https://github.com/klauspost/compress/pull/743 https://github.com/klauspost/compress/pull/745
-  * gzhttp: Add SuffixETag() and DropETag() options to prevent ETag collisions on compressed responses by @willbicks in https://github.com/klauspost/compress/pull/740
-
-* Jan 3rd, 2023 (v1.15.14)
-
-  * flate: Improve speed in big stateless blocks https://github.com/klauspost/compress/pull/718
-  * zstd: Minor speed tweaks by @greatroar in https://github.com/klauspost/compress/pull/716 https://github.com/klauspost/compress/pull/720
-  * export NoGzipResponseWriter for custom ResponseWriter wrappers by @harshavardhana in https://github.com/klauspost/compress/pull/722
-  * s2: Add example for indexing and existing stream https://github.com/klauspost/compress/pull/723
-
-* Dec 11, 2022 (v1.15.13)
-  * zstd: Add [MaxEncodedSize](https://pkg.go.dev/github.com/klauspost/compress@v1.15.13/zstd#Encoder.MaxEncodedSize) to encoder https://github.com/klauspost/compress/pull/691
-  * zstd: Various tweaks and improvements https://github.com/klauspost/compress/pull/693 https://github.com/klauspost/compress/pull/695 https://github.com/klauspost/compress/pull/696 https://github.com/klauspost/compress/pull/701 https://github.com/klauspost/compress/pull/702 https://github.com/klauspost/compress/pull/703 https://github.com/klauspost/compress/pull/704 https://github.com/klauspost/compress/pull/705 https://github.com/klauspost/compress/pull/706 https://github.com/klauspost/compress/pull/707 https://github.com/klauspost/compress/pull/708
-
-* Oct 26, 2022 (v1.15.12)
-
-  * zstd: Tweak decoder allocs. https://github.com/klauspost/compress/pull/680
-  * gzhttp: Always delete `HeaderNoCompression` https://github.com/klauspost/compress/pull/683
-
-* Sept 26, 2022 (v1.15.11)
-
-  * flate: Improve level 1-3 compression https://github.com/klauspost/compress/pull/678
-  * zstd: Improve "best" compression by @nightwolfz in https://github.com/klauspost/compress/pull/677
-  * zstd: Fix+reduce decompression allocations https://github.com/klauspost/compress/pull/668
-  * zstd: Fix non-effective noescape tag https://github.com/klauspost/compress/pull/667
-
-* Sept 16, 2022 (v1.15.10)
-
-  * zstd: Add [WithDecodeAllCapLimit](https://pkg.go.dev/github.com/klauspost/compress@v1.15.10/zstd#WithDecodeAllCapLimit) https://github.com/klauspost/compress/pull/649
-  * Add Go 1.19 - deprecate Go 1.16 https://github.com/klauspost/compress/pull/651
-  * flate: Improve level 5+6 compression https://github.com/klauspost/compress/pull/656
-  * zstd: Improve "better" compression https://github.com/klauspost/compress/pull/657
-  * s2: Improve "best" compression https://github.com/klauspost/compress/pull/658
-  * s2: Improve "better" compression.
https://github.com/klauspost/compress/pull/635 - * s2: Slightly faster non-assembly decompression https://github.com/klauspost/compress/pull/646 - * Use arrays for constant size copies https://github.com/klauspost/compress/pull/659 - -* July 21, 2022 (v1.15.9) - - * zstd: Fix decoder crash on amd64 (no BMI) on invalid input https://github.com/klauspost/compress/pull/645 - * zstd: Disable decoder extended memory copies (amd64) due to possible crashes https://github.com/klauspost/compress/pull/644 - * zstd: Allow single segments up to "max decoded size" by @klauspost in https://github.com/klauspost/compress/pull/643 - -* July 13, 2022 (v1.15.8) - - * gzip: fix stack exhaustion bug in Reader.Read https://github.com/klauspost/compress/pull/641 - * s2: Add Index header trim/restore https://github.com/klauspost/compress/pull/638 - * zstd: Optimize seqdeq amd64 asm by @greatroar in https://github.com/klauspost/compress/pull/636 - * zstd: Improve decoder memcopy https://github.com/klauspost/compress/pull/637 - * huff0: Pass a single bitReader pointer to asm by @greatroar in https://github.com/klauspost/compress/pull/634 - * zstd: Branchless getBits for amd64 w/o BMI2 by @greatroar in https://github.com/klauspost/compress/pull/640 - * gzhttp: Remove header before writing https://github.com/klauspost/compress/pull/639 - -* June 29, 2022 (v1.15.7) - - * s2: Fix absolute forward seeks https://github.com/klauspost/compress/pull/633 - * zip: Merge upstream https://github.com/klauspost/compress/pull/631 - * zip: Re-add zip64 fix https://github.com/klauspost/compress/pull/624 - * zstd: translate fseDecoder.buildDtable into asm by @WojciechMula in https://github.com/klauspost/compress/pull/598 - * flate: Faster histograms https://github.com/klauspost/compress/pull/620 - * deflate: Use compound hcode https://github.com/klauspost/compress/pull/622 - -* June 3, 2022 (v1.15.6) - * s2: Improve coding for long, close matches https://github.com/klauspost/compress/pull/613 - * s2c: Add Snappy/S2 stream recompression https://github.com/klauspost/compress/pull/611 - * zstd: Always use configured block size https://github.com/klauspost/compress/pull/605 - * zstd: Fix incorrect hash table placement for dict encoding in default https://github.com/klauspost/compress/pull/606 - * zstd: Apply default config to ZipDecompressor without options https://github.com/klauspost/compress/pull/608 - * gzhttp: Exclude more common archive formats https://github.com/klauspost/compress/pull/612 - * s2: Add ReaderIgnoreCRC https://github.com/klauspost/compress/pull/609 - * s2: Remove sanity load on index creation https://github.com/klauspost/compress/pull/607 - * snappy: Use dedicated function for scoring https://github.com/klauspost/compress/pull/614 - * s2c+s2d: Use official snappy framed extension https://github.com/klauspost/compress/pull/610 - -* May 25, 2022 (v1.15.5) - * s2: Add concurrent stream decompression https://github.com/klauspost/compress/pull/602 - * s2: Fix final emit oob read crash on amd64 https://github.com/klauspost/compress/pull/601 - * huff0: asm implementation of Decompress1X by @WojciechMula https://github.com/klauspost/compress/pull/596 - * zstd: Use 1 less goroutine for stream decoding https://github.com/klauspost/compress/pull/588 - * zstd: Copy literal in 16 byte blocks when possible https://github.com/klauspost/compress/pull/592 - * zstd: Speed up when WithDecoderLowmem(false) https://github.com/klauspost/compress/pull/599 - * zstd: faster next state update in BMI2 version of decode by @WojciechMula in 
https://github.com/klauspost/compress/pull/593
-  * huff0: Do not check max size when reading table. https://github.com/klauspost/compress/pull/586
-  * flate: Inplace hashing for level 7-9 by @klauspost in https://github.com/klauspost/compress/pull/590
-
-
-* May 11, 2022 (v1.15.4)
-  * huff0: decompress directly into output by @WojciechMula in [#577](https://github.com/klauspost/compress/pull/577)
-  * inflate: Keep dict on stack [#581](https://github.com/klauspost/compress/pull/581)
-  * zstd: Faster decoding memcopy in asm [#583](https://github.com/klauspost/compress/pull/583)
-  * zstd: Fix ignored crc [#580](https://github.com/klauspost/compress/pull/580)
-
-* May 5, 2022 (v1.15.3)
-  * zstd: Allow to ignore checksum checking by @WojciechMula [#572](https://github.com/klauspost/compress/pull/572)
-  * s2: Fix incorrect seek for io.SeekEnd in [#575](https://github.com/klauspost/compress/pull/575)
-
-* Apr 26, 2022 (v1.15.2)
-  * zstd: Add x86-64 assembly for decompression on streams and blocks. Contributed by [@WojciechMula](https://github.com/WojciechMula). Typically 2x faster. [#528](https://github.com/klauspost/compress/pull/528) [#531](https://github.com/klauspost/compress/pull/531) [#545](https://github.com/klauspost/compress/pull/545) [#537](https://github.com/klauspost/compress/pull/537)
-  * zstd: Add options to ZipDecompressor and fixes [#539](https://github.com/klauspost/compress/pull/539)
-  * s2: Use sorted search for index [#555](https://github.com/klauspost/compress/pull/555)
-  * Minimum version is Go 1.16, added CI test on 1.18.
-
-* Mar 11, 2022 (v1.15.1)
-  * huff0: Add x86 assembly of Decode4X by @WojciechMula in [#512](https://github.com/klauspost/compress/pull/512)
-  * zstd: Reuse zip decoders in [#514](https://github.com/klauspost/compress/pull/514)
-  * zstd: Detect extra block data and report as corrupted in [#520](https://github.com/klauspost/compress/pull/520)
-  * zstd: Handle zero sized frame content size stricter in [#521](https://github.com/klauspost/compress/pull/521)
-  * zstd: Add stricter block size checks in [#523](https://github.com/klauspost/compress/pull/523)
-
-* Mar 3, 2022 (v1.15.0)
-  * zstd: Refactor decoder by @klauspost in [#498](https://github.com/klauspost/compress/pull/498)
-  * zstd: Add stream encoding without goroutines by @klauspost in [#505](https://github.com/klauspost/compress/pull/505)
-  * huff0: Prevent single blocks exceeding 16 bits by @klauspost in [#507](https://github.com/klauspost/compress/pull/507)
-  * flate: Inline literal emission by @klauspost in [#509](https://github.com/klauspost/compress/pull/509)
-  * gzhttp: Add zstd to transport by @klauspost in [#400](https://github.com/klauspost/compress/pull/400)
-  * gzhttp: Make content-type optional by @klauspost in [#510](https://github.com/klauspost/compress/pull/510)
-
-Both compression and decompression now support "synchronous" stream operations. This means that whenever "concurrency" is set to 1, they will operate without spawning goroutines.
-
-Stream decompression is now faster when used asynchronously, since the goroutine allocation much more effectively splits the workload. On typical streams this will use 2 cores fully for decompression. When a stream has finished decoding no goroutines will be left over, so decoders can now safely be pooled and still be garbage collected.
-
-While the release has been extensively tested, testing is recommended when upgrading.
-
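The synchronous mode described above can be illustrated with a minimal sketch (not part of the original changelog; it assumes the package's documented `WithEncoderConcurrency`/`WithDecoderConcurrency` options):

```go
package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/klauspost/compress/zstd"
)

func main() {
	var buf bytes.Buffer

	// Concurrency 1 selects the synchronous path: encoding happens
	// in the calling goroutine, with no extra goroutines spawned.
	enc, err := zstd.NewWriter(&buf, zstd.WithEncoderConcurrency(1))
	if err != nil {
		panic(err)
	}
	if _, err := enc.Write([]byte("hello synchronous zstd")); err != nil {
		panic(err)
	}
	if err := enc.Close(); err != nil {
		panic(err)
	}

	// The same option exists for decoding.
	dec, err := zstd.NewReader(&buf, zstd.WithDecoderConcurrency(1))
	if err != nil {
		panic(err)
	}
	defer dec.Close()

	out, err := io.ReadAll(dec)
	if err != nil {
		panic(err)
	}
	fmt.Printf("decoded: %s\n", out)
}
```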
- -
- See changes to v1.14.x - -* Feb 22, 2022 (v1.14.4) - * flate: Fix rare huffman only (-2) corruption. [#503](https://github.com/klauspost/compress/pull/503) - * zip: Update deprecated CreateHeaderRaw to correctly call CreateRaw by @saracen in [#502](https://github.com/klauspost/compress/pull/502) - * zip: don't read data descriptor early by @saracen in [#501](https://github.com/klauspost/compress/pull/501) #501 - * huff0: Use static decompression buffer up to 30% faster by @klauspost in [#499](https://github.com/klauspost/compress/pull/499) [#500](https://github.com/klauspost/compress/pull/500) - -* Feb 17, 2022 (v1.14.3) - * flate: Improve fastest levels compression speed ~10% more throughput. [#482](https://github.com/klauspost/compress/pull/482) [#489](https://github.com/klauspost/compress/pull/489) [#490](https://github.com/klauspost/compress/pull/490) [#491](https://github.com/klauspost/compress/pull/491) [#494](https://github.com/klauspost/compress/pull/494) [#478](https://github.com/klauspost/compress/pull/478) - * flate: Faster decompression speed, ~5-10%. [#483](https://github.com/klauspost/compress/pull/483) - * s2: Faster compression with Go v1.18 and amd64 microarch level 3+. [#484](https://github.com/klauspost/compress/pull/484) [#486](https://github.com/klauspost/compress/pull/486) - -* Jan 25, 2022 (v1.14.2) - * zstd: improve header decoder by @dsnet [#476](https://github.com/klauspost/compress/pull/476) - * zstd: Add bigger default blocks [#469](https://github.com/klauspost/compress/pull/469) - * zstd: Remove unused decompression buffer [#470](https://github.com/klauspost/compress/pull/470) - * zstd: Fix logically dead code by @ningmingxiao [#472](https://github.com/klauspost/compress/pull/472) - * flate: Improve level 7-9 [#471](https://github.com/klauspost/compress/pull/471) [#473](https://github.com/klauspost/compress/pull/473) - * zstd: Add noasm tag for xxhash [#475](https://github.com/klauspost/compress/pull/475) - -* Jan 11, 2022 (v1.14.1) - * s2: Add stream index in [#462](https://github.com/klauspost/compress/pull/462) - * flate: Speed and efficiency improvements in [#439](https://github.com/klauspost/compress/pull/439) [#461](https://github.com/klauspost/compress/pull/461) [#455](https://github.com/klauspost/compress/pull/455) [#452](https://github.com/klauspost/compress/pull/452) [#458](https://github.com/klauspost/compress/pull/458) - * zstd: Performance improvement in [#420]( https://github.com/klauspost/compress/pull/420) [#456](https://github.com/klauspost/compress/pull/456) [#437](https://github.com/klauspost/compress/pull/437) [#467](https://github.com/klauspost/compress/pull/467) [#468](https://github.com/klauspost/compress/pull/468) - * zstd: add arm64 xxhash assembly in [#464](https://github.com/klauspost/compress/pull/464) - * Add garbled for binaries for s2 in [#445](https://github.com/klauspost/compress/pull/445) -
- -
- See changes to v1.13.x - -* Aug 30, 2021 (v1.13.5) - * gz/zlib/flate: Alias stdlib errors [#425](https://github.com/klauspost/compress/pull/425) - * s2: Add block support to commandline tools [#413](https://github.com/klauspost/compress/pull/413) - * zstd: pooledZipWriter should return Writers to the same pool [#426](https://github.com/klauspost/compress/pull/426) - * Removed golang/snappy as external dependency for tests [#421](https://github.com/klauspost/compress/pull/421) - -* Aug 12, 2021 (v1.13.4) - * Add [snappy replacement package](https://github.com/klauspost/compress/tree/master/snappy). - * zstd: Fix incorrect encoding in "best" mode [#415](https://github.com/klauspost/compress/pull/415) - -* Aug 3, 2021 (v1.13.3) - * zstd: Improve Best compression [#404](https://github.com/klauspost/compress/pull/404) - * zstd: Fix WriteTo error forwarding [#411](https://github.com/klauspost/compress/pull/411) - * gzhttp: Return http.HandlerFunc instead of http.Handler. Unlikely breaking change. [#406](https://github.com/klauspost/compress/pull/406) - * s2sx: Fix max size error [#399](https://github.com/klauspost/compress/pull/399) - * zstd: Add optional stream content size on reset [#401](https://github.com/klauspost/compress/pull/401) - * zstd: use SpeedBestCompression for level >= 10 [#410](https://github.com/klauspost/compress/pull/410) - -* Jun 14, 2021 (v1.13.1) - * s2: Add full Snappy output support [#396](https://github.com/klauspost/compress/pull/396) - * zstd: Add configurable [Decoder window](https://pkg.go.dev/github.com/klauspost/compress/zstd#WithDecoderMaxWindow) size [#394](https://github.com/klauspost/compress/pull/394) - * gzhttp: Add header to skip compression [#389](https://github.com/klauspost/compress/pull/389) - * s2: Improve speed with bigger output margin [#395](https://github.com/klauspost/compress/pull/395) - -* Jun 3, 2021 (v1.13.0) - * Added [gzhttp](https://github.com/klauspost/compress/tree/master/gzhttp#gzip-handler) which allows wrapping HTTP servers and clients with GZIP compressors. - * zstd: Detect short invalid signatures [#382](https://github.com/klauspost/compress/pull/382) - * zstd: Spawn decoder goroutine only if needed. [#380](https://github.com/klauspost/compress/pull/380) -
- - -
- See changes to v1.12.x - -* May 25, 2021 (v1.12.3) - * deflate: Better/faster Huffman encoding [#374](https://github.com/klauspost/compress/pull/374) - * deflate: Allocate less for history. [#375](https://github.com/klauspost/compress/pull/375) - * zstd: Forward read errors [#373](https://github.com/klauspost/compress/pull/373) - -* Apr 27, 2021 (v1.12.2) - * zstd: Improve better/best compression [#360](https://github.com/klauspost/compress/pull/360) [#364](https://github.com/klauspost/compress/pull/364) [#365](https://github.com/klauspost/compress/pull/365) - * zstd: Add helpers to compress/decompress zstd inside zip files [#363](https://github.com/klauspost/compress/pull/363) - * deflate: Improve level 5+6 compression [#367](https://github.com/klauspost/compress/pull/367) - * s2: Improve better/best compression [#358](https://github.com/klauspost/compress/pull/358) [#359](https://github.com/klauspost/compress/pull/358) - * s2: Load after checking src limit on amd64. [#362](https://github.com/klauspost/compress/pull/362) - * s2sx: Limit max executable size [#368](https://github.com/klauspost/compress/pull/368) - -* Apr 14, 2021 (v1.12.1) - * snappy package removed. Upstream added as dependency. - * s2: Better compression in "best" mode [#353](https://github.com/klauspost/compress/pull/353) - * s2sx: Add stdin input and detect pre-compressed from signature [#352](https://github.com/klauspost/compress/pull/352) - * s2c/s2d: Add http as possible input [#348](https://github.com/klauspost/compress/pull/348) - * s2c/s2d/s2sx: Always truncate when writing files [#352](https://github.com/klauspost/compress/pull/352) - * zstd: Reduce memory usage further when using [WithLowerEncoderMem](https://pkg.go.dev/github.com/klauspost/compress/zstd#WithLowerEncoderMem) [#346](https://github.com/klauspost/compress/pull/346) - * s2: Fix potential problem with amd64 assembly and profilers [#349](https://github.com/klauspost/compress/pull/349) -
- -
-
-  See changes to v1.11.x
-
-* Mar 26, 2021 (v1.11.13)
-  * zstd: Big speedup on small dictionary encodes [#344](https://github.com/klauspost/compress/pull/344) [#345](https://github.com/klauspost/compress/pull/345)
-  * zstd: Add [WithLowerEncoderMem](https://pkg.go.dev/github.com/klauspost/compress/zstd#WithLowerEncoderMem) encoder option [#336](https://github.com/klauspost/compress/pull/336)
-  * deflate: Improve entropy compression [#338](https://github.com/klauspost/compress/pull/338)
-  * s2: Clean up and minor performance improvement in best [#341](https://github.com/klauspost/compress/pull/341)
-
-* Mar 5, 2021 (v1.11.12)
-  * s2: Add `s2sx` binary that creates [self extracting archives](https://github.com/klauspost/compress/tree/master/s2#s2sx-self-extracting-archives).
-  * s2: Speed up decompression on non-assembly platforms [#328](https://github.com/klauspost/compress/pull/328)
-
-* Mar 1, 2021 (v1.11.9)
-  * s2: Add ARM64 decompression assembly. Around 2x output speed. [#324](https://github.com/klauspost/compress/pull/324)
-  * s2: Improve "better" speed and efficiency. [#325](https://github.com/klauspost/compress/pull/325)
-  * s2: Fix binaries.
-
-* Feb 25, 2021 (v1.11.8)
-  * s2: Fixed occasional out-of-bounds write on amd64. Upgrade recommended.
-  * s2: Add AMD64 assembly for better mode. 25-50% faster. [#315](https://github.com/klauspost/compress/pull/315)
-  * s2: Less upfront decoder allocation. [#322](https://github.com/klauspost/compress/pull/322)
-  * zstd: Faster "compression" of incompressible data. [#314](https://github.com/klauspost/compress/pull/314)
-  * zip: Fix zip64 headers. [#313](https://github.com/klauspost/compress/pull/313)
-
-* Jan 14, 2021 (v1.11.7)
-  * Use Bytes() interface to get bytes across packages. [#309](https://github.com/klauspost/compress/pull/309)
-  * s2: Add 'best' compression option. [#310](https://github.com/klauspost/compress/pull/310)
-  * s2: Add ReaderMaxBlockSize, changes `s2.NewReader` signature to include varargs. [#311](https://github.com/klauspost/compress/pull/311)
-  * s2: Fix crash on small better buffers. [#308](https://github.com/klauspost/compress/pull/308)
-  * s2: Clean up decoder. [#312](https://github.com/klauspost/compress/pull/312)
-
-* Jan 7, 2021 (v1.11.6)
-  * zstd: Make decoder allocations smaller [#306](https://github.com/klauspost/compress/pull/306)
-  * zstd: Free Decoder resources when Reset is called with a nil io.Reader [#305](https://github.com/klauspost/compress/pull/305)
-
-* Dec 20, 2020 (v1.11.4)
-  * zstd: Add Best compression mode [#304](https://github.com/klauspost/compress/pull/304)
-  * Add header decoder [#299](https://github.com/klauspost/compress/pull/299)
-  * s2: Add uncompressed stream option [#297](https://github.com/klauspost/compress/pull/297)
-  * Simplify/speed up small blocks with known max size.
[#300](https://github.com/klauspost/compress/pull/300) - * zstd: Always reset literal dict encoder [#303](https://github.com/klauspost/compress/pull/303) - -* Nov 15, 2020 (v1.11.3) - * inflate: 10-15% faster decompression [#293](https://github.com/klauspost/compress/pull/293) - * zstd: Tweak DecodeAll default allocation [#295](https://github.com/klauspost/compress/pull/295) - -* Oct 11, 2020 (v1.11.2) - * s2: Fix out of bounds read in "better" block compression [#291](https://github.com/klauspost/compress/pull/291) - -* Oct 1, 2020 (v1.11.1) - * zstd: Set allLitEntropy true in default configuration [#286](https://github.com/klauspost/compress/pull/286) - -* Sept 8, 2020 (v1.11.0) - * zstd: Add experimental compression [dictionaries](https://github.com/klauspost/compress/tree/master/zstd#dictionaries) [#281](https://github.com/klauspost/compress/pull/281) - * zstd: Fix mixed Write and ReadFrom calls [#282](https://github.com/klauspost/compress/pull/282) - * inflate/gz: Limit variable shifts, ~5% faster decompression [#274](https://github.com/klauspost/compress/pull/274) -
- -
- See changes to v1.10.x - -* July 8, 2020 (v1.10.11) - * zstd: Fix extra block when compressing with ReadFrom. [#278](https://github.com/klauspost/compress/pull/278) - * huff0: Also populate compression table when reading decoding table. [#275](https://github.com/klauspost/compress/pull/275) - -* June 23, 2020 (v1.10.10) - * zstd: Skip entropy compression in fastest mode when no matches. [#270](https://github.com/klauspost/compress/pull/270) - -* June 16, 2020 (v1.10.9): - * zstd: API change for specifying dictionaries. See [#268](https://github.com/klauspost/compress/pull/268) - * zip: update CreateHeaderRaw to handle zip64 fields. [#266](https://github.com/klauspost/compress/pull/266) - * Fuzzit tests removed. The service has been purchased and is no longer available. - -* June 5, 2020 (v1.10.8): - * 1.15x faster zstd block decompression. [#265](https://github.com/klauspost/compress/pull/265) - -* June 1, 2020 (v1.10.7): - * Added zstd decompression [dictionary support](https://github.com/klauspost/compress/tree/master/zstd#dictionaries) - * Increase zstd decompression speed up to 1.19x. [#259](https://github.com/klauspost/compress/pull/259) - * Remove internal reset call in zstd compression and reduce allocations. [#263](https://github.com/klauspost/compress/pull/263) - -* May 21, 2020: (v1.10.6) - * zstd: Reduce allocations while decoding. [#258](https://github.com/klauspost/compress/pull/258), [#252](https://github.com/klauspost/compress/pull/252) - * zstd: Stricter decompression checks. - -* April 12, 2020: (v1.10.5) - * s2-commands: Flush output when receiving SIGINT. [#239](https://github.com/klauspost/compress/pull/239) - -* Apr 8, 2020: (v1.10.4) - * zstd: Minor/special case optimizations. [#251](https://github.com/klauspost/compress/pull/251), [#250](https://github.com/klauspost/compress/pull/250), [#249](https://github.com/klauspost/compress/pull/249), [#247](https://github.com/klauspost/compress/pull/247) -* Mar 11, 2020: (v1.10.3) - * s2: Use S2 encoder in pure Go mode for Snappy output as well. [#245](https://github.com/klauspost/compress/pull/245) - * s2: Fix pure Go block encoder. [#244](https://github.com/klauspost/compress/pull/244) - * zstd: Added "better compression" mode. [#240](https://github.com/klauspost/compress/pull/240) - * zstd: Improve speed of fastest compression mode by 5-10% [#241](https://github.com/klauspost/compress/pull/241) - * zstd: Skip creating encoders when not needed. [#238](https://github.com/klauspost/compress/pull/238) - -* Feb 27, 2020: (v1.10.2) - * Close to 50% speedup in inflate (gzip/zip decompression). [#236](https://github.com/klauspost/compress/pull/236) [#234](https://github.com/klauspost/compress/pull/234) [#232](https://github.com/klauspost/compress/pull/232) - * Reduce deflate level 1-6 memory usage up to 59%. [#227](https://github.com/klauspost/compress/pull/227) - -* Feb 18, 2020: (v1.10.1) - * Fix zstd crash when resetting multiple times without sending data. [#226](https://github.com/klauspost/compress/pull/226) - * deflate: Fix dictionary use on level 1-6. [#224](https://github.com/klauspost/compress/pull/224) - * Remove deflate writer reference when closing. [#224](https://github.com/klauspost/compress/pull/224) - -* Feb 4, 2020: (v1.10.0) - * Add optional dictionary to [stateless deflate](https://pkg.go.dev/github.com/klauspost/compress/flate?tab=doc#StatelessDeflate). Breaking change, send `nil` for previous behaviour. 
[#216](https://github.com/klauspost/compress/pull/216) - * Fix buffer overflow on repeated small block deflate. [#218](https://github.com/klauspost/compress/pull/218) - * Allow copying content from an existing ZIP file without decompressing+compressing. [#214](https://github.com/klauspost/compress/pull/214) - * Added [S2](https://github.com/klauspost/compress/tree/master/s2#s2-compression) AMD64 assembler and various optimizations. Stream speed >10GB/s. [#186](https://github.com/klauspost/compress/pull/186) - -
- -
-
-  See changes prior to v1.10.0
-
-* Jan 20, 2020 (v1.9.8) Optimize gzip/deflate with better size estimates and faster table generation. [#207](https://github.com/klauspost/compress/pull/207) by [luyu6056](https://github.com/luyu6056), [#206](https://github.com/klauspost/compress/pull/206).
-* Jan 11, 2020: S2 Encode/Decode will use provided buffer if capacity is big enough. [#204](https://github.com/klauspost/compress/pull/204)
-* Jan 5, 2020: (v1.9.7) Fix another zstd regression in v1.9.5 - v1.9.6 removed.
-* Jan 4, 2020: (v1.9.6) Regression in v1.9.5 fixed causing corrupt zstd encodes in rare cases.
-* Jan 4, 2020: Faster IO in [s2c + s2d commandline tools](https://github.com/klauspost/compress/tree/master/s2#commandline-tools) compression/decompression. [#192](https://github.com/klauspost/compress/pull/192)
-* Dec 29, 2019: Removed v1.9.5 since fuzz tests showed a compatibility problem with the reference zstandard decoder.
-* Dec 29, 2019: (v1.9.5) zstd: 10-20% faster block compression. [#199](https://github.com/klauspost/compress/pull/199)
-* Dec 29, 2019: [zip](https://godoc.org/github.com/klauspost/compress/zip) package updated with latest Go features
-* Dec 29, 2019: zstd: Single segment flag conditions tweaked. [#197](https://github.com/klauspost/compress/pull/197)
-* Dec 18, 2019: s2: Faster compression when ReadFrom is used. [#198](https://github.com/klauspost/compress/pull/198)
-* Dec 10, 2019: s2: Fix repeat length output when just above the 16MB limit.
-* Dec 10, 2019: zstd: Add function to get decoder as io.ReadCloser. [#191](https://github.com/klauspost/compress/pull/191)
-* Dec 3, 2019: (v1.9.4) S2: limit max repeat length. [#188](https://github.com/klauspost/compress/pull/188)
-* Dec 3, 2019: Add [WithNoEntropyCompression](https://godoc.org/github.com/klauspost/compress/zstd#WithNoEntropyCompression) to zstd [#187](https://github.com/klauspost/compress/pull/187)
-* Dec 3, 2019: Reduce memory use for tests. Check for leaked goroutines.
-* Nov 28, 2019 (v1.9.3) Less allocations in stateless deflate.
-* Nov 28, 2019: 5-20% Faster huff0 decode. Impacts zstd as well. [#184](https://github.com/klauspost/compress/pull/184)
-* Nov 12, 2019 (v1.9.2) Added [Stateless Compression](#stateless-compression) for gzip/deflate.
-* Nov 12, 2019: Fixed zstd decompression of large single blocks. [#180](https://github.com/klauspost/compress/pull/180)
-* Nov 11, 2019: Set default [s2c](https://github.com/klauspost/compress/tree/master/s2#commandline-tools) block size to 4MB.
-* Nov 11, 2019: Reduce inflate memory use by 1KB.
-* Nov 10, 2019: Less allocations in deflate bit writer.
-* Nov 10, 2019: Fix inconsistent error returned by zstd decoder.
-* Oct 28, 2019 (v1.9.1) zstd: Fix crash when compressing blocks. [#174](https://github.com/klauspost/compress/pull/174)
-* Oct 24, 2019 (v1.9.0) zstd: Fix rare data corruption [#173](https://github.com/klauspost/compress/pull/173)
-* Oct 24, 2019 zstd: Fix huff0 out of buffer write [#171](https://github.com/klauspost/compress/pull/171) and always return errors [#172](https://github.com/klauspost/compress/pull/172)
-* Oct 10, 2019: Big deflate rewrite, 30-40% faster with better compression [#105](https://github.com/klauspost/compress/pull/105)
-
- -
- See changes prior to v1.9.0 - -* Oct 10, 2019: (v1.8.6) zstd: Allow partial reads to get flushed data. [#169](https://github.com/klauspost/compress/pull/169) -* Oct 3, 2019: Fix inconsistent results on broken zstd streams. -* Sep 25, 2019: Added `-rm` (remove source files) and `-q` (no output except errors) to `s2c` and `s2d` [commands](https://github.com/klauspost/compress/tree/master/s2#commandline-tools) -* Sep 16, 2019: (v1.8.4) Add `s2c` and `s2d` [commandline tools](https://github.com/klauspost/compress/tree/master/s2#commandline-tools). -* Sep 10, 2019: (v1.8.3) Fix s2 decoder [Skip](https://godoc.org/github.com/klauspost/compress/s2#Reader.Skip). -* Sep 7, 2019: zstd: Added [WithWindowSize](https://godoc.org/github.com/klauspost/compress/zstd#WithWindowSize), contributed by [ianwilkes](https://github.com/ianwilkes). -* Sep 5, 2019: (v1.8.2) Add [WithZeroFrames](https://godoc.org/github.com/klauspost/compress/zstd#WithZeroFrames) which adds full zero payload block encoding option. -* Sep 5, 2019: Lazy initialization of zstandard predefined en/decoder tables. -* Aug 26, 2019: (v1.8.1) S2: 1-2% compression increase in "better" compression mode. -* Aug 26, 2019: zstd: Check maximum size of Huffman 1X compressed literals while decoding. -* Aug 24, 2019: (v1.8.0) Added [S2 compression](https://github.com/klauspost/compress/tree/master/s2#s2-compression), a high performance replacement for Snappy. -* Aug 21, 2019: (v1.7.6) Fixed minor issues found by fuzzer. One could lead to zstd not decompressing. -* Aug 18, 2019: Add [fuzzit](https://fuzzit.dev/) continuous fuzzing. -* Aug 14, 2019: zstd: Skip incompressible data 2x faster. [#147](https://github.com/klauspost/compress/pull/147) -* Aug 4, 2019 (v1.7.5): Better literal compression. [#146](https://github.com/klauspost/compress/pull/146) -* Aug 4, 2019: Faster zstd compression. [#143](https://github.com/klauspost/compress/pull/143) [#144](https://github.com/klauspost/compress/pull/144) -* Aug 4, 2019: Faster zstd decompression. [#145](https://github.com/klauspost/compress/pull/145) [#143](https://github.com/klauspost/compress/pull/143) [#142](https://github.com/klauspost/compress/pull/142) -* July 15, 2019 (v1.7.4): Fix double EOF block in rare cases on zstd encoder. -* July 15, 2019 (v1.7.3): Minor speedup/compression increase in default zstd encoder. -* July 14, 2019: zstd decoder: Fix decompression error on multiple uses with mixed content. -* July 7, 2019 (v1.7.2): Snappy update, zstd decoder potential race fix. -* June 17, 2019: zstd decompression bugfix. -* June 17, 2019: fix 32 bit builds. -* June 17, 2019: Easier use in modules (less dependencies). -* June 9, 2019: New stronger "default" [zstd](https://github.com/klauspost/compress/tree/master/zstd#zstd) compression mode. Matches zstd default compression ratio. -* June 5, 2019: 20-40% throughput in [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd) compression and better compression. -* June 5, 2019: deflate/gzip compression: Reduce memory usage of lower compression levels. -* June 2, 2019: Added [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd) compression! -* May 25, 2019: deflate/gzip: 10% faster bit writer, mostly visible in lower levels. -* Apr 22, 2019: [zstd](https://github.com/klauspost/compress/tree/master/zstd#zstd) decompression added. -* Aug 1, 2018: Added [huff0 README](https://github.com/klauspost/compress/tree/master/huff0#huff0-entropy-compression). 
-* Jul 8, 2018: Added [Performance Update 2018](#performance-update-2018) below.
-* Jun 23, 2018: Merged [Go 1.11 inflate optimizations](https://go-review.googlesource.com/c/go/+/102235). Go 1.9 is now required. Backwards compatible version tagged with [v1.3.0](https://github.com/klauspost/compress/releases/tag/v1.3.0).
-* Apr 2, 2018: Added [huff0](https://godoc.org/github.com/klauspost/compress/huff0) en/decoder. Experimental for now, API may change.
-* Mar 4, 2018: Added [FSE Entropy](https://godoc.org/github.com/klauspost/compress/fse) en/decoder. Experimental for now, API may change.
-* Nov 3, 2017: Add compression [Estimate](https://godoc.org/github.com/klauspost/compress#Estimate) function.
-* May 28, 2017: Reduce allocations when resetting decoder.
-* Apr 02, 2017: Change back to official crc32, since changes were merged in Go 1.7.
-* Jan 14, 2017: Reduce stack pressure due to array copies. See [Issue #18625](https://github.com/golang/go/issues/18625).
-* Oct 25, 2016: Level 2-4 have been rewritten and now offer significantly better performance than before.
-* Oct 20, 2016: Port zlib changes from Go 1.7 to fix zlib writer issue. Please update.
-* Oct 16, 2016: Go 1.7 changes merged. Apples to apples this package is a few percent faster, but has a significantly better balance between speed and compression per level.
-* Mar 24, 2016: Always attempt Huffman encoding on level 4-7. This improves base 64 encoded data compression.
-* Mar 24, 2016: Small speedup for level 1-3.
-* Feb 19, 2016: Faster bit writer, level -2 is 15% faster, level 1 is 4% faster.
-* Feb 19, 2016: Handle small payloads faster in level 1-3.
-* Feb 19, 2016: Added faster level 2 + 3 compression modes.
-* Feb 19, 2016: [Rebalanced compression levels](https://blog.klauspost.com/rebalancing-deflate-compression-levels/), so there is a more even progression in terms of compression. New default level is 5.
-* Feb 14, 2016: Snappy: Merge upstream changes.
-* Feb 14, 2016: Snappy: Fix aggressive skipping.
-* Feb 14, 2016: Snappy: Update benchmark.
-* Feb 13, 2016: Deflate: Fixed assembler problem that could lead to sub-optimal compression.
-* Feb 12, 2016: Snappy: Added AMD64 SSE 4.2 optimizations to matching, which makes easy-to-compress material run faster. Typical speedup is around 25%.
-* Feb 9, 2016: Added Snappy package fork. This version is 5-7% faster, much more on hard to compress content.
-* Jan 30, 2016: Optimize level 1 to 3 by not considering static dictionary or storing uncompressed. ~4-5% speedup.
-* Jan 16, 2016: Optimization on deflate level 1,2,3 compression.
-* Jan 8 2016: Merge [CL 18317](https://go-review.googlesource.com/#/c/18317): fix reading, writing of zip64 archives.
-* Dec 8 2015: Make level 1 and -2 deterministic even if write size differs.
-* Dec 8 2015: Split encoding functions, so hashing and matching can potentially be inlined. 1-3% faster on AMD64. 5% faster on other platforms.
-* Dec 8 2015: Fixed rare [one byte out-of-bounds read](https://github.com/klauspost/compress/issues/20). Please update!
-* Nov 23 2015: Optimization on token writer. ~2-4% faster. Contributed by [@dsnet](https://github.com/dsnet).
-* Nov 20 2015: Small optimization to bit writer on 64 bit systems.
-* Nov 17 2015: Fixed out-of-bound errors if the underlying Writer returned an error. See [#15](https://github.com/klauspost/compress/issues/15).
-* Nov 12 2015: Added [io.WriterTo](https://golang.org/pkg/io/#WriterTo) support to gzip/inflate.
-* Nov 11 2015: Merged [CL 16669](https://go-review.googlesource.com/#/c/16669/4): archive/zip: enable overriding (de)compressors per file -* Oct 15 2015: Added skipping on uncompressible data. Random data speed up >5x. - -
-
-# deflate usage
-
-The packages are drop-in replacements for standard libraries. Simply replace the import path to use them:
-
-| old import         | new import                               | Documentation
-|--------------------|------------------------------------------|--------------------|
-| `compress/gzip`    | `github.com/klauspost/compress/gzip`     | [gzip](https://pkg.go.dev/github.com/klauspost/compress/gzip?tab=doc)
-| `compress/zlib`    | `github.com/klauspost/compress/zlib`     | [zlib](https://pkg.go.dev/github.com/klauspost/compress/zlib?tab=doc)
-| `archive/zip`      | `github.com/klauspost/compress/zip`      | [zip](https://pkg.go.dev/github.com/klauspost/compress/zip?tab=doc)
-| `compress/flate`   | `github.com/klauspost/compress/flate`    | [flate](https://pkg.go.dev/github.com/klauspost/compress/flate?tab=doc)
-
-* Optimized [deflate](https://godoc.org/github.com/klauspost/compress/flate) packages which can be used as a drop-in replacement for [gzip](https://godoc.org/github.com/klauspost/compress/gzip), [zip](https://godoc.org/github.com/klauspost/compress/zip) and [zlib](https://godoc.org/github.com/klauspost/compress/zlib).
-
-You may also be interested in [pgzip](https://github.com/klauspost/pgzip), a drop-in replacement for gzip that supports multithreaded compression on big files, and in the optimized [crc32](https://github.com/klauspost/crc32) package used by these packages.
-
-The packages contain the same functionality as the standard library, so you can use the godoc for that: [gzip](http://golang.org/pkg/compress/gzip/), [zip](http://golang.org/pkg/archive/zip/), [zlib](http://golang.org/pkg/compress/zlib/), [flate](http://golang.org/pkg/compress/flate/).
-
-Currently there is only a minor speedup on decompression (mostly CRC32 calculation).
-
-Memory usage is typically 1MB for a Writer. stdlib is in the same range.
-If you expect to have a lot of concurrently allocated Writers, consider using
-the stateless compression described below.
-
-For compression performance, see: [this spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing).
-
-To disable all assembly add `-tags=noasm`. This works across all packages.
-
-# Stateless compression
-
-This package offers stateless compression as a special option for gzip/deflate.
-It will do compression but without maintaining any state between Write calls.
-
-This means there will be no memory kept between Write calls, but compression and speed will be suboptimal.
-
-This is only relevant in cases where you expect to run many thousands of compressors concurrently,
-but with very little activity. This is *not* intended for regular web servers serving individual requests.
-
-Because of this, the size of actual Write calls will affect output size.
-
-In gzip, specify level `-3` / `gzip.StatelessCompression` to enable.
-
-For direct deflate use, NewStatelessWriter and StatelessDeflate are available. See [documentation](https://godoc.org/github.com/klauspost/compress/flate#NewStatelessWriter)
-
-A `bufio.Writer` can of course be used to control write sizes. For example, to use a 4KB buffer:
-
-```go
-	// replace 'ioutil.Discard' with your output.
-	gzw, err := gzip.NewWriterLevel(ioutil.Discard, gzip.StatelessCompression)
-	if err != nil {
-		return err
-	}
-	defer gzw.Close()
-
-	w := bufio.NewWriterSize(gzw, 4096)
-	defer w.Flush()
-
-	// Write to 'w'
-```
-
-This will only use up to 4KB in memory when the writer is idle.
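For the direct deflate route mentioned above, a minimal sketch (not part of the original README; it assumes the documented `flate.NewStatelessWriter`, which returns an `io.WriteCloser`):

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/klauspost/compress/flate"
)

func main() {
	var buf bytes.Buffer

	// Stateless: nothing is retained between Write calls, so each
	// Write is compressed as an independent block.
	w := flate.NewStatelessWriter(&buf)
	if _, err := w.Write([]byte("stateless deflate example")); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}
	fmt.Println("compressed size:", buf.Len())
}
```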
- -Compression is almost always worse than the fastest compression level -and each write will allocate (a little) memory. - -# Performance Update 2018 - -It has been a while since we have been looking at the speed of this package compared to the standard library, so I thought I would re-do my tests and give some overall recommendations based on the current state. All benchmarks have been performed with Go 1.10 on my Desktop Intel(R) Core(TM) i7-2600 CPU @3.40GHz. Since I last ran the tests, I have gotten more RAM, which means tests with big files are no longer limited by my SSD. - -The raw results are in my [updated spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing). Due to cgo changes and upstream updates i could not get the cgo version of gzip to compile. Instead I included the [zstd](https://github.com/datadog/zstd) cgo implementation. If I get cgo gzip to work again, I might replace the results in the sheet. - -The columns to take note of are: *MB/s* - the throughput. *Reduction* - the data size reduction in percent of the original. *Rel Speed* relative speed compared to the standard library at the same level. *Smaller* - how many percent smaller is the compressed output compared to stdlib. Negative means the output was bigger. *Loss* means the loss (or gain) in compression as a percentage difference of the input. - -The `gzstd` (standard library gzip) and `gzkp` (this package gzip) only uses one CPU core. [`pgzip`](https://github.com/klauspost/pgzip), [`bgzf`](https://github.com/biogo/hts/tree/master/bgzf) uses all 4 cores. [`zstd`](https://github.com/DataDog/zstd) uses one core, and is a beast (but not Go, yet). - - -## Overall differences. - -There appears to be a roughly 5-10% speed advantage over the standard library when comparing at similar compression levels. - -The biggest difference you will see is the result of [re-balancing](https://blog.klauspost.com/rebalancing-deflate-compression-levels/) the compression levels. I wanted by library to give a smoother transition between the compression levels than the standard library. - -This package attempts to provide a more smooth transition, where "1" is taking a lot of shortcuts, "5" is the reasonable trade-off and "9" is the "give me the best compression", and the values in between gives something reasonable in between. The standard library has big differences in levels 1-4, but levels 5-9 having no significant gains - often spending a lot more time than can be justified by the achieved compression. - -There are links to all the test data in the [spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing) in the top left field on each tab. - -## Web Content - -This test set aims to emulate typical use in a web server. The test-set is 4GB data in 53k files, and is a mixture of (mostly) HTML, JS, CSS. - -Since level 1 and 9 are close to being the same code, they are quite close. But looking at the levels in-between the differences are quite big. - -Looking at level 6, this package is 88% faster, but will output about 6% more data. For a web server, this means you can serve 88% more data, but have to pay for 6% more bandwidth. You can draw your own conclusions on what would be the most expensive for your case. - -## Object files - -This test is for typical data files stored on a server. In this case it is a collection of Go precompiled objects. They are very compressible. 
- -The picture is similar to the web content, but with small differences since this is very compressible. Levels 2-3 offer good speed, but is sacrificing quite a bit of compression. - -The standard library seems suboptimal on level 3 and 4 - offering both worse compression and speed than level 6 & 7 of this package respectively. - -## Highly Compressible File - -This is a JSON file with very high redundancy. The reduction starts at 95% on level 1, so in real life terms we are dealing with something like a highly redundant stream of data, etc. - -It is definitely visible that we are dealing with specialized content here, so the results are very scattered. This package does not do very well at levels 1-4, but picks up significantly at level 5 and levels 7 and 8 offering great speed for the achieved compression. - -So if you know you content is extremely compressible you might want to go slightly higher than the defaults. The standard library has a huge gap between levels 3 and 4 in terms of speed (2.75x slowdown), so it offers little "middle ground". - -## Medium-High Compressible - -This is a pretty common test corpus: [enwik9](http://mattmahoney.net/dc/textdata.html). It contains the first 10^9 bytes of the English Wikipedia dump on Mar. 3, 2006. This is a very good test of typical text based compression and more data heavy streams. - -We see a similar picture here as in "Web Content". On equal levels some compression is sacrificed for more speed. Level 5 seems to be the best trade-off between speed and size, beating stdlib level 3 in both. - -## Medium Compressible - -I will combine two test sets, one [10GB file set](http://mattmahoney.net/dc/10gb.html) and a VM disk image (~8GB). Both contain different data types and represent a typical backup scenario. - -The most notable thing is how quickly the standard library drops to very low compression speeds around level 5-6 without any big gains in compression. Since this type of data is fairly common, this does not seem like good behavior. - - -## Un-compressible Content - -This is mainly a test of how good the algorithms are at detecting un-compressible input. The standard library only offers this feature with very conservative settings at level 1. Obviously there is no reason for the algorithms to try to compress input that cannot be compressed. The only downside is that it might skip some compressible data on false detections. - - -## Huffman only compression - -This compression library adds a special compression level, named `HuffmanOnly`, which allows near linear time compression. This is done by completely disabling matching of previous data, and only reduce the number of bits to represent each character. - -This means that often used characters, like 'e' and ' ' (space) in text use the fewest bits to represent, and rare characters like '¤' takes more bits to represent. For more information see [wikipedia](https://en.wikipedia.org/wiki/Huffman_coding) or this nice [video](https://youtu.be/ZdooBTdW5bM). - -Since this type of compression has much less variance, the compression speed is mostly unaffected by the input data, and is usually more than *180MB/s* for a single core. - -The downside is that the compression ratio is usually considerably worse than even the fastest conventional compression. The compression ratio can never be better than 8:1 (12.5%). - -The linear time compression can be used as a "better than nothing" mode, where you cannot risk the encoder to slow down on some content. 
For comparison, the size of the "Twain" text is *233460 bytes* (+29% vs. level 1) and encode speed is 144MB/s (4.5x level 1). So in this case you trade a 30% size increase for a 4 times speedup.
-
-For more information see my blog post on [Fast Linear Time Compression](http://blog.klauspost.com/constant-time-gzipzip-compression/).
-
-This is implemented on Go 1.7 as "Huffman Only" mode, though not exposed for gzip.
-
-# Other packages
-
-Here are other packages of good quality and pure Go (no cgo wrappers or autoconverted code):
-
-* [github.com/pierrec/lz4](https://github.com/pierrec/lz4) - strong multithreaded LZ4 compression.
-* [github.com/cosnicolaou/pbzip2](https://github.com/cosnicolaou/pbzip2) - multithreaded bzip2 decompression.
-* [github.com/dsnet/compress](https://github.com/dsnet/compress) - brotli decompression, bzip2 writer.
-* [github.com/ronanh/intcomp](https://github.com/ronanh/intcomp) - Integer compression.
-* [github.com/spenczar/fpc](https://github.com/spenczar/fpc) - Float compression.
-* [github.com/minio/zipindex](https://github.com/minio/zipindex) - External ZIP directory index.
-* [github.com/ybirader/pzip](https://github.com/ybirader/pzip) - Fast concurrent zip archiver and extractor.
-
-# license
-
-This code is licensed under the same conditions as the original Go code. See LICENSE file.
+# compress
+
+This package provides various compression algorithms.
+
+- [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd)
+  compression and decompression in pure Go.
+- [S2](https://github.com/klauspost/compress/tree/master/s2#s2-compression) is a
+  high performance replacement for Snappy.
+- Optimized [deflate](https://godoc.org/github.com/klauspost/compress/flate)
+  packages which can be used as a drop-in replacement for
+  [gzip](https://godoc.org/github.com/klauspost/compress/gzip),
+  [zip](https://godoc.org/github.com/klauspost/compress/zip) and
+  [zlib](https://godoc.org/github.com/klauspost/compress/zlib).
+- [snappy](https://github.com/klauspost/compress/tree/master/snappy) is a
+  drop-in replacement for `github.com/golang/snappy` offering better compression
+  and concurrent streams.
+- [huff0](https://github.com/klauspost/compress/tree/master/huff0) and
+  [FSE](https://github.com/klauspost/compress/tree/master/fse) implementations
+  for raw entropy encoding.
+- [gzhttp](https://github.com/klauspost/compress/tree/master/gzhttp) provides
+  client and server wrappers for handling gzipped requests efficiently.
+- [pgzip](https://github.com/klauspost/pgzip) is a separate package that
+  provides a very fast parallel gzip implementation.
+
+[![Go Reference](https://pkg.go.dev/badge/klauspost/compress.svg)](https://pkg.go.dev/github.com/klauspost/compress?tab=subdirectories)
+[![Go](https://github.com/klauspost/compress/actions/workflows/go.yml/badge.svg)](https://github.com/klauspost/compress/actions/workflows/go.yml)
+[![Sourcegraph Badge](https://sourcegraph.com/github.com/klauspost/compress/-/badge.svg)](https://sourcegraph.com/github.com/klauspost/compress?badge)
+
+# changelog
+
+- Feb 5th, 2024 -
+  [1.17.6](https://github.com/klauspost/compress/releases/tag/v1.17.6)
+  - zstd: Fix incorrect repeat coding in best mode
+    https://github.com/klauspost/compress/pull/923
+  - s2: Fix DecodeConcurrent deadlock on errors
+    https://github.com/klauspost/compress/pull/925
+- Jan 26th, 2024 -
+  [v1.17.5](https://github.com/klauspost/compress/releases/tag/v1.17.5)
+  - flate: Fix reset with dictionary on custom window encodes
+    https://github.com/klauspost/compress/pull/912
+  - zstd: Add Frame header encoding and stripping
+    https://github.com/klauspost/compress/pull/908
+  - zstd: Limit better/best default window to 8MB
+    https://github.com/klauspost/compress/pull/913
+  - zstd: Speed improvements by @greatroar in
+    https://github.com/klauspost/compress/pull/896
+    https://github.com/klauspost/compress/pull/910
+  - s2: Fix callbacks for skippable blocks and disallow 0xfe (Padding) by
+    @Jille in https://github.com/klauspost/compress/pull/916
+    https://github.com/klauspost/compress/pull/917
+    https://github.com/klauspost/compress/pull/919
+    https://github.com/klauspost/compress/pull/918
+
+- Dec 1st, 2023 -
+  [v1.17.4](https://github.com/klauspost/compress/releases/tag/v1.17.4)
+
+  - huff0: Speed up symbol counting by @greatroar in
+    https://github.com/klauspost/compress/pull/887
+  - huff0: Remove byteReader by @greatroar in
+    https://github.com/klauspost/compress/pull/886
+  - gzhttp: Allow overriding decompression on transport
+    https://github.com/klauspost/compress/pull/892
+  - gzhttp: Clamp compression level
+    https://github.com/klauspost/compress/pull/890
+  - gzip: Error out if reserved bits are set
+    https://github.com/klauspost/compress/pull/891
+
+- Nov 15th, 2023 -
+  [v1.17.3](https://github.com/klauspost/compress/releases/tag/v1.17.3)
+
+  - fse: Fix max header size https://github.com/klauspost/compress/pull/881
+  - zstd: Improve better/best compression
+    https://github.com/klauspost/compress/pull/877
+  - gzhttp: Fix missing content type on Close
+    https://github.com/klauspost/compress/pull/883
+
+- Oct 22nd, 2023 -
+  [v1.17.2](https://github.com/klauspost/compress/releases/tag/v1.17.2)
+
+  - zstd: Fix rare _CORRUPTION_ output in "best" mode. See
+    https://github.com/klauspost/compress/pull/876
+
+- Oct 14th, 2023 -
+  [v1.17.1](https://github.com/klauspost/compress/releases/tag/v1.17.1)
+
+  - s2: Fix S2 "best" dictionary wrong encoding by @klauspost in
+    https://github.com/klauspost/compress/pull/871
+  - flate: Reduce allocations in decompressor and minor code improvements by
+    @fakefloordiv in https://github.com/klauspost/compress/pull/869
+  - s2: Fix EstimateBlockSize on 6&7 length input by @klauspost in
+    https://github.com/klauspost/compress/pull/867
+
+- Sept 19th, 2023 -
+  [v1.17.0](https://github.com/klauspost/compress/releases/tag/v1.17.0)
+  - Add experimental dictionary builder
+    https://github.com/klauspost/compress/pull/853
+  - Add xerial snappy read/writer https://github.com/klauspost/compress/pull/838
+  - flate: Add limited window compression
+    https://github.com/klauspost/compress/pull/843
+  - s2: Do 2 overlapping match checks
+    https://github.com/klauspost/compress/pull/839
+  - flate: Add amd64 assembly matchlen
+    https://github.com/klauspost/compress/pull/837
+  - gzip: Copy bufio.Reader on Reset by @thatguystone in
+    https://github.com/klauspost/compress/pull/860
+
+
+  See changes to v1.16.x
+
+- July 1st, 2023 -
+  [v1.16.7](https://github.com/klauspost/compress/releases/tag/v1.16.7)
+
+  - zstd: Fix default level first dictionary encode
+    https://github.com/klauspost/compress/pull/829
+  - s2: add GetBufferCapacity() method by @GiedriusS in
+    https://github.com/klauspost/compress/pull/832
+
+- June 13, 2023 -
+  [v1.16.6](https://github.com/klauspost/compress/releases/tag/v1.16.6)
+
+  - zstd: correctly ignore WithEncoderPadding(1) by @ianlancetaylor in
+    https://github.com/klauspost/compress/pull/806
+  - zstd: Add amd64 match length assembly
+    https://github.com/klauspost/compress/pull/824
+  - gzhttp: Handle informational headers by @rtribotte in
+    https://github.com/klauspost/compress/pull/815
+  - s2: Improve Better compression slightly
+    https://github.com/klauspost/compress/pull/663
+
+- Apr 16, 2023 -
+  [v1.16.5](https://github.com/klauspost/compress/releases/tag/v1.16.5)
+
+  - zstd: readByte needs to use io.ReadFull by @jnoxon in
+    https://github.com/klauspost/compress/pull/802
+  - gzip: Fix WriterTo after initial read
+    https://github.com/klauspost/compress/pull/804
+
+- Apr 5, 2023 -
+  [v1.16.4](https://github.com/klauspost/compress/releases/tag/v1.16.4)
+
+  - zstd: Improve zstd best efficiency by @greatroar and @klauspost in
+    https://github.com/klauspost/compress/pull/784
+  - zstd: Respect WithAllLitEntropyCompression
+    https://github.com/klauspost/compress/pull/792
+  - zstd: Fix amd64 not always detecting corrupt data
+    https://github.com/klauspost/compress/pull/785
+  - zstd: Various minor improvements by @greatroar in
+    https://github.com/klauspost/compress/pull/788
+    https://github.com/klauspost/compress/pull/794
+    https://github.com/klauspost/compress/pull/795
+  - s2: Fix huge block overflow https://github.com/klauspost/compress/pull/779
+  - s2: Allow CustomEncoder fallback
+    https://github.com/klauspost/compress/pull/780
+  - gzhttp: Support ResponseWriter Unwrap() in gzhttp handler by @jgimenez in
+    https://github.com/klauspost/compress/pull/799
+
+- Mar 13, 2023 -
+  [v1.16.1](https://github.com/klauspost/compress/releases/tag/v1.16.1)
+
+  - zstd: Speed up + improve best encoder by @greatroar in
+    https://github.com/klauspost/compress/pull/776
+  - gzhttp: Add optional
+    [BREACH mitigation](https://github.com/klauspost/compress/tree/master/gzhttp#breach-mitigation).
+    https://github.com/klauspost/compress/pull/762
+    https://github.com/klauspost/compress/pull/768
+    https://github.com/klauspost/compress/pull/769
+    https://github.com/klauspost/compress/pull/770
+    https://github.com/klauspost/compress/pull/767
+  - s2: Add Intel LZ4s converter https://github.com/klauspost/compress/pull/766
+  - zstd: Minor bug fixes https://github.com/klauspost/compress/pull/771
+    https://github.com/klauspost/compress/pull/772
+    https://github.com/klauspost/compress/pull/773
+  - huff0: Speed up compress1xDo by @greatroar in
+    https://github.com/klauspost/compress/pull/774
+
+- Feb 26, 2023 -
+  [v1.16.0](https://github.com/klauspost/compress/releases/tag/v1.16.0)
+  - s2: Add
+    [Dictionary](https://github.com/klauspost/compress/tree/master/s2#dictionaries)
+    support. https://github.com/klauspost/compress/pull/685
+  - s2: Add Compression Size Estimate.
+    https://github.com/klauspost/compress/pull/752
+  - s2: Add support for custom stream encoder.
+    https://github.com/klauspost/compress/pull/755
+  - s2: Add LZ4 block converter.
+    https://github.com/klauspost/compress/pull/748
+  - s2: Support io.ReaderAt in ReadSeeker.
+    https://github.com/klauspost/compress/pull/747
+  - s2c/s2sx: Use concurrent decoding.
+    https://github.com/klauspost/compress/pull/746
+
+ +
+
+  See changes to v1.15.x
+
+* Jan 21st, 2023 (v1.15.15)
+  * deflate: Improve level 7-9 by @klauspost in https://github.com/klauspost/compress/pull/739
+  * zstd: Add delta encoding support by @greatroar in https://github.com/klauspost/compress/pull/728
+  * zstd: Various speed improvements by @greatroar https://github.com/klauspost/compress/pull/741 https://github.com/klauspost/compress/pull/734 https://github.com/klauspost/compress/pull/736 https://github.com/klauspost/compress/pull/744 https://github.com/klauspost/compress/pull/743 https://github.com/klauspost/compress/pull/745
+  * gzhttp: Add SuffixETag() and DropETag() options to prevent ETag collisions on compressed responses by @willbicks in https://github.com/klauspost/compress/pull/740
+
+- Jan 3rd, 2023 (v1.15.14)
+
+  - flate: Improve speed in big stateless blocks
+    https://github.com/klauspost/compress/pull/718
+  - zstd: Minor speed tweaks by @greatroar in
+    https://github.com/klauspost/compress/pull/716
+    https://github.com/klauspost/compress/pull/720
+  - export NoGzipResponseWriter for custom ResponseWriter wrappers by
+    @harshavardhana in https://github.com/klauspost/compress/pull/722
+  - s2: Add example for indexing and existing stream
+    https://github.com/klauspost/compress/pull/723
+
+- Dec 11, 2022 (v1.15.13)
+
+  - zstd: Add
+    [MaxEncodedSize](https://pkg.go.dev/github.com/klauspost/compress@v1.15.13/zstd#Encoder.MaxEncodedSize)
+    to encoder https://github.com/klauspost/compress/pull/691
+  - zstd: Various tweaks and improvements
+    https://github.com/klauspost/compress/pull/693
+    https://github.com/klauspost/compress/pull/695
+    https://github.com/klauspost/compress/pull/696
+    https://github.com/klauspost/compress/pull/701
+    https://github.com/klauspost/compress/pull/702
+    https://github.com/klauspost/compress/pull/703
+    https://github.com/klauspost/compress/pull/704
+    https://github.com/klauspost/compress/pull/705
+    https://github.com/klauspost/compress/pull/706
+    https://github.com/klauspost/compress/pull/707
+    https://github.com/klauspost/compress/pull/708
+
+- Oct 26, 2022 (v1.15.12)
+
+  - zstd: Tweak decoder allocs. https://github.com/klauspost/compress/pull/680
+  - gzhttp: Always delete `HeaderNoCompression`
+    https://github.com/klauspost/compress/pull/683
+
+- Sept 26, 2022 (v1.15.11)
+
+  - flate: Improve level 1-3 compression
+    https://github.com/klauspost/compress/pull/678
+  - zstd: Improve "best" compression by @nightwolfz in
+    https://github.com/klauspost/compress/pull/677
+  - zstd: Fix+reduce decompression allocations
+    https://github.com/klauspost/compress/pull/668
+  - zstd: Fix non-effective noescape tag
+    https://github.com/klauspost/compress/pull/667
+
+- Sept 16, 2022 (v1.15.10)
+
+  - zstd: Add
+    [WithDecodeAllCapLimit](https://pkg.go.dev/github.com/klauspost/compress@v1.15.10/zstd#WithDecodeAllCapLimit)
+    https://github.com/klauspost/compress/pull/649
+  - Add Go 1.19 - deprecate Go 1.16
+    https://github.com/klauspost/compress/pull/651
+  - flate: Improve level 5+6 compression
+    https://github.com/klauspost/compress/pull/656
+  - zstd: Improve "better" compression
+    https://github.com/klauspost/compress/pull/657
+  - s2: Improve "best" compression
+    https://github.com/klauspost/compress/pull/658
+  - s2: Improve "better" compression.
+ https://github.com/klauspost/compress/pull/635 + - s2: Slightly faster non-assembly decompression + https://github.com/klauspost/compress/pull/646 + - Use arrays for constant size copies + https://github.com/klauspost/compress/pull/659 + +- July 21, 2022 (v1.15.9) + + - zstd: Fix decoder crash on amd64 (no BMI) on invalid input + https://github.com/klauspost/compress/pull/645 + - zstd: Disable decoder extended memory copies (amd64) due to possible crashes + https://github.com/klauspost/compress/pull/644 + - zstd: Allow single segments up to "max decoded size" by @klauspost in + https://github.com/klauspost/compress/pull/643 + +- July 13, 2022 (v1.15.8) + + - gzip: fix stack exhaustion bug in Reader.Read + https://github.com/klauspost/compress/pull/641 + - s2: Add Index header trim/restore + https://github.com/klauspost/compress/pull/638 + - zstd: Optimize seqdeq amd64 asm by @greatroar in + https://github.com/klauspost/compress/pull/636 + - zstd: Improve decoder memcopy https://github.com/klauspost/compress/pull/637 + - huff0: Pass a single bitReader pointer to asm by @greatroar in + https://github.com/klauspost/compress/pull/634 + - zstd: Branchless getBits for amd64 w/o BMI2 by @greatroar in + https://github.com/klauspost/compress/pull/640 + - gzhttp: Remove header before writing + https://github.com/klauspost/compress/pull/639 + +- June 29, 2022 (v1.15.7) + + - s2: Fix absolute forward seeks + https://github.com/klauspost/compress/pull/633 + - zip: Merge upstream https://github.com/klauspost/compress/pull/631 + - zip: Re-add zip64 fix https://github.com/klauspost/compress/pull/624 + - zstd: translate fseDecoder.buildDtable into asm by @WojciechMula in + https://github.com/klauspost/compress/pull/598 + - flate: Faster histograms https://github.com/klauspost/compress/pull/620 + - deflate: Use compound hcode https://github.com/klauspost/compress/pull/622 + +- June 3, 2022 (v1.15.6) + + - s2: Improve coding for long, close matches + https://github.com/klauspost/compress/pull/613 + - s2c: Add Snappy/S2 stream recompression + https://github.com/klauspost/compress/pull/611 + - zstd: Always use configured block size + https://github.com/klauspost/compress/pull/605 + - zstd: Fix incorrect hash table placement for dict encoding in default + https://github.com/klauspost/compress/pull/606 + - zstd: Apply default config to ZipDecompressor without options + https://github.com/klauspost/compress/pull/608 + - gzhttp: Exclude more common archive formats + https://github.com/klauspost/compress/pull/612 + - s2: Add ReaderIgnoreCRC https://github.com/klauspost/compress/pull/609 + - s2: Remove sanity load on index creation + https://github.com/klauspost/compress/pull/607 + - snappy: Use dedicated function for scoring + https://github.com/klauspost/compress/pull/614 + - s2c+s2d: Use official snappy framed extension + https://github.com/klauspost/compress/pull/610 + +- May 25, 2022 (v1.15.5) + + - s2: Add concurrent stream decompression + https://github.com/klauspost/compress/pull/602 + - s2: Fix final emit oob read crash on amd64 + https://github.com/klauspost/compress/pull/601 + - huff0: asm implementation of Decompress1X by @WojciechMula + https://github.com/klauspost/compress/pull/596 + - zstd: Use 1 less goroutine for stream decoding + https://github.com/klauspost/compress/pull/588 + - zstd: Copy literal in 16 byte blocks when possible + https://github.com/klauspost/compress/pull/592 + - zstd: Speed up when WithDecoderLowmem(false) + https://github.com/klauspost/compress/pull/599 + - zstd: faster next 
state update in BMI2 version of decode by @WojciechMula in
+    https://github.com/klauspost/compress/pull/593
+  - huff0: Do not check max size when reading table.
+    https://github.com/klauspost/compress/pull/586
+  - flate: Inplace hashing for level 7-9 by @klauspost in
+    https://github.com/klauspost/compress/pull/590
+
+- May 11, 2022 (v1.15.4)
+
+  - huff0: decompress directly into output by @WojciechMula in
+    [#577](https://github.com/klauspost/compress/pull/577)
+  - inflate: Keep dict on stack
+    [#581](https://github.com/klauspost/compress/pull/581)
+  - zstd: Faster decoding memcopy in asm
+    [#583](https://github.com/klauspost/compress/pull/583)
+  - zstd: Fix ignored crc [#580](https://github.com/klauspost/compress/pull/580)
+
+- May 5, 2022 (v1.15.3)
+
+  - zstd: Allow ignoring checksum checking by @WojciechMula
+    [#572](https://github.com/klauspost/compress/pull/572)
+  - s2: Fix incorrect seek for io.SeekEnd in
+    [#575](https://github.com/klauspost/compress/pull/575)
+
+- Apr 26, 2022 (v1.15.2)
+
+  - zstd: Add x86-64 assembly for decompression on streams and blocks.
+    Contributed by [@WojciechMula](https://github.com/WojciechMula). Typically
+    2x faster. [#528](https://github.com/klauspost/compress/pull/528)
+    [#531](https://github.com/klauspost/compress/pull/531)
+    [#545](https://github.com/klauspost/compress/pull/545)
+    [#537](https://github.com/klauspost/compress/pull/537)
+  - zstd: Add options to ZipDecompressor and fixes
+    [#539](https://github.com/klauspost/compress/pull/539)
+  - s2: Use sorted search for index
+    [#555](https://github.com/klauspost/compress/pull/555)
+  - Minimum version is Go 1.16, added CI test on 1.18.
+
+- Mar 11, 2022 (v1.15.1)
+
+  - huff0: Add x86 assembly of Decode4X by @WojciechMula in
+    [#512](https://github.com/klauspost/compress/pull/512)
+  - zstd: Reuse zip decoders in
+    [#514](https://github.com/klauspost/compress/pull/514)
+  - zstd: Detect extra block data and report as corrupted in
+    [#520](https://github.com/klauspost/compress/pull/520)
+  - zstd: Handle zero sized frame content size stricter in
+    [#521](https://github.com/klauspost/compress/pull/521)
+  - zstd: Add stricter block size checks in
+    [#523](https://github.com/klauspost/compress/pull/523)
+
+- Mar 3, 2022 (v1.15.0)
+  - zstd: Refactor decoder by @klauspost in
+    [#498](https://github.com/klauspost/compress/pull/498)
+  - zstd: Add stream encoding without goroutines by @klauspost in
+    [#505](https://github.com/klauspost/compress/pull/505)
+  - huff0: Prevent single blocks exceeding 16 bits by @klauspost
+    in [#507](https://github.com/klauspost/compress/pull/507)
+  - flate: Inline literal emission by @klauspost in
+    [#509](https://github.com/klauspost/compress/pull/509)
+  - gzhttp: Add zstd to transport by @klauspost in
+    [#400](https://github.com/klauspost/compress/pull/400)
+  - gzhttp: Make content-type optional by @klauspost in
+    [#510](https://github.com/klauspost/compress/pull/510)
+
+Both compression and decompression now support "synchronous" stream operations.
+This means that whenever "concurrency" is set to 1, they will operate without
+spawning goroutines.
+
+Stream decompression is now faster in asynchronous mode, since goroutine
+allocation splits the workload much more effectively. Typical streams will
+fully use 2 cores for decompression. When a stream has finished decoding, no
+goroutines are left over, so decoders can now safely be pooled and still be
+garbage collected.
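+
+As a minimal sketch of the synchronous mode described above (using the
+`WithEncoderConcurrency` / `WithDecoderConcurrency` options from the zstd
+package; error handling kept deliberately simple):
+
+```go
+package main
+
+import (
+	"bytes"
+	"io"
+	"log"
+
+	"github.com/klauspost/compress/zstd"
+)
+
+func main() {
+	var buf bytes.Buffer
+
+	// Concurrency 1 keeps the encoder fully synchronous:
+	// no goroutines are spawned for stream operations.
+	enc, err := zstd.NewWriter(&buf, zstd.WithEncoderConcurrency(1))
+	if err != nil {
+		log.Fatal(err)
+	}
+	if _, err = enc.Write([]byte("hello, synchronous zstd")); err != nil {
+		log.Fatal(err)
+	}
+	if err = enc.Close(); err != nil {
+		log.Fatal(err)
+	}
+
+	// The same option exists for the decoder.
+	dec, err := zstd.NewReader(&buf, zstd.WithDecoderConcurrency(1))
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer dec.Close()
+
+	out, err := io.ReadAll(dec)
+	if err != nil {
+		log.Fatal(err)
+	}
+	log.Printf("decoded: %s", out)
+}
+```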
+
+While the release has been extensively tested, it is recommended to test when
+upgrading.
+
+ +
+
+  See changes to v1.14.x
+
+* Feb 22, 2022 (v1.14.4)
+  * flate: Fix rare huffman only (-2) corruption. [#503](https://github.com/klauspost/compress/pull/503)
+  * zip: Update deprecated CreateHeaderRaw to correctly call CreateRaw by @saracen in [#502](https://github.com/klauspost/compress/pull/502)
+  * zip: don't read data descriptor early by @saracen in [#501](https://github.com/klauspost/compress/pull/501)
+  * huff0: Use static decompression buffer up to 30% faster by @klauspost in [#499](https://github.com/klauspost/compress/pull/499) [#500](https://github.com/klauspost/compress/pull/500)
+
+- Feb 17, 2022 (v1.14.3)
+
+  - flate: Improve fastest levels compression speed ~10% more throughput.
+    [#482](https://github.com/klauspost/compress/pull/482)
+    [#489](https://github.com/klauspost/compress/pull/489)
+    [#490](https://github.com/klauspost/compress/pull/490)
+    [#491](https://github.com/klauspost/compress/pull/491)
+    [#494](https://github.com/klauspost/compress/pull/494)
+    [#478](https://github.com/klauspost/compress/pull/478)
+  - flate: Faster decompression speed, ~5-10%.
+    [#483](https://github.com/klauspost/compress/pull/483)
+  - s2: Faster compression with Go v1.18 and amd64 microarch level 3+.
+    [#484](https://github.com/klauspost/compress/pull/484)
+    [#486](https://github.com/klauspost/compress/pull/486)
+
+- Jan 25, 2022 (v1.14.2)
+
+  - zstd: improve header decoder by @dsnet
+    [#476](https://github.com/klauspost/compress/pull/476)
+  - zstd: Add bigger default blocks
+    [#469](https://github.com/klauspost/compress/pull/469)
+  - zstd: Remove unused decompression buffer
+    [#470](https://github.com/klauspost/compress/pull/470)
+  - zstd: Fix logically dead code by @ningmingxiao
+    [#472](https://github.com/klauspost/compress/pull/472)
+  - flate: Improve level 7-9
+    [#471](https://github.com/klauspost/compress/pull/471)
+    [#473](https://github.com/klauspost/compress/pull/473)
+  - zstd: Add noasm tag for xxhash
+    [#475](https://github.com/klauspost/compress/pull/475)
+
+- Jan 11, 2022 (v1.14.1)
+
+  - s2: Add stream index in
+    [#462](https://github.com/klauspost/compress/pull/462)
+  - flate: Speed and efficiency improvements in
+    [#439](https://github.com/klauspost/compress/pull/439)
+    [#461](https://github.com/klauspost/compress/pull/461)
+    [#455](https://github.com/klauspost/compress/pull/455)
+    [#452](https://github.com/klauspost/compress/pull/452)
+    [#458](https://github.com/klauspost/compress/pull/458)
+  - zstd: Performance improvement in
+    [#420](https://github.com/klauspost/compress/pull/420)
+    [#456](https://github.com/klauspost/compress/pull/456)
+    [#437](https://github.com/klauspost/compress/pull/437)
+    [#467](https://github.com/klauspost/compress/pull/467)
+    [#468](https://github.com/klauspost/compress/pull/468)
+  - zstd: add arm64 xxhash assembly in
+    [#464](https://github.com/klauspost/compress/pull/464)
+  - Add garbled for binaries for s2 in
+    [#445](https://github.com/klauspost/compress/pull/445)
+
+ +
+
+  See changes to v1.13.x
+
+* Aug 30, 2021 (v1.13.5)
+  * gz/zlib/flate: Alias stdlib errors [#425](https://github.com/klauspost/compress/pull/425)
+  * s2: Add block support to commandline tools [#413](https://github.com/klauspost/compress/pull/413)
+  * zstd: pooledZipWriter should return Writers to the same pool [#426](https://github.com/klauspost/compress/pull/426)
+  * Removed golang/snappy as external dependency for tests [#421](https://github.com/klauspost/compress/pull/421)
+
+- Aug 12, 2021 (v1.13.4)
+
+  - Add
+    [snappy replacement package](https://github.com/klauspost/compress/tree/master/snappy).
+  - zstd: Fix incorrect encoding in "best" mode
+    [#415](https://github.com/klauspost/compress/pull/415)
+
+- Aug 3, 2021 (v1.13.3)
+
+  - zstd: Improve Best compression
+    [#404](https://github.com/klauspost/compress/pull/404)
+  - zstd: Fix WriteTo error forwarding
+    [#411](https://github.com/klauspost/compress/pull/411)
+  - gzhttp: Return http.HandlerFunc instead of http.Handler. Unlikely breaking
+    change. [#406](https://github.com/klauspost/compress/pull/406)
+  - s2sx: Fix max size error
+    [#399](https://github.com/klauspost/compress/pull/399)
+  - zstd: Add optional stream content size on reset
+    [#401](https://github.com/klauspost/compress/pull/401)
+  - zstd: use SpeedBestCompression for level >= 10
+    [#410](https://github.com/klauspost/compress/pull/410)
+
+- Jun 14, 2021 (v1.13.1)
+
+  - s2: Add full Snappy output support
+    [#396](https://github.com/klauspost/compress/pull/396)
+  - zstd: Add configurable
+    [Decoder window](https://pkg.go.dev/github.com/klauspost/compress/zstd#WithDecoderMaxWindow)
+    size [#394](https://github.com/klauspost/compress/pull/394)
+  - gzhttp: Add header to skip compression
+    [#389](https://github.com/klauspost/compress/pull/389)
+  - s2: Improve speed with bigger output margin
+    [#395](https://github.com/klauspost/compress/pull/395)
+
+- Jun 3, 2021 (v1.13.0)
+  - Added
+    [gzhttp](https://github.com/klauspost/compress/tree/master/gzhttp#gzip-handler)
+    which allows wrapping HTTP servers and clients with GZIP compressors.
+  - zstd: Detect short invalid signatures
+    [#382](https://github.com/klauspost/compress/pull/382)
+  - zstd: Spawn decoder goroutine only if needed.
+    [#380](https://github.com/klauspost/compress/pull/380)
+
+ +
+
+  See changes to v1.12.x
+
+* May 25, 2021 (v1.12.3)
+  * deflate: Better/faster Huffman encoding [#374](https://github.com/klauspost/compress/pull/374)
+  * deflate: Allocate less for history. [#375](https://github.com/klauspost/compress/pull/375)
+  * zstd: Forward read errors [#373](https://github.com/klauspost/compress/pull/373)
+
+- Apr 27, 2021 (v1.12.2)
+
+  - zstd: Improve better/best compression
+    [#360](https://github.com/klauspost/compress/pull/360)
+    [#364](https://github.com/klauspost/compress/pull/364)
+    [#365](https://github.com/klauspost/compress/pull/365)
+  - zstd: Add helpers to compress/decompress zstd inside zip files
+    [#363](https://github.com/klauspost/compress/pull/363)
+  - deflate: Improve level 5+6 compression
+    [#367](https://github.com/klauspost/compress/pull/367)
+  - s2: Improve better/best compression
+    [#358](https://github.com/klauspost/compress/pull/358)
+    [#359](https://github.com/klauspost/compress/pull/359)
+  - s2: Load after checking src limit on amd64.
+    [#362](https://github.com/klauspost/compress/pull/362)
+  - s2sx: Limit max executable size
+    [#368](https://github.com/klauspost/compress/pull/368)
+
+- Apr 14, 2021 (v1.12.1)
+  - snappy package removed. Upstream added as dependency.
+  - s2: Better compression in "best" mode
+    [#353](https://github.com/klauspost/compress/pull/353)
+  - s2sx: Add stdin input and detect pre-compressed from signature
+    [#352](https://github.com/klauspost/compress/pull/352)
+  - s2c/s2d: Add http as possible input
+    [#348](https://github.com/klauspost/compress/pull/348)
+  - s2c/s2d/s2sx: Always truncate when writing files
+    [#352](https://github.com/klauspost/compress/pull/352)
+  - zstd: Reduce memory usage further when using
+    [WithLowerEncoderMem](https://pkg.go.dev/github.com/klauspost/compress/zstd#WithLowerEncoderMem)
+    [#346](https://github.com/klauspost/compress/pull/346)
+  - s2: Fix potential problem with amd64 assembly and profilers
+    [#349](https://github.com/klauspost/compress/pull/349)
+
+ +
+
+  See changes to v1.11.x
+
+* Mar 26, 2021 (v1.11.13)
+  * zstd: Big speedup on small dictionary encodes [#344](https://github.com/klauspost/compress/pull/344) [#345](https://github.com/klauspost/compress/pull/345)
+  * zstd: Add [WithLowerEncoderMem](https://pkg.go.dev/github.com/klauspost/compress/zstd#WithLowerEncoderMem) encoder option [#336](https://github.com/klauspost/compress/pull/336)
+  * deflate: Improve entropy compression [#338](https://github.com/klauspost/compress/pull/338)
+  * s2: Clean up and minor performance improvement in best [#341](https://github.com/klauspost/compress/pull/341)
+
+- Mar 5, 2021 (v1.11.12)
+
+  - s2: Add `s2sx` binary that creates
+    [self extracting archives](https://github.com/klauspost/compress/tree/master/s2#s2sx-self-extracting-archives).
+  - s2: Speed up decompression on non-assembly platforms
+    [#328](https://github.com/klauspost/compress/pull/328)
+
+- Mar 1, 2021 (v1.11.9)
+
+  - s2: Add ARM64 decompression assembly. Around 2x output speed.
+    [#324](https://github.com/klauspost/compress/pull/324)
+  - s2: Improve "better" speed and efficiency.
+    [#325](https://github.com/klauspost/compress/pull/325)
+  - s2: Fix binaries.
+
+- Feb 25, 2021 (v1.11.8)
+  - s2: Fixed occasional out-of-bounds write on amd64. Upgrade recommended.
+  - s2: Add AMD64 assembly for better mode. 25-50% faster.
+    [#315](https://github.com/klauspost/compress/pull/315)
+  - s2: Less upfront decoder allocation.
+    [#322](https://github.com/klauspost/compress/pull/322)
+  - zstd: Faster "compression" of incompressible data.
+    [#314](https://github.com/klauspost/compress/pull/314)
+  - zip: Fix zip64 headers.
+    [#313](https://github.com/klauspost/compress/pull/313)
+- Jan 14, 2021 (v1.11.7)
+
+  - Use Bytes() interface to get bytes across packages.
+    [#309](https://github.com/klauspost/compress/pull/309)
+  - s2: Add 'best' compression option.
+    [#310](https://github.com/klauspost/compress/pull/310)
+  - s2: Add ReaderMaxBlockSize, changes `s2.NewReader` signature to include
+    varargs. [#311](https://github.com/klauspost/compress/pull/311)
+  - s2: Fix crash on small better buffers.
+    [#308](https://github.com/klauspost/compress/pull/308)
+  - s2: Clean up decoder. [#312](https://github.com/klauspost/compress/pull/312)
+
+- Jan 7, 2021 (v1.11.6)
+
+  - zstd: Make decoder allocations smaller
+    [#306](https://github.com/klauspost/compress/pull/306)
+  - zstd: Free Decoder resources when Reset is called with a nil io.Reader
+    [#305](https://github.com/klauspost/compress/pull/305)
+
+- Dec 20, 2020 (v1.11.4)
+
+  - zstd: Add Best compression mode
+    [#304](https://github.com/klauspost/compress/pull/304)
+  - Add header decoder [#299](https://github.com/klauspost/compress/pull/299)
+  - s2: Add uncompressed stream option
+    [#297](https://github.com/klauspost/compress/pull/297)
+  - Simplify/speed up small blocks with known max size.
+    [#300](https://github.com/klauspost/compress/pull/300)
+  - zstd: Always reset literal dict encoder
+    [#303](https://github.com/klauspost/compress/pull/303)
+
+- Nov 15, 2020 (v1.11.3)
+
+  - inflate: 10-15% faster decompression
+    [#293](https://github.com/klauspost/compress/pull/293)
+  - zstd: Tweak DecodeAll default allocation
+    [#295](https://github.com/klauspost/compress/pull/295)
+
+- Oct 11, 2020 (v1.11.2)
+
+  - s2: Fix out of bounds read in "better" block compression
+    [#291](https://github.com/klauspost/compress/pull/291)
+
+- Oct 1, 2020 (v1.11.1)
+
+  - zstd: Set allLitEntropy true in default configuration
+    [#286](https://github.com/klauspost/compress/pull/286)
+
+- Sept 8, 2020 (v1.11.0)
+  - zstd: Add experimental compression
+    [dictionaries](https://github.com/klauspost/compress/tree/master/zstd#dictionaries)
+    [#281](https://github.com/klauspost/compress/pull/281)
+  - zstd: Fix mixed Write and ReadFrom calls
+    [#282](https://github.com/klauspost/compress/pull/282)
+  - inflate/gz: Limit variable shifts, ~5% faster decompression
+    [#274](https://github.com/klauspost/compress/pull/274)
+
+ +
+ See changes to v1.10.x + +* July 8, 2020 (v1.10.11) + * zstd: Fix extra block when compressing with ReadFrom. [#278](https://github.com/klauspost/compress/pull/278) + * huff0: Also populate compression table when reading decoding table. [#275](https://github.com/klauspost/compress/pull/275) + +* June 23, 2020 (v1.10.10) + * zstd: Skip entropy compression in fastest mode when no matches. [#270](https://github.com/klauspost/compress/pull/270) + +* June 16, 2020 (v1.10.9): + * zstd: API change for specifying dictionaries. See [#268](https://github.com/klauspost/compress/pull/268) + * zip: update CreateHeaderRaw to handle zip64 fields. [#266](https://github.com/klauspost/compress/pull/266) + * Fuzzit tests removed. The service has been purchased and is no longer available. + +* June 5, 2020 (v1.10.8): + * 1.15x faster zstd block decompression. [#265](https://github.com/klauspost/compress/pull/265) + +* June 1, 2020 (v1.10.7): + * Added zstd decompression [dictionary support](https://github.com/klauspost/compress/tree/master/zstd#dictionaries) + * Increase zstd decompression speed up to 1.19x. [#259](https://github.com/klauspost/compress/pull/259) + * Remove internal reset call in zstd compression and reduce allocations. [#263](https://github.com/klauspost/compress/pull/263) + +* May 21, 2020: (v1.10.6) + * zstd: Reduce allocations while decoding. [#258](https://github.com/klauspost/compress/pull/258), [#252](https://github.com/klauspost/compress/pull/252) + * zstd: Stricter decompression checks. + +* April 12, 2020: (v1.10.5) + * s2-commands: Flush output when receiving SIGINT. [#239](https://github.com/klauspost/compress/pull/239) + +* Apr 8, 2020: (v1.10.4) + * zstd: Minor/special case optimizations. [#251](https://github.com/klauspost/compress/pull/251), [#250](https://github.com/klauspost/compress/pull/250), [#249](https://github.com/klauspost/compress/pull/249), [#247](https://github.com/klauspost/compress/pull/247) +* Mar 11, 2020: (v1.10.3) + * s2: Use S2 encoder in pure Go mode for Snappy output as well. [#245](https://github.com/klauspost/compress/pull/245) + * s2: Fix pure Go block encoder. [#244](https://github.com/klauspost/compress/pull/244) + * zstd: Added "better compression" mode. [#240](https://github.com/klauspost/compress/pull/240) + * zstd: Improve speed of fastest compression mode by 5-10% [#241](https://github.com/klauspost/compress/pull/241) + * zstd: Skip creating encoders when not needed. [#238](https://github.com/klauspost/compress/pull/238) + +* Feb 27, 2020: (v1.10.2) + * Close to 50% speedup in inflate (gzip/zip decompression). [#236](https://github.com/klauspost/compress/pull/236) [#234](https://github.com/klauspost/compress/pull/234) [#232](https://github.com/klauspost/compress/pull/232) + * Reduce deflate level 1-6 memory usage up to 59%. [#227](https://github.com/klauspost/compress/pull/227) + +* Feb 18, 2020: (v1.10.1) + * Fix zstd crash when resetting multiple times without sending data. [#226](https://github.com/klauspost/compress/pull/226) + * deflate: Fix dictionary use on level 1-6. [#224](https://github.com/klauspost/compress/pull/224) + * Remove deflate writer reference when closing. [#224](https://github.com/klauspost/compress/pull/224) + +* Feb 4, 2020: (v1.10.0) + * Add optional dictionary to [stateless deflate](https://pkg.go.dev/github.com/klauspost/compress/flate?tab=doc#StatelessDeflate). Breaking change, send `nil` for previous behaviour. 
[#216](https://github.com/klauspost/compress/pull/216) + * Fix buffer overflow on repeated small block deflate. [#218](https://github.com/klauspost/compress/pull/218) + * Allow copying content from an existing ZIP file without decompressing+compressing. [#214](https://github.com/klauspost/compress/pull/214) + * Added [S2](https://github.com/klauspost/compress/tree/master/s2#s2-compression) AMD64 assembler and various optimizations. Stream speed >10GB/s. [#186](https://github.com/klauspost/compress/pull/186) + +
+ +
+
+  See changes prior to v1.10.0
+
+- Jan 20, 2020 (v1.9.8) Optimize gzip/deflate with better size estimates and
+  faster table generation.
+  [#207](https://github.com/klauspost/compress/pull/207) by
+  [luyu6056](https://github.com/luyu6056),
+  [#206](https://github.com/klauspost/compress/pull/206).
+- Jan 11, 2020: S2 Encode/Decode will use provided buffer if capacity is big
+  enough. [#204](https://github.com/klauspost/compress/pull/204)
+- Jan 5, 2020: (v1.9.7) Fix another zstd regression in v1.9.5 - v1.9.6 removed.
+- Jan 4, 2020: (v1.9.6) Regression in v1.9.5 fixed causing corrupt zstd encodes
+  in rare cases.
+- Jan 4, 2020: Faster IO in
+  [s2c + s2d commandline tools](https://github.com/klauspost/compress/tree/master/s2#commandline-tools)
+  compression/decompression.
+  [#192](https://github.com/klauspost/compress/pull/192)
+- Dec 29, 2019: Removed v1.9.5 since fuzz tests showed a compatibility problem
+  with the reference zstandard decoder.
+- Dec 29, 2019: (v1.9.5) zstd: 10-20% faster block compression.
+  [#199](https://github.com/klauspost/compress/pull/199)
+- Dec 29, 2019: [zip](https://godoc.org/github.com/klauspost/compress/zip)
+  package updated with latest Go features.
+- Dec 29, 2019: zstd: Single segment flag conditions tweaked.
+  [#197](https://github.com/klauspost/compress/pull/197)
+- Dec 18, 2019: s2: Faster compression when ReadFrom is used.
+  [#198](https://github.com/klauspost/compress/pull/198)
+- Dec 10, 2019: s2: Fix repeat length output when just above the 16MB limit.
+- Dec 10, 2019: zstd: Add function to get decoder as io.ReadCloser.
+  [#191](https://github.com/klauspost/compress/pull/191)
+- Dec 3, 2019: (v1.9.4) S2: limit max repeat length.
+  [#188](https://github.com/klauspost/compress/pull/188)
+- Dec 3, 2019: Add
+  [WithNoEntropyCompression](https://godoc.org/github.com/klauspost/compress/zstd#WithNoEntropyCompression)
+  to zstd [#187](https://github.com/klauspost/compress/pull/187)
+- Dec 3, 2019: Reduce memory use for tests. Check for leaked goroutines.
+- Nov 28, 2019 (v1.9.3) Less allocations in stateless deflate.
+- Nov 28, 2019: 5-20% Faster huff0 decode. Impacts zstd as well.
+  [#184](https://github.com/klauspost/compress/pull/184)
+- Nov 12, 2019 (v1.9.2) Added [Stateless Compression](#stateless-compression)
+  for gzip/deflate.
+- Nov 12, 2019: Fixed zstd decompression of large single blocks.
+  [#180](https://github.com/klauspost/compress/pull/180)
+- Nov 11, 2019: Set default
+  [s2c](https://github.com/klauspost/compress/tree/master/s2#commandline-tools)
+  block size to 4MB.
+- Nov 11, 2019: Reduce inflate memory use by 1KB.
+- Nov 10, 2019: Less allocations in deflate bit writer.
+- Nov 10, 2019: Fix inconsistent error returned by zstd decoder.
+- Oct 28, 2019 (v1.9.1) zstd: Fix crash when compressing blocks.
+  [#174](https://github.com/klauspost/compress/pull/174)
+- Oct 24, 2019 (v1.9.0) zstd: Fix rare data corruption
+  [#173](https://github.com/klauspost/compress/pull/173)
+- Oct 24, 2019 zstd: Fix huff0 out of buffer write
+  [#171](https://github.com/klauspost/compress/pull/171) and always return
+  errors [#172](https://github.com/klauspost/compress/pull/172)
+- Oct 10, 2019: Big deflate rewrite, 30-40% faster with better compression
+  [#105](https://github.com/klauspost/compress/pull/105)
+
+ +
+ See changes prior to v1.9.0 + +- Oct 10, 2019: (v1.8.6) zstd: Allow partial reads to get flushed data. + [#169](https://github.com/klauspost/compress/pull/169) +- Oct 3, 2019: Fix inconsistent results on broken zstd streams. +- Sep 25, 2019: Added `-rm` (remove source files) and `-q` (no output except + errors) to `s2c` and `s2d` + [commands](https://github.com/klauspost/compress/tree/master/s2#commandline-tools) +- Sep 16, 2019: (v1.8.4) Add `s2c` and `s2d` + [commandline tools](https://github.com/klauspost/compress/tree/master/s2#commandline-tools). +- Sep 10, 2019: (v1.8.3) Fix s2 decoder + [Skip](https://godoc.org/github.com/klauspost/compress/s2#Reader.Skip). +- Sep 7, 2019: zstd: Added + [WithWindowSize](https://godoc.org/github.com/klauspost/compress/zstd#WithWindowSize), + contributed by [ianwilkes](https://github.com/ianwilkes). +- Sep 5, 2019: (v1.8.2) Add + [WithZeroFrames](https://godoc.org/github.com/klauspost/compress/zstd#WithZeroFrames) + which adds full zero payload block encoding option. +- Sep 5, 2019: Lazy initialization of zstandard predefined en/decoder tables. +- Aug 26, 2019: (v1.8.1) S2: 1-2% compression increase in "better" compression + mode. +- Aug 26, 2019: zstd: Check maximum size of Huffman 1X compressed literals while + decoding. +- Aug 24, 2019: (v1.8.0) Added + [S2 compression](https://github.com/klauspost/compress/tree/master/s2#s2-compression), + a high performance replacement for Snappy. +- Aug 21, 2019: (v1.7.6) Fixed minor issues found by fuzzer. One could lead to + zstd not decompressing. +- Aug 18, 2019: Add [fuzzit](https://fuzzit.dev/) continuous fuzzing. +- Aug 14, 2019: zstd: Skip incompressible data 2x faster. + [#147](https://github.com/klauspost/compress/pull/147) +- Aug 4, 2019 (v1.7.5): Better literal compression. + [#146](https://github.com/klauspost/compress/pull/146) +- Aug 4, 2019: Faster zstd compression. + [#143](https://github.com/klauspost/compress/pull/143) + [#144](https://github.com/klauspost/compress/pull/144) +- Aug 4, 2019: Faster zstd decompression. + [#145](https://github.com/klauspost/compress/pull/145) + [#143](https://github.com/klauspost/compress/pull/143) + [#142](https://github.com/klauspost/compress/pull/142) +- July 15, 2019 (v1.7.4): Fix double EOF block in rare cases on zstd encoder. +- July 15, 2019 (v1.7.3): Minor speedup/compression increase in default zstd + encoder. +- July 14, 2019: zstd decoder: Fix decompression error on multiple uses with + mixed content. +- July 7, 2019 (v1.7.2): Snappy update, zstd decoder potential race fix. +- June 17, 2019: zstd decompression bugfix. +- June 17, 2019: fix 32 bit builds. +- June 17, 2019: Easier use in modules (less dependencies). +- June 9, 2019: New stronger "default" + [zstd](https://github.com/klauspost/compress/tree/master/zstd#zstd) + compression mode. Matches zstd default compression ratio. +- June 5, 2019: 20-40% throughput in + [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd) + compression and better compression. +- June 5, 2019: deflate/gzip compression: Reduce memory usage of lower + compression levels. +- June 2, 2019: Added + [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd) + compression! +- May 25, 2019: deflate/gzip: 10% faster bit writer, mostly visible in lower + levels. +- Apr 22, 2019: + [zstd](https://github.com/klauspost/compress/tree/master/zstd#zstd) + decompression added. +- Aug 1, 2018: Added + [huff0 README](https://github.com/klauspost/compress/tree/master/huff0#huff0-entropy-compression). 
+- Jul 8, 2018: Added [Performance Update 2018](#performance-update-2018) below.
+- Jun 23, 2018: Merged
+  [Go 1.11 inflate optimizations](https://go-review.googlesource.com/c/go/+/102235).
+  Go 1.9 is now required. Backwards compatible version tagged with
+  [v1.3.0](https://github.com/klauspost/compress/releases/tag/v1.3.0).
+- Apr 2, 2018: Added
+  [huff0](https://godoc.org/github.com/klauspost/compress/huff0) en/decoder.
+  Experimental for now, API may change.
+- Mar 4, 2018: Added
+  [FSE Entropy](https://godoc.org/github.com/klauspost/compress/fse) en/decoder.
+  Experimental for now, API may change.
+- Nov 3, 2017: Add compression
+  [Estimate](https://godoc.org/github.com/klauspost/compress#Estimate) function.
+- May 28, 2017: Reduce allocations when resetting decoder.
+- Apr 02, 2017: Change back to official crc32, since changes were merged in Go
+  1.7.
+- Jan 14, 2017: Reduce stack pressure due to array copies. See
+  [Issue #18625](https://github.com/golang/go/issues/18625).
+- Oct 25, 2016: Level 2-4 have been rewritten and now offer significantly
+  better performance than before.
+- Oct 20, 2016: Port zlib changes from Go 1.7 to fix zlib writer issue. Please
+  update.
+- Oct 16, 2016: Go 1.7 changes merged. Apples to apples this package is a few
+  percent faster, but has a significantly better balance between speed and
+  compression per level.
+- Mar 24, 2016: Always attempt Huffman encoding on level 4-7. This improves
+  base64 encoded data compression.
+- Mar 24, 2016: Small speedup for level 1-3.
+- Feb 19, 2016: Faster bit writer, level -2 is 15% faster, level 1 is 4% faster.
+- Feb 19, 2016: Handle small payloads faster in level 1-3.
+- Feb 19, 2016: Added faster level 2 + 3 compression modes.
+- Feb 19, 2016:
+  [Rebalanced compression levels](https://blog.klauspost.com/rebalancing-deflate-compression-levels/),
+  so there is a more even progression in terms of compression. New default
+  level is 5.
+- Feb 14, 2016: Snappy: Merge upstream changes.
+- Feb 14, 2016: Snappy: Fix aggressive skipping.
+- Feb 14, 2016: Snappy: Update benchmark.
+- Feb 13, 2016: Deflate: Fixed assembler problem that could lead to sub-optimal
+  compression.
+- Feb 12, 2016: Snappy: Added AMD64 SSE 4.2 optimizations to matching, which
+  makes easy-to-compress material run faster. Typical speedup is around 25%.
+- Feb 9, 2016: Added Snappy package fork. This version is 5-7% faster, much more
+  on hard to compress content.
+- Jan 30, 2016: Optimize level 1 to 3 by not considering static dictionary or
+  storing uncompressed. ~4-5% speedup.
+- Jan 16, 2016: Optimization on deflate level 1,2,3 compression.
+- Jan 8 2016: Merge [CL 18317](https://go-review.googlesource.com/#/c/18317):
+  fix reading, writing of zip64 archives.
+- Dec 8 2015: Make level 1 and -2 deterministic even if write size differs.
+- Dec 8 2015: Split encoding functions, so hashing and matching can potentially
+  be inlined. 1-3% faster on AMD64. 5% faster on other platforms.
+- Dec 8 2015: Fixed rare
+  [one byte out-of bounds read](https://github.com/klauspost/compress/issues/20).
+  Please update!
+- Nov 23 2015: Optimization on token writer. ~2-4% faster. Contributed by
+  [@dsnet](https://github.com/dsnet).
+- Nov 20 2015: Small optimization to bit writer on 64 bit systems.
+- Nov 17 2015: Fixed out-of-bound errors if the underlying Writer returned an
+  error. See [#15](https://github.com/klauspost/compress/issues/15).
+- Nov 12 2015: Added [io.WriterTo](https://golang.org/pkg/io/#WriterTo) support
+  to gzip/inflate.
+- Nov 11 2015: Merged + [CL 16669](https://go-review.googlesource.com/#/c/16669/4): archive/zip: + enable overriding (de)compressors per file +- Oct 15 2015: Added skipping on uncompressible data. Random data speed up >5x. + +
+
+# deflate usage
+
+The packages are drop-in replacements for standard libraries. Simply replace the
+import path to use them:
+
+| old import       | new import                            | Documentation                                                           |
+| ---------------- | ------------------------------------- | ----------------------------------------------------------------------- |
+| `compress/gzip`  | `github.com/klauspost/compress/gzip`  | [gzip](https://pkg.go.dev/github.com/klauspost/compress/gzip?tab=doc)   |
+| `compress/zlib`  | `github.com/klauspost/compress/zlib`  | [zlib](https://pkg.go.dev/github.com/klauspost/compress/zlib?tab=doc)   |
+| `archive/zip`    | `github.com/klauspost/compress/zip`   | [zip](https://pkg.go.dev/github.com/klauspost/compress/zip?tab=doc)     |
+| `compress/flate` | `github.com/klauspost/compress/flate` | [flate](https://pkg.go.dev/github.com/klauspost/compress/flate?tab=doc) |
+
+- Optimized [deflate](https://godoc.org/github.com/klauspost/compress/flate)
+  packages which can be used as a drop-in replacement for
+  [gzip](https://godoc.org/github.com/klauspost/compress/gzip),
+  [zip](https://godoc.org/github.com/klauspost/compress/zip) and
+  [zlib](https://godoc.org/github.com/klauspost/compress/zlib).
+
+You may also be interested in [pgzip](https://github.com/klauspost/pgzip), a
+drop-in replacement for gzip that supports multithreaded compression of big
+files, and the optimized [crc32](https://github.com/klauspost/crc32) package
+used by these packages.
+
+The packages contain the same functionality as the standard library, so you can
+use its godoc: [gzip](http://golang.org/pkg/compress/gzip/),
+[zip](http://golang.org/pkg/archive/zip/),
+[zlib](http://golang.org/pkg/compress/zlib/),
+[flate](http://golang.org/pkg/compress/flate/).
+
+Currently there is only a minor speedup on decompression (mostly CRC32
+calculation).
+
+Memory usage is typically 1MB for a Writer. stdlib is in the same range. If you
+expect to have a lot of concurrently allocated Writers, consider using the
+stateless compression described below.
+
+For compression performance, see:
+[this spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing).
+
+To disable all assembly add `-tags=noasm`. This works across all packages.
+
+# Stateless compression
+
+This package offers stateless compression as a special option for gzip/deflate.
+It will do compression but without maintaining any state between Write calls.
+
+This means there will be no memory kept between Write calls, but compression and
+speed will be suboptimal.
+
+This is only relevant in cases where you expect to run many thousands of
+compressors concurrently, but with very little activity. This is _not_ intended
+for regular web servers serving individual requests.
+
+Because of this, the size of actual Write calls will affect output size.
+
+In gzip, specify level `-3` / `gzip.StatelessCompression` to enable.
+
+For direct deflate use, NewStatelessWriter and StatelessDeflate are available.
+See
+[documentation](https://godoc.org/github.com/klauspost/compress/flate#NewStatelessWriter)
+
+A `bufio.Writer` can of course be used to control write sizes. For example, to
+use a 4KB buffer:
+
+```go
+	// replace 'io.Discard' with your output.
+	gzw, err := gzip.NewWriterLevel(io.Discard, gzip.StatelessCompression)
+	if err != nil {
+		return err
+	}
+	defer gzw.Close()
+
+	w := bufio.NewWriterSize(gzw, 4096)
+	defer w.Flush()
+
+	// Write to 'w'
+```
+
+This will only use up to 4KB in memory when the writer is idle.
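+
+Building on that, a minimal sketch of direct stateless deflate using
+`NewStatelessWriter` (see the documentation linked above for the exact API):
+
+```go
+package main
+
+import (
+	"bytes"
+	"log"
+
+	"github.com/klauspost/compress/flate"
+)
+
+func main() {
+	var buf bytes.Buffer
+
+	// NewStatelessWriter keeps no state between Write calls, so idle
+	// writers hold (almost) no memory. Each Write is compressed
+	// independently, which is why write sizes affect output size.
+	w := flate.NewStatelessWriter(&buf)
+	if _, err := w.Write([]byte("stateless deflate example")); err != nil {
+		log.Fatal(err)
+	}
+	if err := w.Close(); err != nil {
+		log.Fatal(err)
+	}
+	log.Printf("compressed to %d bytes", buf.Len())
+}
+```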
+
+Compression is almost always worse than the fastest compression level and each
+write will allocate (a little) memory.
+
+# Performance Update 2018
+
+It has been a while since we last looked at the speed of this package compared
+to the standard library, so I thought I would re-do my tests and give some
+overall recommendations based on the current state. All benchmarks have been
+performed with Go 1.10 on my desktop Intel(R) Core(TM) i7-2600 CPU @3.40GHz.
+Since I last ran the tests, I have gotten more RAM, which means tests with big
+files are no longer limited by my SSD.
+
+The raw results are in my
+[updated spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing).
+Due to cgo changes and upstream updates I could not get the cgo version of gzip
+to compile. Instead I included the [zstd](https://github.com/datadog/zstd) cgo
+implementation. If I get cgo gzip to work again, I might replace the results in
+the sheet.
+
+The columns to take note of are: _MB/s_ - the throughput. _Reduction_ - the data
+size reduction in percent of the original. _Rel Speed_ - the relative speed
+compared to the standard library at the same level. _Smaller_ - how many percent
+smaller the compressed output is compared to stdlib. Negative means the output
+was bigger. _Loss_ means the loss (or gain) in compression as a percentage
+difference of the input.
+
+The `gzstd` (standard library gzip) and `gzkp` (this package gzip) only use one
+CPU core. [`pgzip`](https://github.com/klauspost/pgzip) and
+[`bgzf`](https://github.com/biogo/hts/tree/master/bgzf) use all 4 cores.
+[`zstd`](https://github.com/DataDog/zstd) uses one core, and is a beast (but not
+Go, yet).
+
+## Overall differences
+
+There appears to be a roughly 5-10% speed advantage over the standard library
+when comparing at similar compression levels.
+
+The biggest difference you will see is the result of
+[re-balancing](https://blog.klauspost.com/rebalancing-deflate-compression-levels/)
+the compression levels. I wanted my library to give a smoother transition
+between the compression levels than the standard library.
+
+This package attempts to provide a smoother transition, where "1" takes a lot of
+shortcuts, "5" is the reasonable trade-off, "9" is "give me the best
+compression", and the values in between give something reasonable in between.
+The standard library has big differences in levels 1-4, while levels 5-9 show no
+significant gains - often spending a lot more time than can be justified by the
+achieved compression.
+
+There are links to all the test data in the
+[spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing)
+in the top left field on each tab.
+
+## Web Content
+
+This test set aims to emulate typical use in a web server. The test set is 4GB
+of data in 53k files, and is a mixture of (mostly) HTML, JS, CSS.
+
+Since level 1 and 9 are close to being the same code, their results are quite
+close. But looking at the levels in between, the differences are quite big.
+
+Looking at level 6, this package is 88% faster, but will output about 6% more
+data. For a web server, this means you can serve 88% more data, but have to pay
+for 6% more bandwidth. You can draw your own conclusions on what would be the
+most expensive for your case.
+
+## Object files
+
+This test is for typical data files stored on a server. In this case it is a
+collection of Go precompiled objects. They are very compressible.
+
+The picture is similar to the web content, but with small differences since this
+is very compressible. Levels 2-3 offer good speed, but sacrifice quite a bit of
+compression.
+
+The standard library seems suboptimal on levels 3 and 4 - offering both worse
+compression and worse speed than levels 6 and 7 of this package, respectively.
+
+## Highly Compressible File
+
+This is a JSON file with very high redundancy. The reduction starts at 95% on
+level 1, so in real life terms we are dealing with something like a highly
+redundant stream of data, etc.
+
+It is definitely visible that we are dealing with specialized content here, so
+the results are very scattered. This package does not do very well at levels
+1-4, but picks up significantly at level 5, with levels 7 and 8 offering great
+speed for the achieved compression.
+
+So if you know your content is extremely compressible you might want to go
+slightly higher than the defaults. The standard library has a huge gap between
+levels 3 and 4 in terms of speed (2.75x slowdown), so it offers little "middle
+ground".
+
+## Medium-High Compressible
+
+This is a pretty common test corpus:
+[enwik9](http://mattmahoney.net/dc/textdata.html). It contains the first 10^9
+bytes of the English Wikipedia dump on Mar. 3, 2006. This is a very good test of
+typical text based compression and more data heavy streams.
+
+We see a similar picture here as in "Web Content". On equal levels some
+compression is sacrificed for more speed. Level 5 seems to be the best trade-off
+between speed and size, beating stdlib level 3 in both.
+
+## Medium Compressible
+
+I will combine two test sets, one
+[10GB file set](http://mattmahoney.net/dc/10gb.html) and a VM disk image (~8GB).
+Both contain different data types and represent a typical backup scenario.
+
+The most notable thing is how quickly the standard library drops to very low
+compression speeds around level 5-6 without any big gains in compression. Since
+this type of data is fairly common, this does not seem like good behavior.
+
+## Un-compressible Content
+
+This is mainly a test of how good the algorithms are at detecting
+un-compressible input. The standard library only offers this feature with very
+conservative settings at level 1. Obviously there is no reason for the
+algorithms to try to compress input that cannot be compressed. The only downside
+is that it might skip some compressible data on false detections.
+
+## Huffman only compression
+
+This compression library adds a special compression level, named `HuffmanOnly`,
+which allows near linear time compression. This is done by completely disabling
+matching of previous data, and only reducing the number of bits used to
+represent each character.
+
+This means that often used characters, like 'e' and ' ' (space) in text, use the
+fewest bits, while rare characters like '¤' take more bits to represent. For
+more information see [wikipedia](https://en.wikipedia.org/wiki/Huffman_coding)
+or this nice [video](https://youtu.be/ZdooBTdW5bM).
+
+Since this type of compression has much less variance, the compression speed is
+mostly unaffected by the input data, and is usually more than _180MB/s_ for a
+single core.
+
+The downside is that the compression ratio is usually considerably worse than
+even the fastest conventional compression. The compression ratio can never be
+better than 8:1 (12.5%).
+
+The linear time compression can be used as a "better than nothing" mode, where
+you cannot risk the encoder slowing down on some content.
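+
+A sketch of enabling it (hedged: this assumes the package mirrors the standard
+library's `gzip.HuffmanOnly` constant, as `compress/gzip` exposes):
+
+```go
+// 'out' is your io.Writer; Huffman-only gives near linear time compression.
+gzw, err := gzip.NewWriterLevel(out, gzip.HuffmanOnly)
+if err != nil {
+	return err
+}
+defer gzw.Close()
+// Write to 'gzw'
+```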
+For comparison, the size of the "Twain" text is _233460 bytes_ (+29% vs. level
+1) and encode speed is 144MB/s (4.5x level 1). So in this case you trade a 30%
+size increase for a 4 times speedup.
+
+For more information see my blog post on
+[Fast Linear Time Compression](http://blog.klauspost.com/constant-time-gzipzip-compression/).
+
+This is implemented in Go 1.7 as "Huffman Only" mode, though not exposed for
+gzip.
+
+# Other packages
+
+Here are other packages of good quality and pure Go (no cgo wrappers or
+autoconverted code):
+
+- [github.com/pierrec/lz4](https://github.com/pierrec/lz4) - strong
+  multithreaded LZ4 compression.
+- [github.com/cosnicolaou/pbzip2](https://github.com/cosnicolaou/pbzip2) -
+  multithreaded bzip2 decompression.
+- [github.com/dsnet/compress](https://github.com/dsnet/compress) - brotli
+  decompression, bzip2 writer.
+- [github.com/ronanh/intcomp](https://github.com/ronanh/intcomp) - Integer
+  compression.
+- [github.com/spenczar/fpc](https://github.com/spenczar/fpc) - Float
+  compression.
+- [github.com/minio/zipindex](https://github.com/minio/zipindex) - External ZIP
+  directory index.
+- [github.com/ybirader/pzip](https://github.com/ybirader/pzip) - Fast concurrent
+  zip archiver and extractor.
+
+# license
+
+This code is licensed under the same conditions as the original Go code. See
+LICENSE file.
diff --git a/examples/go/vendor/github.com/klauspost/compress/SECURITY.md b/examples/go/vendor/github.com/klauspost/compress/SECURITY.md
index ca6685e..b1efa33 100644
--- a/examples/go/vendor/github.com/klauspost/compress/SECURITY.md
+++ b/examples/go/vendor/github.com/klauspost/compress/SECURITY.md
@@ -6,20 +6,38 @@ Security updates are applied only to the latest release.

## Vulnerability Definition

-A security vulnerability is a bug that with certain input triggers a crash or an infinite loop. Most calls will have varying execution time and only in rare cases will slow operation be considered a security vulnerability.
+A security vulnerability is a bug that, with certain input, triggers a crash or
+an infinite loop. Most calls will have varying execution time and only in rare
+cases will slow operation be considered a security vulnerability.

-Corrupted output generally is not considered a security vulnerability, unless independent operations are able to affect each other. Note that not all functionality is re-entrant and safe to use concurrently.
+Corrupted output generally is not considered a security vulnerability, unless
+independent operations are able to affect each other. Note that not all
+functionality is re-entrant and safe to use concurrently.

-Out-of-memory crashes only applies if the en/decoder uses an abnormal amount of memory, with appropriate options applied, to limit maximum window size, concurrency, etc. However, if you are in doubt you are welcome to file a security issue.
+Out-of-memory crashes only apply if the en/decoder uses an abnormal amount of
+memory, with appropriate options applied to limit maximum window size,
+concurrency, etc. However, if you are in doubt you are welcome to file a
+security issue.

-It is assumed that all callers are trusted, meaning internal data exposed through reflection or inspection of returned data structures is not considered a vulnerability.
+It is assumed that all callers are trusted, meaning internal data exposed
+through reflection or inspection of returned data structures is not considered a
+vulnerability.

-Vulnerabilities resulting from compiler/assembler errors should be reported upstream.
Depending on the severity this package may or may not implement a workaround.
+Vulnerabilities resulting from compiler/assembler errors should be reported
+upstream. Depending on the severity, this package may or may not implement a
+workaround.

## Reporting a Vulnerability

-If you have discovered a security vulnerability in this project, please report it privately. **Do not disclose it as a public issue.** This gives us time to work with you to fix the issue before public exposure, reducing the chance that the exploit will be used before a patch is released.
+If you have discovered a security vulnerability in this project, please report
+it privately. **Do not disclose it as a public issue.** This gives us time to
+work with you to fix the issue before public exposure, reducing the chance that
+the exploit will be used before a patch is released.

-Please disclose it at [security advisory](https://github.com/klauspost/compress/security/advisories/new). If possible please provide a minimal reproducer. If the issue only applies to a single platform, it would be helpful to provide access to that.
+Please disclose it at
+[security advisory](https://github.com/klauspost/compress/security/advisories/new).
+If possible please provide a minimal reproducer. If the issue only applies to a
+single platform, it would be helpful to provide access to that.

-This project is maintained by a team of volunteers on a reasonable-effort basis. As such, vulnerabilities will be disclosed in a best effort base.
+This project is maintained by a team of volunteers on a reasonable-effort basis.
+As such, vulnerabilities will be disclosed on a best-effort basis.
diff --git a/examples/go/vendor/github.com/klauspost/compress/fse/README.md b/examples/go/vendor/github.com/klauspost/compress/fse/README.md
index ea7324d..ea88b4e 100644
--- a/examples/go/vendor/github.com/klauspost/compress/fse/README.md
+++ b/examples/go/vendor/github.com/klauspost/compress/fse/README.md
@@ -1,79 +1,99 @@
-# Finite State Entropy
-
-This package provides Finite State Entropy encoding and decoding.
-
-Finite State Entropy (also referenced as [tANS](https://en.wikipedia.org/wiki/Asymmetric_numeral_systems#tANS))
-encoding provides a fast near-optimal symbol encoding/decoding
-for byte blocks as implemented in [zstandard](https://github.com/facebook/zstd).
-
-This can be used for compressing input with a lot of similar input values to the smallest number of bytes.
-This does not perform any multi-byte [dictionary coding](https://en.wikipedia.org/wiki/Dictionary_coder) as LZ coders,
-but it can be used as a secondary step to compressors (like Snappy) that does not do entropy encoding.
-
-* [Godoc documentation](https://godoc.org/github.com/klauspost/compress/fse)
-
-## News
-
- * Feb 2018: First implementation released. Consider this beta software for now.
-
-# Usage
-
-This package provides a low level interface that allows to compress single independent blocks.
-
-Each block is separate, and there is no built in integrity checks.
-This means that the caller should keep track of block sizes and also do checksums if needed.
-
-Compressing a block is done via the [`Compress`](https://godoc.org/github.com/klauspost/compress/fse#Compress) function.
-You must provide input and will receive the output and maybe an error.
- -These error values can be returned: - -| Error | Description | -|---------------------|-----------------------------------------------------------------------------| -| `` | Everything ok, output is returned | -| `ErrIncompressible` | Returned when input is judged to be too hard to compress | -| `ErrUseRLE` | Returned from the compressor when the input is a single byte value repeated | -| `(error)` | An internal error occurred. | - -As can be seen above there are errors that will be returned even under normal operation so it is important to handle these. - -To reduce allocations you can provide a [`Scratch`](https://godoc.org/github.com/klauspost/compress/fse#Scratch) object -that can be re-used for successive calls. Both compression and decompression accepts a `Scratch` object, and the same -object can be used for both. - -Be aware, that when re-using a `Scratch` object that the *output* buffer is also re-used, so if you are still using this -you must set the `Out` field in the scratch to nil. The same buffer is used for compression and decompression output. - -Decompressing is done by calling the [`Decompress`](https://godoc.org/github.com/klauspost/compress/fse#Decompress) function. -You must provide the output from the compression stage, at exactly the size you got back. If you receive an error back -your input was likely corrupted. - -It is important to note that a successful decoding does *not* mean your output matches your original input. -There are no integrity checks, so relying on errors from the decompressor does not assure your data is valid. - -For more detailed usage, see examples in the [godoc documentation](https://godoc.org/github.com/klauspost/compress/fse#pkg-examples). - -# Performance - -A lot of factors are affecting speed. Block sizes and compressibility of the material are primary factors. -All compression functions are currently only running on the calling goroutine so only one core will be used per block. - -The compressor is significantly faster if symbols are kept as small as possible. The highest byte value of the input -is used to reduce some of the processing, so if all your input is above byte value 64 for instance, it may be -beneficial to transpose all your input values down by 64. - -With moderate block sizes around 64k speed are typically 200MB/s per core for compression and -around 300MB/s decompression speed. - -The same hardware typically does Huffman (deflate) encoding at 125MB/s and decompression at 100MB/s. - -# Plans - -At one point, more internals will be exposed to facilitate more "expert" usage of the components. - -A streaming interface is also likely to be implemented. Likely compatible with [FSE stream format](https://github.com/Cyan4973/FiniteStateEntropy/blob/dev/programs/fileio.c#L261). - -# Contributing - -Contributions are always welcome. Be aware that adding public functions will require good justification and breaking -changes will likely not be accepted. If in doubt open an issue before writing the PR. \ No newline at end of file +# Finite State Entropy + +This package provides Finite State Entropy encoding and decoding. + +Finite State Entropy (also referenced as +[tANS](https://en.wikipedia.org/wiki/Asymmetric_numeral_systems#tANS)) encoding +provides a fast near-optimal symbol encoding/decoding for byte blocks as +implemented in [zstandard](https://github.com/facebook/zstd). + +This can be used for compressing input with a lot of similar input values to the +smallest number of bytes. 
This does not perform any multi-byte
+[dictionary coding](https://en.wikipedia.org/wiki/Dictionary_coder) as LZ
+coders, but it can be used as a secondary step to compressors (like Snappy) that
+do not do entropy encoding.
+
+- [Godoc documentation](https://godoc.org/github.com/klauspost/compress/fse)
+
+## News
+
+- Feb 2018: First implementation released. Consider this beta software for now.
+
+# Usage
+
+This package provides a low level interface that allows compressing single,
+independent blocks.
+
+Each block is separate, and there are no built-in integrity checks. This means
+that the caller should keep track of block sizes and also do checksums if
+needed.
+
+Compressing a block is done via the
+[`Compress`](https://godoc.org/github.com/klauspost/compress/fse#Compress)
+function. You must provide input and will receive the output and maybe an error.
+
+These error values can be returned:
+
+| Error               | Description                                                                  |
+| ------------------- | ---------------------------------------------------------------------------- |
+| ``                  | Everything ok, output is returned                                            |
+| `ErrIncompressible` | Returned when input is judged to be too hard to compress                     |
+| `ErrUseRLE`         | Returned from the compressor when the input is a single byte value repeated  |
+| `(error)`           | An internal error occurred.                                                  |
+
+As can be seen above, there are errors that will be returned even under normal
+operation, so it is important to handle these.
+
+To reduce allocations you can provide a
+[`Scratch`](https://godoc.org/github.com/klauspost/compress/fse#Scratch) object
+that can be re-used for successive calls. Both compression and decompression
+accept a `Scratch` object, and the same object can be used for both.
+
+Be aware that when re-using a `Scratch` object, the _output_ buffer is also
+re-used, so if you are still using it you must set the `Out` field in the
+scratch to nil. The same buffer is used for compression and decompression
+output.
+
+Decompressing is done by calling the
+[`Decompress`](https://godoc.org/github.com/klauspost/compress/fse#Decompress)
+function. You must provide the output from the compression stage, at exactly the
+size you got back. If you receive an error back your input was likely corrupted.
+
+It is important to note that a successful decoding does _not_ mean your output
+matches your original input. There are no integrity checks, so relying on errors
+from the decompressor does not assure your data is valid.
+
+For more detailed usage, see examples in the
+[godoc documentation](https://godoc.org/github.com/klauspost/compress/fse#pkg-examples).
+
+# Performance
+
+Many factors affect speed; block sizes and compressibility of the material are
+the primary ones. All compression functions currently run only on the calling
+goroutine, so only one core will be used per block.
+
+The compressor is significantly faster if symbols are kept as small as possible.
+The highest byte value of the input is used to reduce some of the processing, so
+if all your input is above byte value 64 for instance, it may be beneficial to
+transpose all your input values down by 64.
+
+With moderate block sizes around 64k, speeds are typically 200MB/s per core for
+compression and around 300MB/s for decompression.
+
+The same hardware typically does Huffman (deflate) encoding at 125MB/s and
+decompression at 100MB/s.
+
+# Plans
+
+At one point, more internals will be exposed to facilitate more "expert" usage
+of the components.
+
+A streaming interface is also likely to be implemented, probably compatible with
+the [FSE stream format](https://github.com/Cyan4973/FiniteStateEntropy/blob/dev/programs/fileio.c#L261).
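+
+Putting the usage notes above together, a minimal round trip could look like the
+following sketch (hedged: it assumes the `Compress`/`Decompress` and `Scratch`
+shapes linked above):
+
+```go
+package main
+
+import (
+	"bytes"
+	"fmt"
+
+	"github.com/klauspost/compress/fse"
+)
+
+func main() {
+	in := bytes.Repeat([]byte("gopher"), 4096)
+
+	var s fse.Scratch // re-usable between calls to reduce allocations
+	comp, err := fse.Compress(in, &s)
+	switch err {
+	case nil: // compressed ok
+	case fse.ErrIncompressible, fse.ErrUseRLE:
+		return // store the input uncompressed or as RLE instead
+	default:
+		panic(err)
+	}
+
+	// 'comp' aliases s.Out; since the same buffer is used for compression
+	// and decompression output, detach it before re-using the scratch.
+	s.Out = nil
+
+	// There is no framing: the caller must track block sizes and checksums.
+	out, err := fse.Decompress(comp, &s)
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(bytes.Equal(in, out)) // true, but add your own checksums
+}
+```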
+
+# Contributing
+
+Contributions are always welcome. Be aware that adding public functions will
+require good justification and breaking changes will likely not be accepted. If
+in doubt open an issue before writing the PR.
diff --git a/examples/go/vendor/github.com/klauspost/compress/huff0/README.md b/examples/go/vendor/github.com/klauspost/compress/huff0/README.md
index 8b6e5c6..fec4c4a 100644
--- a/examples/go/vendor/github.com/klauspost/compress/huff0/README.md
+++ b/examples/go/vendor/github.com/klauspost/compress/huff0/README.md
@@ -1,89 +1,119 @@
-# Huff0 entropy compression
-
-This package provides Huff0 encoding and decoding as used in zstd.
-
-[Huff0](https://github.com/Cyan4973/FiniteStateEntropy#new-generation-entropy-coders),
-a Huffman codec designed for modern CPU, featuring OoO (Out of Order) operations on multiple ALU
-(Arithmetic Logic Unit), achieving extremely fast compression and decompression speeds.
-
-This can be used for compressing input with a lot of similar input values to the smallest number of bytes.
-This does not perform any multi-byte [dictionary coding](https://en.wikipedia.org/wiki/Dictionary_coder) as LZ coders,
-but it can be used as a secondary step to compressors (like Snappy) that does not do entropy encoding.
-
-* [Godoc documentation](https://godoc.org/github.com/klauspost/compress/huff0)
-
-## News
-
-This is used as part of the [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd) compression and decompression package.
-
-This ensures that most functionality is well tested.
-
-# Usage
-
-This package provides a low level interface that allows to compress single independent blocks.
-
-Each block is separate, and there is no built in integrity checks.
-This means that the caller should keep track of block sizes and also do checksums if needed.
-
-Compressing a block is done via the [`Compress1X`](https://godoc.org/github.com/klauspost/compress/huff0#Compress1X) and
-[`Compress4X`](https://godoc.org/github.com/klauspost/compress/huff0#Compress4X) functions.
-You must provide input and will receive the output and maybe an error.
-
-These error values can be returned:
-
-| Error | Description |
-|---------------------|-----------------------------------------------------------------------------|
-| `` | Everything ok, output is returned |
-| `ErrIncompressible` | Returned when input is judged to be too hard to compress |
-| `ErrUseRLE` | Returned from the compressor when the input is a single byte value repeated |
-| `ErrTooBig` | Returned if the input block exceeds the maximum allowed size (128 Kib) |
-| `(error)` | An internal error occurred. |
-
-
-As can be seen above some of there are errors that will be returned even under normal operation so it is important to handle these.
-
-To reduce allocations you can provide a [`Scratch`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch) object
-that can be re-used for successive calls. Both compression and decompression accepts a `Scratch` object, and the same
-object can be used for both.
-
-Be aware, that when re-using a `Scratch` object that the *output* buffer is also re-used, so if you are still using this
-you must set the `Out` field in the scratch to nil. The same buffer is used for compression and decompression output.
-
-The `Scratch` object will retain state that allows to re-use previous tables for encoding and decoding.
-
-## Tables and re-use
-
-Huff0 allows for reusing tables from the previous block to save space if that is expected to give better/faster results.
-
-The Scratch object allows you to set a [`ReusePolicy`](https://godoc.org/github.com/klauspost/compress/huff0#ReusePolicy)
-that controls this behaviour. See the documentation for details. This can be altered between each block.
-
-Do however note that this information is *not* stored in the output block and it is up to the users of the package to
-record whether [`ReadTable`](https://godoc.org/github.com/klauspost/compress/huff0#ReadTable) should be called,
-based on the boolean reported back from the CompressXX call.
-
-If you want to store the table separate from the data, you can access them as `OutData` and `OutTable` on the
-[`Scratch`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch) object.
-
-## Decompressing
-
-The first part of decoding is to initialize the decoding table through [`ReadTable`](https://godoc.org/github.com/klauspost/compress/huff0#ReadTable).
-This will initialize the decoding tables.
-You can supply the complete block to `ReadTable` and it will return the data part of the block
-which can be given to the decompressor.
-
-Decompressing is done by calling the [`Decompress1X`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch.Decompress1X)
-or [`Decompress4X`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch.Decompress4X) function.
-
-For concurrently decompressing content with a fixed table a stateless [`Decoder`](https://godoc.org/github.com/klauspost/compress/huff0#Decoder) can be requested which will remain correct as long as the scratch is unchanged. The capacity of the provided slice indicates the expected output size.
-
-You must provide the output from the compression stage, at exactly the size you got back. If you receive an error back
-your input was likely corrupted.
-
-It is important to note that a successful decoding does *not* mean your output matches your original input.
-There are no integrity checks, so relying on errors from the decompressor does not assure your data is valid.
-
-# Contributing
-
-Contributions are always welcome. Be aware that adding public functions will require good justification and breaking
-changes will likely not be accepted. If in doubt open an issue before writing the PR.
+# Huff0 entropy compression
+
+This package provides Huff0 encoding and decoding as used in zstd.
+
+[Huff0](https://github.com/Cyan4973/FiniteStateEntropy#new-generation-entropy-coders)
+is a Huffman codec designed for modern CPUs, featuring OoO (Out of Order)
+operations on multiple ALUs (Arithmetic Logic Units) and achieving extremely
+fast compression and decompression speeds.
+
+This can be used for compressing input with a lot of similar input values to the
+smallest number of bytes. This does not perform any multi-byte
+[dictionary coding](https://en.wikipedia.org/wiki/Dictionary_coder) as LZ
+coders, but it can be used as a secondary step to compressors (like Snappy) that
+do not do entropy encoding.
+
+- [Godoc documentation](https://godoc.org/github.com/klauspost/compress/huff0)
+
+## News
+
+This is used as part of the
+[zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd)
+compression and decompression package.
+
+This ensures that most functionality is well tested.
+
+# Usage
+
+This package provides a low level interface that allows compressing single,
+independent blocks.
+
+Each block is separate, and there are no built-in integrity checks.
This means
+that the caller should keep track of block sizes and also do checksums if
+needed.
+
+Compressing a block is done via the
+[`Compress1X`](https://godoc.org/github.com/klauspost/compress/huff0#Compress1X)
+and
+[`Compress4X`](https://godoc.org/github.com/klauspost/compress/huff0#Compress4X)
+functions. You must provide input and will receive the output and maybe an
+error.
+
+These error values can be returned:
+
+| Error               | Description                                                                  |
+| ------------------- | ---------------------------------------------------------------------------- |
+| ``                  | Everything ok, output is returned                                            |
+| `ErrIncompressible` | Returned when input is judged to be too hard to compress                     |
+| `ErrUseRLE`         | Returned from the compressor when the input is a single byte value repeated  |
+| `ErrTooBig`         | Returned if the input block exceeds the maximum allowed size (128 KiB)       |
+| `(error)`           | An internal error occurred.                                                  |
+
+As can be seen above, some of these errors will be returned even under normal
+operation, so it is important to handle them.
+
+To reduce allocations you can provide a
+[`Scratch`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch)
+object that can be re-used for successive calls. Both compression and
+decompression accept a `Scratch` object, and the same object can be used for
+both.
+
+Be aware that when re-using a `Scratch` object, the _output_ buffer is also
+re-used, so if you are still using it you must set the `Out` field in the
+scratch to nil. The same buffer is used for compression and decompression
+output.
+
+The `Scratch` object will retain state that allows re-using previous tables for
+encoding and decoding.
+
+## Tables and re-use
+
+Huff0 allows for reusing tables from the previous block to save space if that is
+expected to give better/faster results.
+
+The Scratch object allows you to set a
+[`ReusePolicy`](https://godoc.org/github.com/klauspost/compress/huff0#ReusePolicy)
+that controls this behaviour. See the documentation for details. This can be
+altered between each block.
+
+Do note, however, that this information is _not_ stored in the output block and
+it is up to the users of the package to record whether
+[`ReadTable`](https://godoc.org/github.com/klauspost/compress/huff0#ReadTable)
+should be called, based on the boolean reported back from the CompressXX call.
+
+If you want to store the table separately from the data, you can access them as
+`OutData` and `OutTable` on the
+[`Scratch`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch)
+object.
+
+## Decompressing
+
+The first part of decoding is to initialize the decoding tables through
+[`ReadTable`](https://godoc.org/github.com/klauspost/compress/huff0#ReadTable).
+You can supply the complete block to `ReadTable` and it will return the data
+part of the block, which can be given to the decompressor.
+
+Decompressing is done by calling the
+[`Decompress1X`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch.Decompress1X)
+or
+[`Decompress4X`](https://godoc.org/github.com/klauspost/compress/huff0#Scratch.Decompress4X)
+function.
+
+For concurrently decompressing content with a fixed table, a stateless
+[`Decoder`](https://godoc.org/github.com/klauspost/compress/huff0#Decoder) can
+be requested, which will remain correct as long as the scratch is unchanged. The
+capacity of the provided slice indicates the expected output size.
+
+You must provide the output from the compression stage, at exactly the size you
+got back. If you receive an error back, your input was likely corrupted.
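+
+A minimal single-stream round trip, as a sketch (hedged: it assumes the
+`Compress1X`, `ReadTable` and `Scratch.Decompress1X` shapes linked above):
+
+```go
+package main
+
+import (
+	"bytes"
+	"fmt"
+
+	"github.com/klauspost/compress/huff0"
+)
+
+func main() {
+	in := bytes.Repeat([]byte("entropy coding"), 1024)
+
+	var s huff0.Scratch
+	comp, reused, err := huff0.Compress1X(in, &s)
+	if err != nil {
+		// Handle ErrIncompressible, ErrUseRLE and ErrTooBig in real code.
+		panic(err)
+	}
+	// 'reused' reports whether a previous table was reused; record it so the
+	// decoder knows whether the block starts with a table.
+	_ = reused
+
+	// ReadTable consumes the table at the start of the block and returns the
+	// remaining data, which is handed to the decompressor.
+	dec, remain, err := huff0.ReadTable(comp, nil)
+	if err != nil {
+		panic(err)
+	}
+	out, err := dec.Decompress1X(remain)
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(bytes.Equal(in, out)) // true, but add your own checksums
+}
+```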
+
+It is important to note that a successful decoding does _not_ mean your output
+matches your original input. There are no integrity checks, so relying on errors
+from the decompressor does not assure your data is valid.
+
+# Contributing
+
+Contributions are always welcome. Be aware that adding public functions will
+require good justification and breaking changes will likely not be accepted. If
+in doubt open an issue before writing the PR.
diff --git a/examples/go/vendor/github.com/klauspost/compress/zstd/README.md b/examples/go/vendor/github.com/klauspost/compress/zstd/README.md
index 92e2347..e7a872f 100644
--- a/examples/go/vendor/github.com/klauspost/compress/zstd/README.md
+++ b/examples/go/vendor/github.com/klauspost/compress/zstd/README.md
@@ -1,53 +1,61 @@
-# zstd 
+# zstd

-[Zstandard](https://facebook.github.io/zstd/) is a real-time compression algorithm, providing high compression ratios.
-It offers a very wide range of compression / speed trade-off, while being backed by a very fast decoder.
-A high performance compression algorithm is implemented. For now focused on speed.
+[Zstandard](https://facebook.github.io/zstd/) is a real-time compression
+algorithm, providing high compression ratios. It offers a very wide range of
+compression / speed trade-offs, while being backed by a very fast decoder. A
+high performance compression algorithm is implemented; for now it is focused on
+speed.

-This package provides [compression](#Compressor) to and [decompression](#Decompressor) of Zstandard content.
+This package provides [compression](#Compressor) and
+[decompression](#Decompressor) of Zstandard content.

-This package is pure Go and without use of "unsafe".
+This package is pure Go and does not use "unsafe".

-The `zstd` package is provided as open source software using a Go standard license.
+The `zstd` package is provided as open source software using a Go standard
+license.

-Currently the package is heavily optimized for 64 bit processors and will be significantly slower on 32 bit processors.
+Currently the package is heavily optimized for 64 bit processors and will be
+significantly slower on 32 bit processors.

-For seekable zstd streams, see [this excellent package](https://github.com/SaveTheRbtz/zstd-seekable-format-go).
+For seekable zstd streams, see
+[this excellent package](https://github.com/SaveTheRbtz/zstd-seekable-format-go).

## Installation

-Install using `go get -u github.com/klauspost/compress`. The package is located in `github.com/klauspost/compress/zstd`.
+Install using `go get -u github.com/klauspost/compress`. The package is located
+in `github.com/klauspost/compress/zstd`.

[![Go Reference](https://pkg.go.dev/badge/github.com/klauspost/compress/zstd.svg)](https://pkg.go.dev/github.com/klauspost/compress/zstd)

## Compressor

-### Status:
+### Status

-STABLE - there may always be subtle bugs, a wide variety of content has been tested and the library is actively
-used by several projects. This library is being [fuzz-tested](https://github.com/klauspost/compress-fuzz) for all updates.
+STABLE - there may always be subtle bugs, a wide variety of content has been
+tested and the library is actively used by several projects. This library is
+being [fuzz-tested](https://github.com/klauspost/compress-fuzz) for all updates.

-There may still be specific combinations of data types/size/settings that could lead to edge cases,
-so as always, testing is recommended.
+There may still be specific combinations of data types/size/settings that could
+lead to edge cases, so as always, testing is recommended.

-For now, a high speed (fastest) and medium-fast (default) compressor has been implemented.
+For now, a high speed (fastest) and a medium-fast (default) compressor have been
+implemented.

-* The "Fastest" compression ratio is roughly equivalent to zstd level 1.
-* The "Default" compression ratio is roughly equivalent to zstd level 3 (default).
-* The "Better" compression ratio is roughly equivalent to zstd level 7.
-* The "Best" compression ratio is roughly equivalent to zstd level 11.
+- The "Fastest" compression ratio is roughly equivalent to zstd level 1.
+- The "Default" compression ratio is roughly equivalent to zstd level 3
+  (default).
+- The "Better" compression ratio is roughly equivalent to zstd level 7.
+- The "Best" compression ratio is roughly equivalent to zstd level 11.

-In terms of speed, it is typically 2x as fast as the stdlib deflate/gzip in its fastest mode.
-The compression ratio compared to stdlib is around level 3, but usually 3x as fast.
+In terms of speed, it is typically 2x as fast as the stdlib deflate/gzip in its
+fastest mode. At a compression ratio around stdlib level 3, it is usually 3x as
+fast.

-
 ### Usage

-An Encoder can be used for either compressing a stream via the
-`io.WriteCloser` interface supported by the Encoder or as multiple independent
-tasks via the `EncodeAll` function.
-Smaller encodes are encouraged to use the EncodeAll function.
-Use `NewWriter` to create a new instance that can be used for both.
+An Encoder can be used for either compressing a stream via the `io.WriteCloser`
+interface supported by the Encoder or as multiple independent tasks via the
+`EncodeAll` function. Smaller encodes are encouraged to use the EncodeAll
+function. Use `NewWriter` to create a new instance that can be used for both.

To create a writer with default options, do like this:

@@ -67,57 +75,70 @@ func Compress(in io.Reader, out io.Writer) error {
}
```

-Now you can encode by writing data to `enc`. The output will be finished writing when `Close()` is called.
-Even if your encode fails, you should still call `Close()` to release any resources that may be held up.
+Now you can encode by writing data to `enc`. The output will be finished writing
+when `Close()` is called. Even if your encode fails, you should still call
+`Close()` to release any resources that may be held up.

-The above is fine for big encodes. However, whenever possible try to *reuse* the writer.
+The above is fine for big encodes. However, whenever possible try to _reuse_ the
+writer.

-To reuse the encoder, you can use the `Reset(io.Writer)` function to change to another output.
-This will allow the encoder to reuse all resources and avoid wasteful allocations.
+To reuse the encoder, you can use the `Reset(io.Writer)` function to change to
+another output. This will allow the encoder to reuse all resources and avoid
+wasteful allocations.

-Currently stream encoding has 'light' concurrency, meaning up to 2 goroutines can be working on part
-of a stream. This is independent of the `WithEncoderConcurrency(n)`, but that is likely to change
-in the future. So if you want to limit concurrency for future updates, specify the concurrency
-you would like.
+Currently stream encoding has 'light' concurrency, meaning up to 2 goroutines
+can be working on part of a stream.
This is independent of
+`WithEncoderConcurrency(n)`, but that is likely to change in the future. So if
+you want to limit concurrency for future updates, specify the concurrency you
+would like.

-If you would like stream encoding to be done without spawning async goroutines, use `WithEncoderConcurrency(1)`
-which will compress input as each block is completed, blocking on writes until each has completed.
+If you would like stream encoding to be done without spawning async goroutines,
+use `WithEncoderConcurrency(1)`, which will compress input as each block is
+completed, blocking on writes until each has completed.

-You can specify your desired compression level using `WithEncoderLevel()` option. Currently only pre-defined
-compression settings can be specified.
+You can specify your desired compression level using the `WithEncoderLevel()`
+option. Currently only pre-defined compression settings can be specified.

#### Future Compatibility Guarantees

-This will be an evolving project. When using this package it is important to note that both the compression efficiency and speed may change.
+This will be an evolving project. When using this package it is important to
+note that both the compression efficiency and speed may change.

-The goal will be to keep the default efficiency at the default zstd (level 3).
-However the encoding should never be assumed to remain the same,
-and you should not use hashes of compressed output for similarity checks.
+The goal will be to keep the default efficiency at the default zstd (level 3).
+However, the encoding should never be assumed to remain the same, and you should
+not use hashes of compressed output for similarity checks.

-The Encoder can be assumed to produce the same output from the exact same code version.
-However, the may be modes in the future that break this,
-although they will not be enabled without an explicit option.
+The Encoder can be assumed to produce the same output from the exact same code
+version. However, there may be modes in the future that break this, although
+they will not be enabled without an explicit option.

-This encoder is not designed to (and will probably never) output the exact same bitstream as the reference encoder.
+This encoder is not designed to (and will probably never) output the exact same
+bitstream as the reference encoder.

-Also note, that the cgo decompressor currently does not [report all errors on invalid input](https://github.com/DataDog/zstd/issues/59),
-[omits error checks](https://github.com/DataDog/zstd/issues/61), [ignores checksums](https://github.com/DataDog/zstd/issues/43)
-and seems to ignore concatenated streams, even though [it is part of the spec](https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md#frames).
+Also note that the cgo decompressor currently does not
+[report all errors on invalid input](https://github.com/DataDog/zstd/issues/59),
+[omits error checks](https://github.com/DataDog/zstd/issues/61),
+[ignores checksums](https://github.com/DataDog/zstd/issues/43) and seems to
+ignore concatenated streams, even though
+[it is part of the spec](https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md#frames).

#### Blocks

-For compressing small blocks, the returned encoder has a function called `EncodeAll(src, dst []byte) []byte`.
+For compressing small blocks, the returned encoder has a function called
+`EncodeAll(src, dst []byte) []byte`.

-`EncodeAll` will encode all input in src and append it to dst.
-This function can be called concurrently.
-
-Each call will only run on a same goroutine as the caller.
+`EncodeAll` will encode all input in src and append it to dst. This function can
+be called concurrently. Each call will run on the same goroutine as the caller.

-Encoded blocks can be concatenated and the result will be the combined input stream.
-Data compressed with EncodeAll can be decoded with the Decoder, using either a stream or `DecodeAll`.
+Encoded blocks can be concatenated and the result will be the combined input
+stream. Data compressed with EncodeAll can be decoded with the Decoder, using
+either a stream or `DecodeAll`.

-Especially when encoding blocks you should take special care to reuse the encoder.
-This will effectively make it run without allocations after a warmup period.
-To make it run completely without allocations, supply a destination buffer with space for all content.
+Especially when encoding blocks you should take special care to reuse the
+encoder. This will effectively make it run without allocations after a warmup
+period. To make it run completely without allocations, supply a destination
+buffer with space for all content.

```Go
import "github.com/klauspost/compress/zstd"

@@ -126,28 +147,31 @@ import "github.com/klauspost/compress/zstd"
// For this operation type we supply a nil Reader.
var encoder, _ = zstd.NewWriter(nil)

-// Compress a buffer. 
+// Compress a buffer.
// If you have a destination buffer, the allocation in the call can also be eliminated.
func Compress(src []byte) []byte {
	return encoder.EncodeAll(src, make([]byte, 0, len(src)))
-}
+}
```

-You can control the maximum number of concurrent encodes using the `WithEncoderConcurrency(n)`
-option when creating the writer.
+You can control the maximum number of concurrent encodes using the
+`WithEncoderConcurrency(n)` option when creating the writer.

-Using the Encoder for both a stream and individual blocks concurrently is safe.
+Using the Encoder for both a stream and individual blocks concurrently is safe.

### Performance

-I have collected some speed examples to compare speed and compression against other compressors.
+I have collected some speed examples to compare speed and compression against
+other compressors.

-* `file` is the input file.
-* `out` is the compressor used. `zskp` is this package. `zstd` is the Datadog cgo library. `gzstd/gzkp` is gzip standard and this library.
-* `level` is the compression level used. For `zskp` level 1 is "fastest", level 2 is "default"; 3 is "better", 4 is "best".
-* `insize`/`outsize` is the input/output size.
-* `millis` is the number of milliseconds used for compression.
-* `mb/s` is megabytes (2^20 bytes) per second.
+- `file` is the input file.
+- `out` is the compressor used. `zskp` is this package. `zstd` is the Datadog
+  cgo library. `gzstd/gzkp` is gzip standard and this library.
+- `level` is the compression level used. For `zskp`, level 1 is "fastest", level
+  2 is "default", 3 is "better", and 4 is "best".
+- `insize`/`outsize` is the input/output size.
+- `millis` is the number of milliseconds used for compression.
+- `mb/s` is megabytes (2^20 bytes) per second.

```
Silesia Corpus:
@@ -259,17 +283,20 @@ nyc-taxi-data-10M.csv gzkp 1 3325605752 922273214 13929 227.68

## Decompressor

-Status: STABLE - there may still be subtle bugs, but a wide variety of content has been tested.
+Status: STABLE - there may still be subtle bugs, but a wide variety of content
+has been tested.
+
+This library is being continuously
+[fuzz-tested](https://github.com/klauspost/compress-fuzz), kindly supplied by
+[fuzzit.dev](https://fuzzit.dev/). The main purpose of the fuzz testing is to
+ensure that it is not possible to crash the decoder, or run it past its limits
+with ANY input provided.

-This library is being continuously [fuzz-tested](https://github.com/klauspost/compress-fuzz),
-kindly supplied by [fuzzit.dev](https://fuzzit.dev/).
-The main purpose of the fuzz testing is to ensure that it is not possible to crash the decoder,
-or run it past its limits with ANY input provided.
-
### Usage

-The package has been designed for two main usages, big streams of data and smaller in-memory buffers.
-There are two main usages of the package for these. Both of them are accessed by creating a `Decoder`.
+The package has been designed for two main usages: big streams of data and
+smaller in-memory buffers. Both are accessed by creating a `Decoder`.

For streaming use a simple setup could look like this:

@@ -282,20 +309,21 @@ func Decompress(in io.Reader, out io.Writer) error {
		return err
	}
	defer d.Close()
-
+
	// Copy content...
	_, err = io.Copy(out, d)
	return err
}
```

-It is important to use the "Close" function when you no longer need the Reader to stop running goroutines,
-when running with default settings.
-Goroutines will exit once an error has been returned, including `io.EOF` at the end of a stream.
+When running with default settings, it is important to use the "Close" function
+when you no longer need the Reader, to stop running goroutines. Goroutines will
+exit once an error has been returned, including `io.EOF` at the end of a stream.

-Streams are decoded concurrently in 4 asynchronous stages to give the best possible throughput.
-However, if you prefer synchronous decompression, use `WithDecoderConcurrency(1)` which will decompress data
-as it is being requested only.
+Streams are decoded concurrently in 4 asynchronous stages to give the best
+possible throughput. However, if you prefer synchronous decompression, use
+`WithDecoderConcurrency(1)`, which will decompress data only as it is being
+requested.

For decoding buffers, it could look something like this:

@@ -310,85 +338,98 @@ var decoder, _ = zstd.NewReader(nil, zstd.WithDecoderConcurrency(0))
// so it will be allocated by the decoder.
func Decompress(src []byte) ([]byte, error) {
	return decoder.DecodeAll(src, nil)
-}
+}
```

-Both of these cases should provide the functionality needed.
-The decoder can be used for *concurrent* decompression of multiple buffers.
-By default 4 decompressors will be created.
+Both of these cases should provide the functionality needed. The decoder can be
+used for _concurrent_ decompression of multiple buffers. By default, 4
+decompressors will be created.

-It will only allow a certain number of concurrent operations to run.
-To tweak that yourself use the `WithDecoderConcurrency(n)` option when creating the decoder.
-It is possible to use `WithDecoderConcurrency(0)` to create GOMAXPROCS decoders.
+It will only allow a certain number of concurrent operations to run. To tweak
+that yourself, use the `WithDecoderConcurrency(n)` option when creating the
+decoder. It is possible to use `WithDecoderConcurrency(0)` to create GOMAXPROCS
+decoders.

### Dictionaries

-Data compressed with [dictionaries](https://github.com/facebook/zstd#the-case-for-small-data-compression) can be decompressed.
+Data compressed with
+[dictionaries](https://github.com/facebook/zstd#the-case-for-small-data-compression)
+can be decompressed.

-Dictionaries are added individually to Decoders.
-Dictionaries are generated by the `zstd --train` command and contains an initial state for the decoder.
-To add a dictionary use the `WithDecoderDicts(dicts ...[]byte)` option with the dictionary data.
-Several dictionaries can be added at once.
+Dictionaries are added individually to Decoders. Dictionaries are generated by
+the `zstd --train` command and contain an initial state for the decoder. To add
+a dictionary, use the `WithDecoderDicts(dicts ...[]byte)` option with the
+dictionary data. Several dictionaries can be added at once.

-The dictionary will be used automatically for the data that specifies them.
-A re-used Decoder will still contain the dictionaries registered.
+The dictionary will be used automatically for the data that specifies it. A
+re-used Decoder will still contain the dictionaries registered.

-When registering multiple dictionaries with the same ID, the last one will be used.
+When registering multiple dictionaries with the same ID, the last one will be
+used.

It is possible to use dictionaries when compressing data.

-To enable a dictionary use `WithEncoderDict(dict []byte)`. Here only one dictionary will be used
-and it will likely be used even if it doesn't improve compression.
+To enable a dictionary, use `WithEncoderDict(dict []byte)`. Here only one
+dictionary will be used and it will likely be used even if it doesn't improve
+compression.

The used dictionary must be used to decompress the content.

-For any real gains, the dictionary should be built with similar data.
-If an unsuitable dictionary is used the output may be slightly larger than using no dictionary.
-Use the [zstd commandline tool](https://github.com/facebook/zstd/releases) to build a dictionary from sample data.
-For information see [zstd dictionary information](https://github.com/facebook/zstd#the-case-for-small-data-compression).
+For any real gains, the dictionary should be built with similar data. If an
+unsuitable dictionary is used, the output may be slightly larger than using no
+dictionary. Use the
+[zstd commandline tool](https://github.com/facebook/zstd/releases) to build a
+dictionary from sample data. For information see
+[zstd dictionary information](https://github.com/facebook/zstd#the-case-for-small-data-compression).

-For now there is a fixed startup performance penalty for compressing content with dictionaries.
-This will likely be improved over time. Just be aware to test performance when implementing.
+For now there is a fixed startup performance penalty for compressing content
+with dictionaries. This will likely be improved over time. Just be sure to test
+performance when implementing.

### Allocation-less operation

-The decoder has been designed to operate without allocations after a warmup.
+The decoder has been designed to operate without allocations after a warmup.

-This means that you should *store* the decoder for best performance.
-To re-use a stream decoder, use the `Reset(r io.Reader) error` to switch to another stream.
+This means that you should _store_ the decoder for best performance. To re-use a
+stream decoder, use `Reset(r io.Reader) error` to switch to another stream.

A decoder can safely be re-used even if the previous stream failed.

To release the resources, you must call the `Close()` function on a decoder.
-After this it can *no longer be reused*, but all running goroutines will be stopped.
-So you *must* use this if you will no longer need the Reader.
+After this it can _no longer be reused_, but all running goroutines will be
+stopped. So you _must_ call this if you will no longer need the Reader.

-For decompressing smaller buffers a single decoder can be used.
-When decoding buffers, you can supply a destination slice with length 0 and your expected capacity.
-In this case no unneeded allocations should be made.
+For decompressing smaller buffers, a single decoder can be used. When decoding
+buffers, you can supply a destination slice with length 0 and your expected
+capacity. In this case no unneeded allocations should be made.

### Concurrency

-The buffer decoder does everything on the same goroutine and does nothing concurrently.
-It can however decode several buffers concurrently. Use `WithDecoderConcurrency(n)` to limit that.
+The buffer decoder does everything on the same goroutine and does nothing
+concurrently. It can, however, decode several buffers concurrently. Use
+`WithDecoderConcurrency(n)` to limit that.

The stream decoder will create goroutines that:

-1) Reads input and splits the input into blocks.
-2) Decompression of literals.
-3) Decompression of sequences.
-4) Reconstruction of output stream.
+1. Read input and split it into blocks.
+2. Decompress literals.
+3. Decompress sequences.
+4. Reconstruct the output stream.
+
+So effectively this also means the decoder will "read ahead" and prepare data to
+always be available for output.

-So effectively this also means the decoder will "read ahead" and prepare data to always be available for output.
+The concurrency level will, for streams, determine how many blocks ahead the
+compression will start.

-The concurrency level will, for streams, determine how many blocks ahead the compression will start.
+Since "blocks" are quite dependent on the output of the previous block, stream
+decoding will only have limited concurrency.

-Since "blocks" are quite dependent on the output of the previous block stream decoding will only have limited concurrency.
+In practice this means that concurrency is often limited to utilizing about 3
+cores effectively.

-In practice this means that concurrency is often limited to utilizing about 3 cores effectively.
-
### Benchmarks

-The first two are streaming decodes and the last are smaller inputs.
+The first two are streaming decodes and the last are smaller inputs.

Running on AMD Ryzen 9 3950X 16-Core Processor. AMD64 assembly used.

@@ -416,26 +457,32 @@ This reflects the performance around May 2022, but this may be out of date.

## Zstd inside ZIP files

-It is possible to use zstandard to compress individual files inside zip archives.
-While this isn't widely supported it can be useful for internal files.
+It is possible to use zstandard to compress individual files inside zip
+archives. While this isn't widely supported, it can be useful for internal
+files.

-To support the compression and decompression of these files you must register a compressor and decompressor.
+To support the compression and decompression of these files, you must register a
+compressor and decompressor.

-It is highly recommended registering the (de)compressors on individual zip Reader/Writer and NOT
-use the global registration functions. The main reason for this is that 2 registrations from
-different packages will result in a panic.
+It is highly recommended to register the (de)compressors on individual zip
+Readers/Writers and NOT to use the global registration functions. The main
+reason for this is that two registrations from different packages will result in
+a panic.

-It is a good idea to only have a single compressor and decompressor, since they can be used for multiple zip
-files concurrently, and using a single instance will allow reusing some resources.
+It is a good idea to only have a single compressor and decompressor, since they
+can be used for multiple zip files concurrently, and using a single instance
+will allow reusing some resources.

-See [this example](https://pkg.go.dev/github.com/klauspost/compress/zstd#example-ZipCompressor) for
-how to compress and decompress files inside zip archives.
+See
+[this example](https://pkg.go.dev/github.com/klauspost/compress/zstd#example-ZipCompressor)
+for how to compress and decompress files inside zip archives.

# Contributions

-Contributions are always welcome.
-For new features/fixes, remember to add tests and for performance enhancements include benchmarks.
+Contributions are always welcome. For new features/fixes, remember to add tests,
+and for performance enhancements, include benchmarks.

-For general feedback and experience reports, feel free to open an issue or write me on [Twitter](https://twitter.com/sh0dan).
+For general feedback and experience reports, feel free to open an issue or write
+me on [Twitter](https://twitter.com/sh0dan).

-This package includes the excellent [`github.com/cespare/xxhash`](https://github.com/cespare/xxhash) package Copyright (c) 2016 Caleb Spare.
+This package includes the excellent
+[`github.com/cespare/xxhash`](https://github.com/cespare/xxhash) package,
+Copyright (c) 2016 Caleb Spare.
diff --git a/examples/go/vendor/github.com/klauspost/compress/zstd/internal/xxhash/README.md b/examples/go/vendor/github.com/klauspost/compress/zstd/internal/xxhash/README.md
index 777290d..166e7d7 100644
--- a/examples/go/vendor/github.com/klauspost/compress/zstd/internal/xxhash/README.md
+++ b/examples/go/vendor/github.com/klauspost/compress/zstd/internal/xxhash/README.md
@@ -1,6 +1,7 @@
# xxhash

-VENDORED: Go to [github.com/cespare/xxhash](https://github.com/cespare/xxhash) for original package.
+VENDORED: Go to [github.com/cespare/xxhash](https://github.com/cespare/xxhash)
+for original package.

xxhash is a Go implementation of the 64-bit [xxHash] algorithm, XXH64. This is
a high-quality hashing algorithm that is much faster than anything in the Go
@@ -35,9 +36,9 @@ This package is in a module and the latest code is in version 2 of the module.
You need a version of Go with at least "minimal module compatibility" to use
github.com/cespare/xxhash/v2:

-* 1.9.7+ for Go 1.9
-* 1.10.3+ for Go 1.10
-* Go 1.11 or later
+- 1.9.7+ for Go 1.9
+- 1.10.3+ for Go 1.10
+- Go 1.11 or later

I recommend using the latest release of Go.

@@ -48,9 +49,9 @@ implementations of Sum64.
| input size | purego | asm | | ---------- | --------- | --------- | -| 4 B | 1.3 GB/s | 1.2 GB/s | -| 16 B | 2.9 GB/s | 3.5 GB/s | -| 100 B | 6.9 GB/s | 8.1 GB/s | +| 4 B | 1.3 GB/s | 1.2 GB/s | +| 16 B | 2.9 GB/s | 3.5 GB/s | +| 100 B | 6.9 GB/s | 8.1 GB/s | | 4 KB | 11.7 GB/s | 16.7 GB/s | | 10 MB | 12.0 GB/s | 17.3 GB/s | diff --git a/examples/go/vendor/github.com/pierrec/lz4/README.md b/examples/go/vendor/github.com/pierrec/lz4/README.md index 4ee388e..c2d4dfb 100644 --- a/examples/go/vendor/github.com/pierrec/lz4/README.md +++ b/examples/go/vendor/github.com/pierrec/lz4/README.md @@ -7,8 +7,10 @@ ## Overview -This package provides a streaming interface to [LZ4 data streams](http://fastcompression.blogspot.fr/2013/04/lz4-streaming-format-final.html) as well as low level compress and uncompress functions for LZ4 data blocks. -The implementation is based on the reference C [one](https://github.com/lz4/lz4). +This package provides a streaming interface to +[LZ4 data streams](http://fastcompression.blogspot.fr/2013/04/lz4-streaming-format-final.html) +as well as low level compress and uncompress functions for LZ4 data blocks. The +implementation is based on the reference C [one](https://github.com/lz4/lz4). ## Install @@ -48,7 +50,6 @@ uncompress [arguments] [ ...] ``` - ## Example ``` @@ -83,8 +84,11 @@ Contributions are very welcome for bug fixing, performance improvements...! ## Contributors -Thanks to all [contributors](https://github.com/pierrec/lz4/graphs/contributors) so far! +Thanks to all [contributors](https://github.com/pierrec/lz4/graphs/contributors) +so far! -Special thanks to [@Zariel](https://github.com/Zariel) for his asm implementation of the decoder. +Special thanks to [@Zariel](https://github.com/Zariel) for his asm +implementation of the decoder. -Special thanks to [@klauspost](https://github.com/klauspost) for his work on optimizing the code. +Special thanks to [@klauspost](https://github.com/klauspost) for his work on +optimizing the code. diff --git a/examples/go/vendor/github.com/pkg/errors/README.md b/examples/go/vendor/github.com/pkg/errors/README.md index 54dfdcb..30f485d 100644 --- a/examples/go/vendor/github.com/pkg/errors/README.md +++ b/examples/go/vendor/github.com/pkg/errors/README.md @@ -5,31 +5,47 @@ Package errors provides simple error handling primitives. `go get github.com/pkg/errors` The traditional error handling idiom in Go is roughly akin to + ```go if err != nil { return err } ``` -which applied recursively up the call stack results in error reports without context or debugging information. The errors package allows programmers to add context to the failure path in their code in a way that does not destroy the original value of the error. + +which applied recursively up the call stack results in error reports without +context or debugging information. The errors package allows programmers to add +context to the failure path in their code in a way that does not destroy the +original value of the error. ## Adding context to an error -The errors.Wrap function returns a new error that adds context to the original error. For example +The errors.Wrap function returns a new error that adds context to the original +error. For example + ```go _, err := ioutil.ReadAll(r) if err != nil { return errors.Wrap(err, "read failed") } ``` + ## Retrieving the cause of an error -Using `errors.Wrap` constructs a stack of errors, adding context to the preceding error. 
Depending on the nature of the error it may be necessary to reverse the operation of errors.Wrap to retrieve the original error for inspection. Any error value which implements this interface can be inspected by `errors.Cause`. +Using `errors.Wrap` constructs a stack of errors, adding context to the +preceding error. Depending on the nature of the error it may be necessary to +reverse the operation of errors.Wrap to retrieve the original error for +inspection. Any error value which implements this interface can be inspected by +`errors.Cause`. + ```go type causer interface { Cause() error } ``` -`errors.Cause` will recursively retrieve the topmost error which does not implement `causer`, which is assumed to be the original cause. For example: + +`errors.Cause` will recursively retrieve the topmost error which does not +implement `causer`, which is assumed to be the original cause. For example: + ```go switch err := errors.Cause(err).(type) { case *MyError: @@ -43,14 +59,20 @@ default: ## Roadmap -With the upcoming [Go2 error proposals](https://go.googlesource.com/proposal/+/master/design/go2draft.md) this package is moving into maintenance mode. The roadmap for a 1.0 release is as follows: +With the upcoming +[Go2 error proposals](https://go.googlesource.com/proposal/+/master/design/go2draft.md) +this package is moving into maintenance mode. The roadmap for a 1.0 release is +as follows: -- 0.9. Remove pre Go 1.9 and Go 1.10 support, address outstanding pull requests (if possible) +- 0.9. Remove pre Go 1.9 and Go 1.10 support, address outstanding pull requests + (if possible) - 1.0. Final release. ## Contributing -Because of the Go2 errors changes, this package is not accepting proposals for new functionality. With that said, we welcome pull requests, bug fixes and issue reports. +Because of the Go2 errors changes, this package is not accepting proposals for +new functionality. With that said, we welcome pull requests, bug fixes and issue +reports. Before sending a PR, please discuss your change by raising an issue. diff --git a/examples/go/vendor/github.com/rabbitmq/amqp091-go/CHANGELOG.md b/examples/go/vendor/github.com/rabbitmq/amqp091-go/CHANGELOG.md index fd03c1f..f4b484a 100644 --- a/examples/go/vendor/github.com/rabbitmq/amqp091-go/CHANGELOG.md +++ b/examples/go/vendor/github.com/rabbitmq/amqp091-go/CHANGELOG.md @@ -6,33 +6,61 @@ **Implemented enhancements:** -- Undeprecate non-context publish functions [\#259](https://github.com/rabbitmq/amqp091-go/pull/259) ([Zerpet](https://github.com/Zerpet)) -- Update Go directive [\#257](https://github.com/rabbitmq/amqp091-go/pull/257) ([Zerpet](https://github.com/Zerpet)) +- Undeprecate non-context publish functions + [\#259](https://github.com/rabbitmq/amqp091-go/pull/259) + ([Zerpet](https://github.com/Zerpet)) +- Update Go directive [\#257](https://github.com/rabbitmq/amqp091-go/pull/257) + ([Zerpet](https://github.com/Zerpet)) **Fixed bugs:** -- republishing on reconnect bug in the example [\#249](https://github.com/rabbitmq/amqp091-go/issues/249) -- Channel Notify Close not receive event when connection is closed by RMQ server. 
[\#241](https://github.com/rabbitmq/amqp091-go/issues/241) -- Inconsistent documentation [\#231](https://github.com/rabbitmq/amqp091-go/issues/231) -- Data race in the client example [\#72](https://github.com/rabbitmq/amqp091-go/issues/72) -- Fix string function of URI [\#258](https://github.com/rabbitmq/amqp091-go/pull/258) ([Zerpet](https://github.com/Zerpet)) +- republishing on reconnect bug in the example + [\#249](https://github.com/rabbitmq/amqp091-go/issues/249) +- Channel Notify Close not receive event when connection is closed by RMQ + server. [\#241](https://github.com/rabbitmq/amqp091-go/issues/241) +- Inconsistent documentation + [\#231](https://github.com/rabbitmq/amqp091-go/issues/231) +- Data race in the client example + [\#72](https://github.com/rabbitmq/amqp091-go/issues/72) +- Fix string function of URI + [\#258](https://github.com/rabbitmq/amqp091-go/pull/258) + ([Zerpet](https://github.com/Zerpet)) **Closed issues:** -- Documentation needed \(`PublishWithContext` does not use context\) [\#195](https://github.com/rabbitmq/amqp091-go/issues/195) -- concurrent dispatch data race [\#226](https://github.com/rabbitmq/amqp091-go/issues/226) +- Documentation needed \(`PublishWithContext` does not use context\) + [\#195](https://github.com/rabbitmq/amqp091-go/issues/195) +- concurrent dispatch data race + [\#226](https://github.com/rabbitmq/amqp091-go/issues/226) **Merged pull requests:** -- Fix data race in example [\#260](https://github.com/rabbitmq/amqp091-go/pull/260) ([Zerpet](https://github.com/Zerpet)) -- Address CodeQL warning [\#252](https://github.com/rabbitmq/amqp091-go/pull/252) ([lukebakken](https://github.com/lukebakken)) -- Add support for additional AMQP URI query parameters [\#251](https://github.com/rabbitmq/amqp091-go/pull/251) ([vilius-g](https://github.com/vilius-g)) -- Example fix [\#250](https://github.com/rabbitmq/amqp091-go/pull/250) ([Boris-Plato](https://github.com/Boris-Plato)) -- Increasing the code coverage [\#248](https://github.com/rabbitmq/amqp091-go/pull/248) ([edercarloscosta](https://github.com/edercarloscosta)) -- Use correct mutex to guard confirms.published [\#240](https://github.com/rabbitmq/amqp091-go/pull/240) ([hjr265](https://github.com/hjr265)) -- Documenting Publishing.Expiration usage [\#232](https://github.com/rabbitmq/amqp091-go/pull/232) ([niksteff](https://github.com/niksteff)) -- fix comment typo in example\_client\_test.go [\#228](https://github.com/rabbitmq/amqp091-go/pull/228) ([wisaTong](https://github.com/wisaTong)) -- Bump go.uber.org/goleak from 1.2.1 to 1.3.0 [\#227](https://github.com/rabbitmq/amqp091-go/pull/227) ([dependabot[bot]](https://github.com/apps/dependabot)) +- Fix data race in example + [\#260](https://github.com/rabbitmq/amqp091-go/pull/260) + ([Zerpet](https://github.com/Zerpet)) +- Address CodeQL warning + [\#252](https://github.com/rabbitmq/amqp091-go/pull/252) + ([lukebakken](https://github.com/lukebakken)) +- Add support for additional AMQP URI query parameters + [\#251](https://github.com/rabbitmq/amqp091-go/pull/251) + ([vilius-g](https://github.com/vilius-g)) +- Example fix [\#250](https://github.com/rabbitmq/amqp091-go/pull/250) + ([Boris-Plato](https://github.com/Boris-Plato)) +- Increasing the code coverage + [\#248](https://github.com/rabbitmq/amqp091-go/pull/248) + ([edercarloscosta](https://github.com/edercarloscosta)) +- Use correct mutex to guard confirms.published + [\#240](https://github.com/rabbitmq/amqp091-go/pull/240) + ([hjr265](https://github.com/hjr265)) +- Documenting 
Publishing.Expiration usage + [\#232](https://github.com/rabbitmq/amqp091-go/pull/232) + ([niksteff](https://github.com/niksteff)) +- fix comment typo in example_client_test.go + [\#228](https://github.com/rabbitmq/amqp091-go/pull/228) + ([wisaTong](https://github.com/wisaTong)) +- Bump go.uber.org/goleak from 1.2.1 to 1.3.0 + [\#227](https://github.com/rabbitmq/amqp091-go/pull/227) + ([dependabot[bot]](https://github.com/apps/dependabot)) ## [v1.9.0](https://github.com/rabbitmq/amqp091-go/tree/v1.9.0) (2023-10-02) @@ -40,33 +68,59 @@ **Implemented enhancements:** -- Use of buffered delivery channels when prefetch\_count is not null [\#200](https://github.com/rabbitmq/amqp091-go/issues/200) +- Use of buffered delivery channels when prefetch_count is not null + [\#200](https://github.com/rabbitmq/amqp091-go/issues/200) **Fixed bugs:** -- connection block when write connection reset by peer [\#222](https://github.com/rabbitmq/amqp091-go/issues/222) -- Test failure on 32bit architectures [\#202](https://github.com/rabbitmq/amqp091-go/issues/202) +- connection block when write connection reset by peer + [\#222](https://github.com/rabbitmq/amqp091-go/issues/222) +- Test failure on 32bit architectures + [\#202](https://github.com/rabbitmq/amqp091-go/issues/202) **Closed issues:** -- Add a constant to set consumer timeout as queue argument [\#201](https://github.com/rabbitmq/amqp091-go/issues/201) -- Add a constant for CQ version [\#199](https://github.com/rabbitmq/amqp091-go/issues/199) -- Examples may need to be updated after \#140 [\#153](https://github.com/rabbitmq/amqp091-go/issues/153) +- Add a constant to set consumer timeout as queue argument + [\#201](https://github.com/rabbitmq/amqp091-go/issues/201) +- Add a constant for CQ version + [\#199](https://github.com/rabbitmq/amqp091-go/issues/199) +- Examples may need to be updated after \#140 + [\#153](https://github.com/rabbitmq/amqp091-go/issues/153) **Merged pull requests:** -- Update spec091.go [\#224](https://github.com/rabbitmq/amqp091-go/pull/224) ([pinkfish](https://github.com/pinkfish)) -- Closes 222 [\#223](https://github.com/rabbitmq/amqp091-go/pull/223) ([yywing](https://github.com/yywing)) -- Update write.go [\#221](https://github.com/rabbitmq/amqp091-go/pull/221) ([pinkfish](https://github.com/pinkfish)) -- Bump versions [\#219](https://github.com/rabbitmq/amqp091-go/pull/219) ([lukebakken](https://github.com/lukebakken)) -- remove extra word 'accept' from ExchangeDeclare description [\#217](https://github.com/rabbitmq/amqp091-go/pull/217) ([a-sabzian](https://github.com/a-sabzian)) -- Misc Windows CI updates [\#216](https://github.com/rabbitmq/amqp091-go/pull/216) ([lukebakken](https://github.com/lukebakken)) -- Stop using deprecated Publish function [\#207](https://github.com/rabbitmq/amqp091-go/pull/207) ([Zerpet](https://github.com/Zerpet)) -- Constant for consumer timeout queue argument [\#206](https://github.com/rabbitmq/amqp091-go/pull/206) ([Zerpet](https://github.com/Zerpet)) -- Add a constant for CQ v2 queue argument [\#205](https://github.com/rabbitmq/amqp091-go/pull/205) ([Zerpet](https://github.com/Zerpet)) -- Fix example for 32-bit compatibility [\#204](https://github.com/rabbitmq/amqp091-go/pull/204) ([Zerpet](https://github.com/Zerpet)) -- Fix to increase timeout milliseconds since it's too tight [\#203](https://github.com/rabbitmq/amqp091-go/pull/203) ([t2y](https://github.com/t2y)) -- Add Channel.ConsumeWithContext to be able to cancel delivering [\#192](https://github.com/rabbitmq/amqp091-go/pull/192) 
([t2y](https://github.com/t2y)) +- Update spec091.go [\#224](https://github.com/rabbitmq/amqp091-go/pull/224) + ([pinkfish](https://github.com/pinkfish)) +- Closes 222 [\#223](https://github.com/rabbitmq/amqp091-go/pull/223) + ([yywing](https://github.com/yywing)) +- Update write.go [\#221](https://github.com/rabbitmq/amqp091-go/pull/221) + ([pinkfish](https://github.com/pinkfish)) +- Bump versions [\#219](https://github.com/rabbitmq/amqp091-go/pull/219) + ([lukebakken](https://github.com/lukebakken)) +- remove extra word 'accept' from ExchangeDeclare description + [\#217](https://github.com/rabbitmq/amqp091-go/pull/217) + ([a-sabzian](https://github.com/a-sabzian)) +- Misc Windows CI updates + [\#216](https://github.com/rabbitmq/amqp091-go/pull/216) + ([lukebakken](https://github.com/lukebakken)) +- Stop using deprecated Publish function + [\#207](https://github.com/rabbitmq/amqp091-go/pull/207) + ([Zerpet](https://github.com/Zerpet)) +- Constant for consumer timeout queue argument + [\#206](https://github.com/rabbitmq/amqp091-go/pull/206) + ([Zerpet](https://github.com/Zerpet)) +- Add a constant for CQ v2 queue argument + [\#205](https://github.com/rabbitmq/amqp091-go/pull/205) + ([Zerpet](https://github.com/Zerpet)) +- Fix example for 32-bit compatibility + [\#204](https://github.com/rabbitmq/amqp091-go/pull/204) + ([Zerpet](https://github.com/Zerpet)) +- Fix to increase timeout milliseconds since it's too tight + [\#203](https://github.com/rabbitmq/amqp091-go/pull/203) + ([t2y](https://github.com/t2y)) +- Add Channel.ConsumeWithContext to be able to cancel delivering + [\#192](https://github.com/rabbitmq/amqp091-go/pull/192) + ([t2y](https://github.com/t2y)) ## [v1.8.1](https://github.com/rabbitmq/amqp091-go/tree/v1.8.1) (2023-05-04) @@ -74,11 +128,14 @@ **Fixed bugs:** -- Fixed incorrect version reported in client properties [52ce2efd03c53dcf77d5496977da46840e9abd24](https://github.com/rabbitmq/amqp091-go/commit/52ce2efd03c53dcf77d5496977da46840e9abd24) +- Fixed incorrect version reported in client properties + [52ce2efd03c53dcf77d5496977da46840e9abd24](https://github.com/rabbitmq/amqp091-go/commit/52ce2efd03c53dcf77d5496977da46840e9abd24) **Merged pull requests:** -- Fix Example Client not reconnecting [\#186](https://github.com/rabbitmq/amqp091-go/pull/186) ([frankfil](https://github.com/frankfil)) +- Fix Example Client not reconnecting + [\#186](https://github.com/rabbitmq/amqp091-go/pull/186) + ([frankfil](https://github.com/frankfil)) ## [v1.8.0](https://github.com/rabbitmq/amqp091-go/tree/v1.8.0) (2023-03-21) @@ -86,15 +143,23 @@ **Closed issues:** -- memory leak [\#179](https://github.com/rabbitmq/amqp091-go/issues/179) -- the publishWithContext interface will not return when it times out [\#178](https://github.com/rabbitmq/amqp091-go/issues/178) +- memory leak [\#179](https://github.com/rabbitmq/amqp091-go/issues/179) +- the publishWithContext interface will not return when it times out + [\#178](https://github.com/rabbitmq/amqp091-go/issues/178) **Merged pull requests:** -- Fix race condition on confirms [\#183](https://github.com/rabbitmq/amqp091-go/pull/183) ([calloway-jacob](https://github.com/calloway-jacob)) -- Add a CloseDeadline function to Connection [\#181](https://github.com/rabbitmq/amqp091-go/pull/181) ([Zerpet](https://github.com/Zerpet)) -- Fix memory leaks [\#180](https://github.com/rabbitmq/amqp091-go/pull/180) ([GXKe](https://github.com/GXKe)) -- Bump go.uber.org/goleak from 1.2.0 to 1.2.1 [\#177](https://github.com/rabbitmq/amqp091-go/pull/177) 
([dependabot[bot]](https://github.com/apps/dependabot)) +- Fix race condition on confirms + [\#183](https://github.com/rabbitmq/amqp091-go/pull/183) + ([calloway-jacob](https://github.com/calloway-jacob)) +- Add a CloseDeadline function to Connection + [\#181](https://github.com/rabbitmq/amqp091-go/pull/181) + ([Zerpet](https://github.com/Zerpet)) +- Fix memory leaks [\#180](https://github.com/rabbitmq/amqp091-go/pull/180) + ([GXKe](https://github.com/GXKe)) +- Bump go.uber.org/goleak from 1.2.0 to 1.2.1 + [\#177](https://github.com/rabbitmq/amqp091-go/pull/177) + ([dependabot[bot]](https://github.com/apps/dependabot)) ## [v1.7.0](https://github.com/rabbitmq/amqp091-go/tree/v1.7.0) (2023-02-09) @@ -102,17 +167,29 @@ **Closed issues:** -- \#31 resurfacing \(?\) [\#170](https://github.com/rabbitmq/amqp091-go/issues/170) -- Deprecate QueueInspect [\#167](https://github.com/rabbitmq/amqp091-go/issues/167) -- v1.6.0 causing rabbit connection errors [\#160](https://github.com/rabbitmq/amqp091-go/issues/160) +- \#31 resurfacing \(?\) + [\#170](https://github.com/rabbitmq/amqp091-go/issues/170) +- Deprecate QueueInspect + [\#167](https://github.com/rabbitmq/amqp091-go/issues/167) +- v1.6.0 causing rabbit connection errors + [\#160](https://github.com/rabbitmq/amqp091-go/issues/160) **Merged pull requests:** -- Set channels and allocator to nil in shutdown [\#172](https://github.com/rabbitmq/amqp091-go/pull/172) ([lukebakken](https://github.com/lukebakken)) -- Fix racing in Open [\#171](https://github.com/rabbitmq/amqp091-go/pull/171) ([Zerpet](https://github.com/Zerpet)) -- adding go 1.20 to tests [\#169](https://github.com/rabbitmq/amqp091-go/pull/169) ([halilylm](https://github.com/halilylm)) -- Deprecate the QueueInspect function [\#168](https://github.com/rabbitmq/amqp091-go/pull/168) ([lukebakken](https://github.com/lukebakken)) -- Check if channel is nil before updating it [\#150](https://github.com/rabbitmq/amqp091-go/pull/150) ([julienschmidt](https://github.com/julienschmidt)) +- Set channels and allocator to nil in shutdown + [\#172](https://github.com/rabbitmq/amqp091-go/pull/172) + ([lukebakken](https://github.com/lukebakken)) +- Fix racing in Open [\#171](https://github.com/rabbitmq/amqp091-go/pull/171) + ([Zerpet](https://github.com/Zerpet)) +- adding go 1.20 to tests + [\#169](https://github.com/rabbitmq/amqp091-go/pull/169) + ([halilylm](https://github.com/halilylm)) +- Deprecate the QueueInspect function + [\#168](https://github.com/rabbitmq/amqp091-go/pull/168) + ([lukebakken](https://github.com/lukebakken)) +- Check if channel is nil before updating it + [\#150](https://github.com/rabbitmq/amqp091-go/pull/150) + ([julienschmidt](https://github.com/julienschmidt)) ## [v1.6.1](https://github.com/rabbitmq/amqp091-go/tree/v1.6.1) (2023-02-01) @@ -120,7 +197,9 @@ **Merged pull requests:** -- Update Makefile targets related to RabbitMQ [\#163](https://github.com/rabbitmq/amqp091-go/pull/163) ([Zerpet](https://github.com/Zerpet)) +- Update Makefile targets related to RabbitMQ + [\#163](https://github.com/rabbitmq/amqp091-go/pull/163) + ([Zerpet](https://github.com/Zerpet)) ## [v1.6.1-rc.2](https://github.com/rabbitmq/amqp091-go/tree/v1.6.1-rc.2) (2023-01-31) @@ -128,7 +207,9 @@ **Merged pull requests:** -- Do not overly protect writes [\#162](https://github.com/rabbitmq/amqp091-go/pull/162) ([lukebakken](https://github.com/lukebakken)) +- Do not overly protect writes + [\#162](https://github.com/rabbitmq/amqp091-go/pull/162) + ([lukebakken](https://github.com/lukebakken)) ## 
[v1.6.1-rc.1](https://github.com/rabbitmq/amqp091-go/tree/v1.6.1-rc.1) (2023-01-31) @@ -136,11 +217,14 @@ **Closed issues:** -- Calling Channel\(\) on an empty connection panics [\#148](https://github.com/rabbitmq/amqp091-go/issues/148) +- Calling Channel\(\) on an empty connection panics + [\#148](https://github.com/rabbitmq/amqp091-go/issues/148) **Merged pull requests:** -- Ensure flush happens and correctly lock connection for a series of unflushed writes [\#161](https://github.com/rabbitmq/amqp091-go/pull/161) ([lukebakken](https://github.com/lukebakken)) +- Ensure flush happens and correctly lock connection for a series of unflushed + writes [\#161](https://github.com/rabbitmq/amqp091-go/pull/161) + ([lukebakken](https://github.com/lukebakken)) ## [v1.6.0](https://github.com/rabbitmq/amqp091-go/tree/v1.6.0) (2023-01-20) @@ -148,35 +232,71 @@ **Implemented enhancements:** -- Add constants for Queue arguments [\#145](https://github.com/rabbitmq/amqp091-go/pull/145) ([Zerpet](https://github.com/Zerpet)) +- Add constants for Queue arguments + [\#145](https://github.com/rabbitmq/amqp091-go/pull/145) + ([Zerpet](https://github.com/Zerpet)) **Closed issues:** -- README not up to date [\#154](https://github.com/rabbitmq/amqp091-go/issues/154) -- Allow re-using default connection config \(custom properties\) [\#152](https://github.com/rabbitmq/amqp091-go/issues/152) -- Rename package name to amqp in V2 [\#151](https://github.com/rabbitmq/amqp091-go/issues/151) -- Helper types to declare quorum queues [\#144](https://github.com/rabbitmq/amqp091-go/issues/144) -- Inefficient use of buffers reduces potential throughput for basicPublish with small messages. [\#141](https://github.com/rabbitmq/amqp091-go/issues/141) -- bug, close cause panic [\#130](https://github.com/rabbitmq/amqp091-go/issues/130) -- Publishing Headers are unable to store Table with slice values [\#125](https://github.com/rabbitmq/amqp091-go/issues/125) -- Example client can deadlock in Close due to unconsumed confirmations [\#122](https://github.com/rabbitmq/amqp091-go/issues/122) -- SAC not working properly [\#106](https://github.com/rabbitmq/amqp091-go/issues/106) +- README not up to date + [\#154](https://github.com/rabbitmq/amqp091-go/issues/154) +- Allow re-using default connection config \(custom properties\) + [\#152](https://github.com/rabbitmq/amqp091-go/issues/152) +- Rename package name to amqp in V2 + [\#151](https://github.com/rabbitmq/amqp091-go/issues/151) +- Helper types to declare quorum queues + [\#144](https://github.com/rabbitmq/amqp091-go/issues/144) +- Inefficient use of buffers reduces potential throughput for basicPublish with + small messages. 
[\#141](https://github.com/rabbitmq/amqp091-go/issues/141) +- bug, close cause panic + [\#130](https://github.com/rabbitmq/amqp091-go/issues/130) +- Publishing Headers are unable to store Table with slice values + [\#125](https://github.com/rabbitmq/amqp091-go/issues/125) +- Example client can deadlock in Close due to unconsumed confirmations + [\#122](https://github.com/rabbitmq/amqp091-go/issues/122) +- SAC not working properly + [\#106](https://github.com/rabbitmq/amqp091-go/issues/106) **Merged pull requests:** -- Add automatic CHANGELOG.md generation [\#158](https://github.com/rabbitmq/amqp091-go/pull/158) ([lukebakken](https://github.com/lukebakken)) -- Supply library-defined props with NewConnectionProperties [\#157](https://github.com/rabbitmq/amqp091-go/pull/157) ([slagiewka](https://github.com/slagiewka)) -- Fix linter warnings [\#156](https://github.com/rabbitmq/amqp091-go/pull/156) ([Zerpet](https://github.com/Zerpet)) -- Remove outdated information from README [\#155](https://github.com/rabbitmq/amqp091-go/pull/155) ([scriptcoded](https://github.com/scriptcoded)) -- Add example producer using DeferredConfirm [\#149](https://github.com/rabbitmq/amqp091-go/pull/149) ([Zerpet](https://github.com/Zerpet)) -- Ensure code is formatted [\#147](https://github.com/rabbitmq/amqp091-go/pull/147) ([lukebakken](https://github.com/lukebakken)) -- Fix inefficient use of buffers that reduces the potential throughput of basicPublish [\#142](https://github.com/rabbitmq/amqp091-go/pull/142) ([fadams](https://github.com/fadams)) -- Do not embed context in DeferredConfirmation [\#140](https://github.com/rabbitmq/amqp091-go/pull/140) ([tie](https://github.com/tie)) -- Add constant for default exchange [\#139](https://github.com/rabbitmq/amqp091-go/pull/139) ([marlongerson](https://github.com/marlongerson)) -- Fix indentation and remove unnecessary instructions [\#138](https://github.com/rabbitmq/amqp091-go/pull/138) ([alraujo](https://github.com/alraujo)) -- Remove unnecessary instruction [\#135](https://github.com/rabbitmq/amqp091-go/pull/135) ([alraujo](https://github.com/alraujo)) -- Fix example client to avoid deadlock in Close [\#123](https://github.com/rabbitmq/amqp091-go/pull/123) ([Zerpet](https://github.com/Zerpet)) -- Bump go.uber.org/goleak from 1.1.12 to 1.2.0 [\#116](https://github.com/rabbitmq/amqp091-go/pull/116) ([dependabot[bot]](https://github.com/apps/dependabot)) +- Add automatic CHANGELOG.md generation + [\#158](https://github.com/rabbitmq/amqp091-go/pull/158) + ([lukebakken](https://github.com/lukebakken)) +- Supply library-defined props with NewConnectionProperties + [\#157](https://github.com/rabbitmq/amqp091-go/pull/157) + ([slagiewka](https://github.com/slagiewka)) +- Fix linter warnings [\#156](https://github.com/rabbitmq/amqp091-go/pull/156) + ([Zerpet](https://github.com/Zerpet)) +- Remove outdated information from README + [\#155](https://github.com/rabbitmq/amqp091-go/pull/155) + ([scriptcoded](https://github.com/scriptcoded)) +- Add example producer using DeferredConfirm + [\#149](https://github.com/rabbitmq/amqp091-go/pull/149) + ([Zerpet](https://github.com/Zerpet)) +- Ensure code is formatted + [\#147](https://github.com/rabbitmq/amqp091-go/pull/147) + ([lukebakken](https://github.com/lukebakken)) +- Fix inefficient use of buffers that reduces the potential throughput of + basicPublish [\#142](https://github.com/rabbitmq/amqp091-go/pull/142) + ([fadams](https://github.com/fadams)) +- Do not embed context in DeferredConfirmation + 
[\#140](https://github.com/rabbitmq/amqp091-go/pull/140) + ([tie](https://github.com/tie)) +- Add constant for default exchange + [\#139](https://github.com/rabbitmq/amqp091-go/pull/139) + ([marlongerson](https://github.com/marlongerson)) +- Fix indentation and remove unnecessary instructions + [\#138](https://github.com/rabbitmq/amqp091-go/pull/138) + ([alraujo](https://github.com/alraujo)) +- Remove unnecessary instruction + [\#135](https://github.com/rabbitmq/amqp091-go/pull/135) + ([alraujo](https://github.com/alraujo)) +- Fix example client to avoid deadlock in Close + [\#123](https://github.com/rabbitmq/amqp091-go/pull/123) + ([Zerpet](https://github.com/Zerpet)) +- Bump go.uber.org/goleak from 1.1.12 to 1.2.0 + [\#116](https://github.com/rabbitmq/amqp091-go/pull/116) + ([dependabot[bot]](https://github.com/apps/dependabot)) ## [v1.5.0](https://github.com/rabbitmq/amqp091-go/tree/v1.5.0) (2022-09-07) @@ -184,21 +304,35 @@ **Implemented enhancements:** -- Provide a friendly way to set connection name [\#105](https://github.com/rabbitmq/amqp091-go/issues/105) +- Provide a friendly way to set connection name + [\#105](https://github.com/rabbitmq/amqp091-go/issues/105) **Closed issues:** -- Support connection.update-secret [\#107](https://github.com/rabbitmq/amqp091-go/issues/107) -- Example Client: Implementation of a Consumer with reconnection support [\#40](https://github.com/rabbitmq/amqp091-go/issues/40) +- Support connection.update-secret + [\#107](https://github.com/rabbitmq/amqp091-go/issues/107) +- Example Client: Implementation of a Consumer with reconnection support + [\#40](https://github.com/rabbitmq/amqp091-go/issues/40) **Merged pull requests:** -- use PublishWithContext instead of Publish [\#115](https://github.com/rabbitmq/amqp091-go/pull/115) ([Gsantomaggio](https://github.com/Gsantomaggio)) -- Add support for connection.update-secret [\#114](https://github.com/rabbitmq/amqp091-go/pull/114) ([Zerpet](https://github.com/Zerpet)) -- Remove warning on RabbitMQ tutorials in go [\#113](https://github.com/rabbitmq/amqp091-go/pull/113) ([ChunyiLyu](https://github.com/ChunyiLyu)) -- Update AMQP Spec [\#110](https://github.com/rabbitmq/amqp091-go/pull/110) ([Zerpet](https://github.com/Zerpet)) -- Add an example of reliable consumer [\#109](https://github.com/rabbitmq/amqp091-go/pull/109) ([Zerpet](https://github.com/Zerpet)) -- Add convenience function to set connection name [\#108](https://github.com/rabbitmq/amqp091-go/pull/108) ([Zerpet](https://github.com/Zerpet)) +- use PublishWithContext instead of Publish + [\#115](https://github.com/rabbitmq/amqp091-go/pull/115) + ([Gsantomaggio](https://github.com/Gsantomaggio)) +- Add support for connection.update-secret + [\#114](https://github.com/rabbitmq/amqp091-go/pull/114) + ([Zerpet](https://github.com/Zerpet)) +- Remove warning on RabbitMQ tutorials in go + [\#113](https://github.com/rabbitmq/amqp091-go/pull/113) + ([ChunyiLyu](https://github.com/ChunyiLyu)) +- Update AMQP Spec [\#110](https://github.com/rabbitmq/amqp091-go/pull/110) + ([Zerpet](https://github.com/Zerpet)) +- Add an example of reliable consumer + [\#109](https://github.com/rabbitmq/amqp091-go/pull/109) + ([Zerpet](https://github.com/Zerpet)) +- Add convenience function to set connection name + [\#108](https://github.com/rabbitmq/amqp091-go/pull/108) + ([Zerpet](https://github.com/Zerpet)) ## [v1.4.0](https://github.com/rabbitmq/amqp091-go/tree/v1.4.0) (2022-07-19) @@ -206,44 +340,94 @@ **Closed issues:** -- target machine actively refused connection 
[\#99](https://github.com/rabbitmq/amqp091-go/issues/99) -- 504 channel/connection is not open error occurred in multiple connection with same rabbitmq service [\#97](https://github.com/rabbitmq/amqp091-go/issues/97) -- Add possible cancel of DeferredConfirmation [\#92](https://github.com/rabbitmq/amqp091-go/issues/92) -- Documentation [\#89](https://github.com/rabbitmq/amqp091-go/issues/89) -- Channel Close gets stuck after closing a connection \(via management UI\) [\#88](https://github.com/rabbitmq/amqp091-go/issues/88) -- this library has same issue [\#83](https://github.com/rabbitmq/amqp091-go/issues/83) -- Provide a logging interface [\#81](https://github.com/rabbitmq/amqp091-go/issues/81) -- 1.4.0 release checklist [\#77](https://github.com/rabbitmq/amqp091-go/issues/77) -- Data race in the client example [\#72](https://github.com/rabbitmq/amqp091-go/issues/72) -- reader go routine hangs and leaks when Connection.Close\(\) is called multiple times [\#69](https://github.com/rabbitmq/amqp091-go/issues/69) -- Support auto-reconnect and cluster [\#65](https://github.com/rabbitmq/amqp091-go/issues/65) -- Connection/Channel Deadlock [\#32](https://github.com/rabbitmq/amqp091-go/issues/32) -- Closing connection and/or channel hangs NotifyPublish is used [\#21](https://github.com/rabbitmq/amqp091-go/issues/21) -- Consumer channel isn't closed in the event of unexpected disconnection [\#18](https://github.com/rabbitmq/amqp091-go/issues/18) +- target machine actively refused connection + [\#99](https://github.com/rabbitmq/amqp091-go/issues/99) +- 504 channel/connection is not open error occurred in multiple connection with + same rabbitmq service [\#97](https://github.com/rabbitmq/amqp091-go/issues/97) +- Add possible cancel of DeferredConfirmation + [\#92](https://github.com/rabbitmq/amqp091-go/issues/92) +- Documentation [\#89](https://github.com/rabbitmq/amqp091-go/issues/89) +- Channel Close gets stuck after closing a connection \(via management UI\) + [\#88](https://github.com/rabbitmq/amqp091-go/issues/88) +- this library has same issue + [\#83](https://github.com/rabbitmq/amqp091-go/issues/83) +- Provide a logging interface + [\#81](https://github.com/rabbitmq/amqp091-go/issues/81) +- 1.4.0 release checklist + [\#77](https://github.com/rabbitmq/amqp091-go/issues/77) +- Data race in the client example + [\#72](https://github.com/rabbitmq/amqp091-go/issues/72) +- reader go routine hangs and leaks when Connection.Close\(\) is called multiple + times [\#69](https://github.com/rabbitmq/amqp091-go/issues/69) +- Support auto-reconnect and cluster + [\#65](https://github.com/rabbitmq/amqp091-go/issues/65) +- Connection/Channel Deadlock + [\#32](https://github.com/rabbitmq/amqp091-go/issues/32) +- Closing connection and/or channel hangs NotifyPublish is used + [\#21](https://github.com/rabbitmq/amqp091-go/issues/21) +- Consumer channel isn't closed in the event of unexpected disconnection + [\#18](https://github.com/rabbitmq/amqp091-go/issues/18) **Merged pull requests:** -- fix race condition with context close and confirm at the same time on DeferredConfirmation. 
[\#101](https://github.com/rabbitmq/amqp091-go/pull/101) ([sapk](https://github.com/sapk)) -- Add build TLS config from URI [\#98](https://github.com/rabbitmq/amqp091-go/pull/98) ([reddec](https://github.com/reddec)) -- Use context for Publish methods [\#96](https://github.com/rabbitmq/amqp091-go/pull/96) ([sapk](https://github.com/sapk)) -- Added function to get the remote peer's IP address \(conn.RemoteAddr\(\)\) [\#95](https://github.com/rabbitmq/amqp091-go/pull/95) ([rabb1t](https://github.com/rabb1t)) -- Update connection documentation [\#90](https://github.com/rabbitmq/amqp091-go/pull/90) ([Zerpet](https://github.com/Zerpet)) -- Revert test to demonstrate actual bug [\#87](https://github.com/rabbitmq/amqp091-go/pull/87) ([lukebakken](https://github.com/lukebakken)) -- Minor improvements to examples [\#86](https://github.com/rabbitmq/amqp091-go/pull/86) ([lukebakken](https://github.com/lukebakken)) -- Do not skip flaky test in CI [\#85](https://github.com/rabbitmq/amqp091-go/pull/85) ([lukebakken](https://github.com/lukebakken)) -- Add logging [\#84](https://github.com/rabbitmq/amqp091-go/pull/84) ([lukebakken](https://github.com/lukebakken)) -- Add a win32 build [\#82](https://github.com/rabbitmq/amqp091-go/pull/82) ([lukebakken](https://github.com/lukebakken)) -- channel: return nothing instead of always a nil-error in receive methods [\#80](https://github.com/rabbitmq/amqp091-go/pull/80) ([fho](https://github.com/fho)) -- update the contributing & readme files, improve makefile [\#79](https://github.com/rabbitmq/amqp091-go/pull/79) ([fho](https://github.com/fho)) -- Fix lint errors [\#78](https://github.com/rabbitmq/amqp091-go/pull/78) ([lukebakken](https://github.com/lukebakken)) -- ci: run golangci-lint [\#76](https://github.com/rabbitmq/amqp091-go/pull/76) ([fho](https://github.com/fho)) -- ci: run test via make & remove travis CI config [\#75](https://github.com/rabbitmq/amqp091-go/pull/75) ([fho](https://github.com/fho)) -- ci: run tests with race detector [\#74](https://github.com/rabbitmq/amqp091-go/pull/74) ([fho](https://github.com/fho)) -- Detect go routine leaks in integration testcases [\#73](https://github.com/rabbitmq/amqp091-go/pull/73) ([fho](https://github.com/fho)) -- connection: fix: reader go-routine is leaked on connection close [\#70](https://github.com/rabbitmq/amqp091-go/pull/70) ([fho](https://github.com/fho)) -- adding best practises for NotifyPublish for issue\_21 scenario [\#68](https://github.com/rabbitmq/amqp091-go/pull/68) ([DanielePalaia](https://github.com/DanielePalaia)) -- Update Go version [\#67](https://github.com/rabbitmq/amqp091-go/pull/67) ([Zerpet](https://github.com/Zerpet)) -- Regenerate certs with SHA256 to fix test with Go 1.18+ [\#66](https://github.com/rabbitmq/amqp091-go/pull/66) ([anthonyfok](https://github.com/anthonyfok)) +- fix race condition with context close and confirm at the same time on + DeferredConfirmation. 
[\#101](https://github.com/rabbitmq/amqp091-go/pull/101) + ([sapk](https://github.com/sapk)) +- Add build TLS config from URI + [\#98](https://github.com/rabbitmq/amqp091-go/pull/98) + ([reddec](https://github.com/reddec)) +- Use context for Publish methods + [\#96](https://github.com/rabbitmq/amqp091-go/pull/96) + ([sapk](https://github.com/sapk)) +- Added function to get the remote peer's IP address \(conn.RemoteAddr\(\)\) + [\#95](https://github.com/rabbitmq/amqp091-go/pull/95) + ([rabb1t](https://github.com/rabb1t)) +- Update connection documentation + [\#90](https://github.com/rabbitmq/amqp091-go/pull/90) + ([Zerpet](https://github.com/Zerpet)) +- Revert test to demonstrate actual bug + [\#87](https://github.com/rabbitmq/amqp091-go/pull/87) + ([lukebakken](https://github.com/lukebakken)) +- Minor improvements to examples + [\#86](https://github.com/rabbitmq/amqp091-go/pull/86) + ([lukebakken](https://github.com/lukebakken)) +- Do not skip flaky test in CI + [\#85](https://github.com/rabbitmq/amqp091-go/pull/85) + ([lukebakken](https://github.com/lukebakken)) +- Add logging [\#84](https://github.com/rabbitmq/amqp091-go/pull/84) + ([lukebakken](https://github.com/lukebakken)) +- Add a win32 build [\#82](https://github.com/rabbitmq/amqp091-go/pull/82) + ([lukebakken](https://github.com/lukebakken)) +- channel: return nothing instead of always a nil-error in receive methods + [\#80](https://github.com/rabbitmq/amqp091-go/pull/80) + ([fho](https://github.com/fho)) +- update the contributing & readme files, improve makefile + [\#79](https://github.com/rabbitmq/amqp091-go/pull/79) + ([fho](https://github.com/fho)) +- Fix lint errors [\#78](https://github.com/rabbitmq/amqp091-go/pull/78) + ([lukebakken](https://github.com/lukebakken)) +- ci: run golangci-lint [\#76](https://github.com/rabbitmq/amqp091-go/pull/76) + ([fho](https://github.com/fho)) +- ci: run test via make & remove travis CI config + [\#75](https://github.com/rabbitmq/amqp091-go/pull/75) + ([fho](https://github.com/fho)) +- ci: run tests with race detector + [\#74](https://github.com/rabbitmq/amqp091-go/pull/74) + ([fho](https://github.com/fho)) +- Detect go routine leaks in integration testcases + [\#73](https://github.com/rabbitmq/amqp091-go/pull/73) + ([fho](https://github.com/fho)) +- connection: fix: reader go-routine is leaked on connection close + [\#70](https://github.com/rabbitmq/amqp091-go/pull/70) + ([fho](https://github.com/fho)) +- adding best practises for NotifyPublish for issue_21 scenario + [\#68](https://github.com/rabbitmq/amqp091-go/pull/68) + ([DanielePalaia](https://github.com/DanielePalaia)) +- Update Go version [\#67](https://github.com/rabbitmq/amqp091-go/pull/67) + ([Zerpet](https://github.com/Zerpet)) +- Regenerate certs with SHA256 to fix test with Go 1.18+ + [\#66](https://github.com/rabbitmq/amqp091-go/pull/66) + ([anthonyfok](https://github.com/anthonyfok)) ## [v1.3.4](https://github.com/rabbitmq/amqp091-go/tree/v1.3.4) (2022-04-01) @@ -251,8 +435,10 @@ **Merged pull requests:** -- bump version to 1.3.4 [\#63](https://github.com/rabbitmq/amqp091-go/pull/63) ([DanielePalaia](https://github.com/DanielePalaia)) -- updating doc [\#62](https://github.com/rabbitmq/amqp091-go/pull/62) ([DanielePalaia](https://github.com/DanielePalaia)) +- bump version to 1.3.4 [\#63](https://github.com/rabbitmq/amqp091-go/pull/63) + ([DanielePalaia](https://github.com/DanielePalaia)) +- updating doc [\#62](https://github.com/rabbitmq/amqp091-go/pull/62) + ([DanielePalaia](https://github.com/DanielePalaia)) ## 
[v1.3.3](https://github.com/rabbitmq/amqp091-go/tree/v1.3.3) (2022-04-01) @@ -261,13 +447,20 @@ **Closed issues:** - Add Client Version [\#49](https://github.com/rabbitmq/amqp091-go/issues/49) -- OpenTelemetry Propagation [\#22](https://github.com/rabbitmq/amqp091-go/issues/22) +- OpenTelemetry Propagation + [\#22](https://github.com/rabbitmq/amqp091-go/issues/22) **Merged pull requests:** -- bump buildVersion for release [\#61](https://github.com/rabbitmq/amqp091-go/pull/61) ([DanielePalaia](https://github.com/DanielePalaia)) -- adding documentation for notifyClose best pratices [\#60](https://github.com/rabbitmq/amqp091-go/pull/60) ([DanielePalaia](https://github.com/DanielePalaia)) -- adding documentation on NotifyClose of connection and channel to enfo… [\#59](https://github.com/rabbitmq/amqp091-go/pull/59) ([DanielePalaia](https://github.com/DanielePalaia)) +- bump buildVersion for release + [\#61](https://github.com/rabbitmq/amqp091-go/pull/61) + ([DanielePalaia](https://github.com/DanielePalaia)) +- adding documentation for notifyClose best pratices + [\#60](https://github.com/rabbitmq/amqp091-go/pull/60) + ([DanielePalaia](https://github.com/DanielePalaia)) +- adding documentation on NotifyClose of connection and channel to enfo… + [\#59](https://github.com/rabbitmq/amqp091-go/pull/59) + ([DanielePalaia](https://github.com/DanielePalaia)) ## [v1.3.2](https://github.com/rabbitmq/amqp091-go/tree/v1.3.2) (2022-03-28) @@ -275,11 +468,14 @@ **Closed issues:** -- Potential race condition in Connection module [\#31](https://github.com/rabbitmq/amqp091-go/issues/31) +- Potential race condition in Connection module + [\#31](https://github.com/rabbitmq/amqp091-go/issues/31) **Merged pull requests:** -- bump versioning to 1.3.2 [\#58](https://github.com/rabbitmq/amqp091-go/pull/58) ([DanielePalaia](https://github.com/DanielePalaia)) +- bump versioning to 1.3.2 + [\#58](https://github.com/rabbitmq/amqp091-go/pull/58) + ([DanielePalaia](https://github.com/DanielePalaia)) ## [v1.3.1](https://github.com/rabbitmq/amqp091-go/tree/v1.3.1) (2022-03-25) @@ -287,24 +483,49 @@ **Closed issues:** -- Possible deadlock on DeferredConfirmation.Wait\(\) [\#46](https://github.com/rabbitmq/amqp091-go/issues/46) -- Call to Delivery.Ack blocks indefinitely in case of disconnection [\#19](https://github.com/rabbitmq/amqp091-go/issues/19) -- Unexpacted behavor of channel.IsClosed\(\) [\#14](https://github.com/rabbitmq/amqp091-go/issues/14) -- A possible dead lock in connection close notification Go channel [\#11](https://github.com/rabbitmq/amqp091-go/issues/11) +- Possible deadlock on DeferredConfirmation.Wait\(\) + [\#46](https://github.com/rabbitmq/amqp091-go/issues/46) +- Call to Delivery.Ack blocks indefinitely in case of disconnection + [\#19](https://github.com/rabbitmq/amqp091-go/issues/19) +- Unexpacted behavor of channel.IsClosed\(\) + [\#14](https://github.com/rabbitmq/amqp091-go/issues/14) +- A possible dead lock in connection close notification Go channel + [\#11](https://github.com/rabbitmq/amqp091-go/issues/11) **Merged pull requests:** -- These ones were the ones testing Open scenarios. 
The issue is that Op… [\#57](https://github.com/rabbitmq/amqp091-go/pull/57) ([DanielePalaia](https://github.com/DanielePalaia)) -- changing defaultVersion to buildVersion and create a simple change\_ve… [\#54](https://github.com/rabbitmq/amqp091-go/pull/54) ([DanielePalaia](https://github.com/DanielePalaia)) -- adding integration test for issue 11 [\#50](https://github.com/rabbitmq/amqp091-go/pull/50) ([DanielePalaia](https://github.com/DanielePalaia)) -- Remove the old link product [\#48](https://github.com/rabbitmq/amqp091-go/pull/48) ([Gsantomaggio](https://github.com/Gsantomaggio)) -- Fix deadlock on DeferredConfirmations [\#47](https://github.com/rabbitmq/amqp091-go/pull/47) ([SpencerTorres](https://github.com/SpencerTorres)) -- Example client: Rename Stream\(\) to Consume\(\) to avoid confusion with RabbitMQ streams [\#39](https://github.com/rabbitmq/amqp091-go/pull/39) ([andygrunwald](https://github.com/andygrunwald)) -- Example client: Rename `name` to `queueName` to make the usage clear and explicit [\#38](https://github.com/rabbitmq/amqp091-go/pull/38) ([andygrunwald](https://github.com/andygrunwald)) -- Client example: Renamed concept "Session" to "Client" [\#37](https://github.com/rabbitmq/amqp091-go/pull/37) ([andygrunwald](https://github.com/andygrunwald)) -- delete unuseful code [\#36](https://github.com/rabbitmq/amqp091-go/pull/36) ([liutaot](https://github.com/liutaot)) -- Client Example: Fix closing order [\#35](https://github.com/rabbitmq/amqp091-go/pull/35) ([andygrunwald](https://github.com/andygrunwald)) -- Client example: Use instance logger instead of global logger [\#34](https://github.com/rabbitmq/amqp091-go/pull/34) ([andygrunwald](https://github.com/andygrunwald)) +- These ones were the ones testing Open scenarios. The issue is that Op… + [\#57](https://github.com/rabbitmq/amqp091-go/pull/57) + ([DanielePalaia](https://github.com/DanielePalaia)) +- changing defaultVersion to buildVersion and create a simple change_ve… + [\#54](https://github.com/rabbitmq/amqp091-go/pull/54) + ([DanielePalaia](https://github.com/DanielePalaia)) +- adding integration test for issue 11 + [\#50](https://github.com/rabbitmq/amqp091-go/pull/50) + ([DanielePalaia](https://github.com/DanielePalaia)) +- Remove the old link product + [\#48](https://github.com/rabbitmq/amqp091-go/pull/48) + ([Gsantomaggio](https://github.com/Gsantomaggio)) +- Fix deadlock on DeferredConfirmations + [\#47](https://github.com/rabbitmq/amqp091-go/pull/47) + ([SpencerTorres](https://github.com/SpencerTorres)) +- Example client: Rename Stream\(\) to Consume\(\) to avoid confusion with + RabbitMQ streams [\#39](https://github.com/rabbitmq/amqp091-go/pull/39) + ([andygrunwald](https://github.com/andygrunwald)) +- Example client: Rename `name` to `queueName` to make the usage clear and + explicit [\#38](https://github.com/rabbitmq/amqp091-go/pull/38) + ([andygrunwald](https://github.com/andygrunwald)) +- Client example: Renamed concept "Session" to "Client" + [\#37](https://github.com/rabbitmq/amqp091-go/pull/37) + ([andygrunwald](https://github.com/andygrunwald)) +- delete unuseful code [\#36](https://github.com/rabbitmq/amqp091-go/pull/36) + ([liutaot](https://github.com/liutaot)) +- Client Example: Fix closing order + [\#35](https://github.com/rabbitmq/amqp091-go/pull/35) + ([andygrunwald](https://github.com/andygrunwald)) +- Client example: Use instance logger instead of global logger + [\#34](https://github.com/rabbitmq/amqp091-go/pull/34) + ([andygrunwald](https://github.com/andygrunwald)) ## 
[v1.3.0](https://github.com/rabbitmq/amqp091-go/tree/v1.3.0) (2022-01-13) @@ -312,13 +533,19 @@ **Closed issues:** -- documentation of changes triggering version updates [\#29](https://github.com/rabbitmq/amqp091-go/issues/29) -- Persistent messages folder [\#27](https://github.com/rabbitmq/amqp091-go/issues/27) +- documentation of changes triggering version updates + [\#29](https://github.com/rabbitmq/amqp091-go/issues/29) +- Persistent messages folder + [\#27](https://github.com/rabbitmq/amqp091-go/issues/27) **Merged pull requests:** -- Expose a method to enable out-of-order Publisher Confirms [\#33](https://github.com/rabbitmq/amqp091-go/pull/33) ([benmoss](https://github.com/benmoss)) -- Fix Signed 8-bit headers being treated as unsigned [\#26](https://github.com/rabbitmq/amqp091-go/pull/26) ([alex-goodisman](https://github.com/alex-goodisman)) +- Expose a method to enable out-of-order Publisher Confirms + [\#33](https://github.com/rabbitmq/amqp091-go/pull/33) + ([benmoss](https://github.com/benmoss)) +- Fix Signed 8-bit headers being treated as unsigned + [\#26](https://github.com/rabbitmq/amqp091-go/pull/26) + ([alex-goodisman](https://github.com/alex-goodisman)) ## [v1.2.0](https://github.com/rabbitmq/amqp091-go/tree/v1.2.0) (2021-11-17) @@ -326,16 +553,24 @@ **Closed issues:** -- No access to this vhost [\#24](https://github.com/rabbitmq/amqp091-go/issues/24) +- No access to this vhost + [\#24](https://github.com/rabbitmq/amqp091-go/issues/24) - copyright issue? [\#12](https://github.com/rabbitmq/amqp091-go/issues/12) -- A possible dead lock when publishing message with confirmation [\#10](https://github.com/rabbitmq/amqp091-go/issues/10) +- A possible dead lock when publishing message with confirmation + [\#10](https://github.com/rabbitmq/amqp091-go/issues/10) - Semver release [\#7](https://github.com/rabbitmq/amqp091-go/issues/7) **Merged pull requests:** -- Fix deadlock between publishing and receiving confirms [\#25](https://github.com/rabbitmq/amqp091-go/pull/25) ([benmoss](https://github.com/benmoss)) -- Add GetNextPublishSeqNo for channel in confirm mode [\#23](https://github.com/rabbitmq/amqp091-go/pull/23) ([kamal-github](https://github.com/kamal-github)) -- Added support for cert-only login without user and password [\#20](https://github.com/rabbitmq/amqp091-go/pull/20) ([mihaitodor](https://github.com/mihaitodor)) +- Fix deadlock between publishing and receiving confirms + [\#25](https://github.com/rabbitmq/amqp091-go/pull/25) + ([benmoss](https://github.com/benmoss)) +- Add GetNextPublishSeqNo for channel in confirm mode + [\#23](https://github.com/rabbitmq/amqp091-go/pull/23) + ([kamal-github](https://github.com/kamal-github)) +- Added support for cert-only login without user and password + [\#20](https://github.com/rabbitmq/amqp091-go/pull/20) + ([mihaitodor](https://github.com/mihaitodor)) ## [v1.1.0](https://github.com/rabbitmq/amqp091-go/tree/v1.1.0) (2021-09-21) @@ -343,21 +578,37 @@ **Closed issues:** -- AMQPLAIN authentication does not work [\#15](https://github.com/rabbitmq/amqp091-go/issues/15) +- AMQPLAIN authentication does not work + [\#15](https://github.com/rabbitmq/amqp091-go/issues/15) **Merged pull requests:** -- Fix AMQPLAIN authentication mechanism [\#16](https://github.com/rabbitmq/amqp091-go/pull/16) ([hodbn](https://github.com/hodbn)) -- connection: clarify documented behavior of NotifyClose [\#13](https://github.com/rabbitmq/amqp091-go/pull/13) ([pabigot](https://github.com/pabigot)) -- Add a link to pkg.go.dev API docs 
[\#9](https://github.com/rabbitmq/amqp091-go/pull/9) ([benmoss](https://github.com/benmoss)) -- add test go version 1.16.x and 1.17.x [\#8](https://github.com/rabbitmq/amqp091-go/pull/8) ([k4n4ry](https://github.com/k4n4ry)) -- fix typos [\#6](https://github.com/rabbitmq/amqp091-go/pull/6) ([h44z](https://github.com/h44z)) -- Heartbeat interval should be timeout/2 [\#5](https://github.com/rabbitmq/amqp091-go/pull/5) ([ifo20](https://github.com/ifo20)) -- Exporting Channel State [\#4](https://github.com/rabbitmq/amqp091-go/pull/4) ([eibrunorodrigues](https://github.com/eibrunorodrigues)) -- Add codeql analysis [\#3](https://github.com/rabbitmq/amqp091-go/pull/3) ([MirahImage](https://github.com/MirahImage)) -- Add PR github action. [\#2](https://github.com/rabbitmq/amqp091-go/pull/2) ([MirahImage](https://github.com/MirahImage)) -- Update Copyright Statement [\#1](https://github.com/rabbitmq/amqp091-go/pull/1) ([rlewis24](https://github.com/rlewis24)) - - - -\* *This Changelog was automatically generated by [github_changelog_generator](https://github.com/github-changelog-generator/github-changelog-generator)* +- Fix AMQPLAIN authentication mechanism + [\#16](https://github.com/rabbitmq/amqp091-go/pull/16) + ([hodbn](https://github.com/hodbn)) +- connection: clarify documented behavior of NotifyClose + [\#13](https://github.com/rabbitmq/amqp091-go/pull/13) + ([pabigot](https://github.com/pabigot)) +- Add a link to pkg.go.dev API docs + [\#9](https://github.com/rabbitmq/amqp091-go/pull/9) + ([benmoss](https://github.com/benmoss)) +- add test go version 1.16.x and 1.17.x + [\#8](https://github.com/rabbitmq/amqp091-go/pull/8) + ([k4n4ry](https://github.com/k4n4ry)) +- fix typos [\#6](https://github.com/rabbitmq/amqp091-go/pull/6) + ([h44z](https://github.com/h44z)) +- Heartbeat interval should be timeout/2 + [\#5](https://github.com/rabbitmq/amqp091-go/pull/5) + ([ifo20](https://github.com/ifo20)) +- Exporting Channel State [\#4](https://github.com/rabbitmq/amqp091-go/pull/4) + ([eibrunorodrigues](https://github.com/eibrunorodrigues)) +- Add codeql analysis [\#3](https://github.com/rabbitmq/amqp091-go/pull/3) + ([MirahImage](https://github.com/MirahImage)) +- Add PR github action. [\#2](https://github.com/rabbitmq/amqp091-go/pull/2) + ([MirahImage](https://github.com/MirahImage)) +- Update Copyright Statement + [\#1](https://github.com/rabbitmq/amqp091-go/pull/1) + ([rlewis24](https://github.com/rlewis24)) + +\* _This Changelog was automatically generated by +[github_changelog_generator](https://github.com/github-changelog-generator/github-changelog-generator)_ diff --git a/examples/go/vendor/github.com/rabbitmq/amqp091-go/CODE_OF_CONDUCT.md b/examples/go/vendor/github.com/rabbitmq/amqp091-go/CODE_OF_CONDUCT.md index 24b5675..f195140 100644 --- a/examples/go/vendor/github.com/rabbitmq/amqp091-go/CODE_OF_CONDUCT.md +++ b/examples/go/vendor/github.com/rabbitmq/amqp091-go/CODE_OF_CONDUCT.md @@ -3,32 +3,33 @@ ## Our Pledge In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to making participation in RabbitMQ Operator project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socio-economic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. 
+contributors and maintainers pledge to making participation in the RabbitMQ
+Operator project and our community a harassment-free experience for everyone,
+regardless of age, body size, disability, ethnicity, sex characteristics,
+gender identity and expression, level of experience, education, socio-economic
+status, nationality, personal appearance, race, religion, or sexual identity
+and orientation.

 ## Our Standards

 Examples of behavior that contributes to creating a positive environment
 include:

-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy towards other community members
+- Using welcoming and inclusive language
+- Being respectful of differing viewpoints and experiences
+- Gracefully accepting constructive criticism
+- Focusing on what is best for the community
+- Showing empathy towards other community members

 Examples of unacceptable behavior by participants include:

-* The use of sexualized language or imagery and unwelcome sexual attention or
+- The use of sexualized language or imagery and unwelcome sexual attention or
   advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic
+- Trolling, insulting/derogatory comments, and personal or political attacks
+- Public or private harassment
+- Publishing others' private information, such as a physical or electronic
   address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a
+- Other conduct which could reasonably be considered inappropriate in a
   professional setting

 ## Our Responsibilities

@@ -37,11 +38,11 @@ Project maintainers are responsible for clarifying the standards of acceptable
 behavior and are expected to take appropriate and fair corrective action in
 response to any instances of unacceptable behavior.

-Project maintainers have the right and responsibility to remove, edit, or
-reject comments, commits, code, wiki edits, issues, and other contributions
-that are not aligned to this Code of Conduct, or to ban temporarily or
-permanently any contributor for other behaviors that they deem inappropriate,
-threatening, offensive, or harmful.
+Project maintainers have the right and responsibility to remove, edit, or reject
+comments, commits, code, wiki edits, issues, and other contributions that are
+not aligned to this Code of Conduct, or to ban temporarily or permanently any
+contributor for other behaviors that they deem inappropriate, threatening,
+offensive, or harmful.

 ## Scope

@@ -55,11 +56,11 @@ further defined and clarified by project maintainers.

 ## Enforcement

 Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the project team at oss-coc@vmware.com. All
-complaints will be reviewed and investigated and will result in a response that
-is deemed necessary and appropriate to the circumstances. The project team is
-obligated to maintain confidentiality with regard to the reporter of an incident.
-Further details of specific enforcement policies may be posted separately.
+reported by contacting the project team at oss-coc@vmware.com. All complaints
+will be reviewed and investigated and will result in a response that is deemed
+necessary and appropriate to the circumstances. 
The project team is obligated to
+maintain confidentiality with regard to the reporter of an incident. Further
+details of specific enforcement policies may be posted separately.

 Project maintainers who do not follow or enforce the Code of Conduct in good
 faith may face temporary or permanent repercussions as determined by other
@@ -67,11 +68,11 @@ members of the project's leadership.

 ## Attribution

-This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
-available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+version 1.4, available at
+https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

 [homepage]: https://www.contributor-covenant.org

 For answers to common questions about this code of conduct, see
 https://www.contributor-covenant.org/faq
-
diff --git a/examples/go/vendor/github.com/rabbitmq/amqp091-go/CONTRIBUTING.md b/examples/go/vendor/github.com/rabbitmq/amqp091-go/CONTRIBUTING.md
index ec86fe5..c0cd3c5 100644
--- a/examples/go/vendor/github.com/rabbitmq/amqp091-go/CONTRIBUTING.md
+++ b/examples/go/vendor/github.com/rabbitmq/amqp091-go/CONTRIBUTING.md
@@ -18,8 +18,9 @@ Here is the recommended workflow:

 ## Running Static Checks

-golangci-lint must be installed to run the static checks. See [installation
-docs](https://golangci-lint.run/usage/install/) for more information.
+golangci-lint must be installed to run the static checks. See
+[installation docs](https://golangci-lint.run/usage/install/) for more
+information.

 The static checks can be run via:

@@ -33,11 +34,11 @@ make checks

 Running the Integration tests require:

-* A running RabbitMQ node with all defaults:
+- A running RabbitMQ node with all defaults:
   [https://www.rabbitmq.com/download.html](https://www.rabbitmq.com/download.html)
-* That the server is either reachable via `amqp://guest:guest@127.0.0.1:5672/`
-  or the environment variable `AMQP_URL` set to it's URL
-  (e.g.: `export AMQP_URL="amqp://guest:verysecretpasswd@rabbitmq-host:5772/`)
+- That the server is either reachable via `amqp://guest:guest@127.0.0.1:5672/`
+  or the environment variable `AMQP_URL` set to its URL (e.g.:
+  `export AMQP_URL="amqp://guest:verysecretpasswd@rabbitmq-host:5772/`)

 The integration tests can be run via:

@@ -46,7 +47,7 @@ make tests
 ```

 Some tests require access to `rabbitmqctl` CLI. Use the environment variable
-`RABBITMQ_RABBITMQCTL_PATH=/some/path/to/rabbitmqctl` to run those tests.
+`RABBITMQ_RABBITMQCTL_PATH=/some/path/to/rabbitmqctl` to run those tests.

 If you have Docker available in your machine, you can run:

@@ -54,9 +55,9 @@ If you have Docker available in your machine, you can run:
 make tests-docker
 ```

-This target will start a RabbitMQ container, run the test suite with the environment
-variable setup, and stop RabbitMQ container after a successful run.
+This target will start a RabbitMQ container, run the test suite with the
+environment variable set up, and stop the RabbitMQ container after a successful
+run.

-All integration tests should use the `integrationConnection(...)` test
-helpers defined in `integration_test.go` to setup the integration environment
-and logging.
+All integration tests should use the `integrationConnection(...)` test helpers
+defined in `integration_test.go` to set up the integration environment and
+logging.
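For orientation, the connect step that such helpers wrap can be sketched as follows. This is an illustrative pattern only: the `brokerURL` helper and the skip-on-unreachable behavior are assumptions for the sketch, not the actual `integrationConnection(...)` implementation from `integration_test.go`.

```go
package amqp091_test

import (
	"os"
	"testing"

	amqp "github.com/rabbitmq/amqp091-go"
)

// brokerURL returns the AMQP_URL override when set, otherwise the default
// local broker expected by the integration suite.
func brokerURL() string {
	if url := os.Getenv("AMQP_URL"); url != "" {
		return url
	}
	return "amqp://guest:guest@127.0.0.1:5672/"
}

func TestIntegrationDial(t *testing.T) {
	conn, err := amqp.Dial(brokerURL())
	if err != nil {
		// Without a reachable broker, treat this as a skipped integration
		// test; that keeps `go test ./...` usable without RabbitMQ running.
		t.Skipf("RabbitMQ not reachable at %s: %v", brokerURL(), err)
	}
	defer conn.Close()
}
```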
diff --git a/examples/go/vendor/github.com/rabbitmq/amqp091-go/README.md b/examples/go/vendor/github.com/rabbitmq/amqp091-go/README.md
index 6d3143f..5924040 100644
--- a/examples/go/vendor/github.com/rabbitmq/amqp091-go/README.md
+++ b/examples/go/vendor/github.com/rabbitmq/amqp091-go/README.md
@@ -4,20 +4,21 @@
[![Go Reference](https://pkg.go.dev/badge/github.com/rabbitmq/amqp091-go.svg)](https://pkg.go.dev/github.com/rabbitmq/amqp091-go)
[![Go Report Card](https://goreportcard.com/badge/github.com/rabbitmq/amqp091-go)](https://goreportcard.com/report/github.com/rabbitmq/amqp091-go)

-This is a Go AMQP 0.9.1 client maintained by the [RabbitMQ core team](https://github.com/rabbitmq).
-It was [originally developed by Sean Treadway](https://github.com/streadway/amqp).
+This is a Go AMQP 0.9.1 client maintained by the
+[RabbitMQ core team](https://github.com/rabbitmq). It was
+[originally developed by Sean Treadway](https://github.com/streadway/amqp).

## Differences from streadway/amqp

-Some things are different compared to the original client,
-others haven't changed.
+Some things are different compared to the original client, others haven't
+changed.

### Package Name

This library uses a different package name. If moving from `streadway/amqp`,
using an alias may reduce the number of changes needed:

-``` go
+```go
amqp "github.com/rabbitmq/amqp091-go"
```

@@ -27,26 +28,25 @@ This client uses the same 2-clause BSD license as the original project.

### Public API Evolution

- This client retains key API elements as practically possible.
- It is, however, open to reasonable breaking public API changes suggested by the community.
- We don't have the "no breaking public API changes ever" rule and fully recognize
- that a good client API evolves over time.
-
+This client retains key API elements as far as practically possible. It is,
+however, open to reasonable breaking public API changes suggested by the
+community. We don't have the "no breaking public API changes ever" rule and
+fully recognize that a good client API evolves over time.

## Project Maturity

-This project is based on a mature Go client that's been around for over a decade.
-
+This project is based on a mature Go client that's been around for over a
+decade.

## Supported Go Versions

This client supports two most recent Go release series.

-
## Supported RabbitMQ Versions

This project supports RabbitMQ versions starting with `2.0` but primarily tested
-against [currently supported RabbitMQ release series](https://www.rabbitmq.com/versions.html).
+against
+[currently supported RabbitMQ release series](https://www.rabbitmq.com/versions.html).

Some features and behaviours may be server version-specific.

@@ -60,41 +60,41 @@ interact the semantics of the protocol.

Things not intended to be supported.

- * Auto reconnect and re-synchronization of client and server topologies.
-   * Reconnection would require understanding the error paths when the
-     topology cannot be declared on reconnect. This would require a new set
-     of types and code paths that are best suited at the call-site of this
-     package. AMQP has a dynamic topology that needs all peers to agree. If
-     this doesn't happen, the behavior is undefined. Instead of producing a
-     possible interface with undefined behavior, this package is designed to
-     be simple for the caller to implement the necessary connection-time
-     topology declaration so that reconnection is trivial and encapsulated in
-     the caller's application code.
- * AMQP Protocol negotiation for forward or backward compatibility. 
-   * 0.9.1 is stable and widely deployed. AMQP 1.0 is a divergent
-     specification (a different protocol) and belongs to a different library.
- * Anything other than PLAIN and EXTERNAL authentication mechanisms.
-   * Keeping the mechanisms interface modular makes it possible to extend
-     outside of this package. If other mechanisms prove to be popular, then
-     we would accept patches to include them in this package.
- * Support for [`basic.return` and `basic.ack` frame ordering](https://www.rabbitmq.com/confirms.html#when-publishes-are-confirmed).
-   This client uses Go channels for certain protocol events and ordering between
-   events sent to two different channels generally cannot be guaranteed.
+- Auto reconnect and re-synchronization of client and server topologies.
+  - Reconnection would require understanding the error paths when the topology
+    cannot be declared on reconnect. This would require a new set of types and
+    code paths that are best suited at the call-site of this package. AMQP has a
+    dynamic topology that needs all peers to agree. If this doesn't happen, the
+    behavior is undefined. Instead of producing a possible interface with
+    undefined behavior, this package is designed to be simple for the caller to
+    implement the necessary connection-time topology declaration so that
+    reconnection is trivial and encapsulated in the caller's application code.
+- AMQP Protocol negotiation for forward or backward compatibility.
+  - 0.9.1 is stable and widely deployed. AMQP 1.0 is a divergent specification
+    (a different protocol) and belongs to a different library.
+- Anything other than PLAIN and EXTERNAL authentication mechanisms.
+  - Keeping the mechanisms interface modular makes it possible to extend outside
+    of this package. If other mechanisms prove to be popular, then we would
+    accept patches to include them in this package.
+- Support for
+  [`basic.return` and `basic.ack` frame ordering](https://www.rabbitmq.com/confirms.html#when-publishes-are-confirmed).
+  This client uses Go channels for certain protocol events and ordering between
+  events sent to two different channels generally cannot be guaranteed.

## Usage

-See the [_examples](_examples) subdirectory for simple producers and consumers executables.
-If you have a use-case in mind which isn't well-represented by the examples,
-please file an issue.
+See the [\_examples](_examples) subdirectory for simple producer and consumer
+executables. If you have a use-case in mind which isn't well-represented by the
+examples, please file an issue.

## Documentation

- * [Godoc API reference](http://godoc.org/github.com/rabbitmq/amqp091-go)
- * [RabbitMQ tutorials in Go](https://github.com/rabbitmq/rabbitmq-tutorials/tree/master/go)
+- [Godoc API reference](http://godoc.org/github.com/rabbitmq/amqp091-go)
+- [RabbitMQ tutorials in Go](https://github.com/rabbitmq/rabbitmq-tutorials/tree/master/go)

## Contributing

-Pull requests are very much welcomed. Create your pull request on a non-main
+Pull requests are very much welcomed. Create your pull request on a non-main
branch, make sure a test or example is included that covers your change, and
your commits represent coherent changes that include a reason for the change. 
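The non-goals above leave reconnection and topology declaration to the caller.
A minimal sketch of that caller-side pattern, assuming a hypothetical `events`
topic exchange and an illustrative `declareTopology` helper (neither is part
of the library), might look like:

```go
package main

import (
	"log"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

// declareTopology recreates every exchange, queue, and binding this
// application relies on. Running it after each (re)connect keeps client and
// server in agreement, which is the property the README leaves to the caller.
func declareTopology(ch *amqp.Channel) error {
	if err := ch.ExchangeDeclare("events", "topic", true, false, false, false, nil); err != nil {
		return err
	}
	q, err := ch.QueueDeclare("events.worker", true, false, false, false, nil)
	if err != nil {
		return err
	}
	return ch.QueueBind(q.Name, "events.#", "events", false, nil)
}

func main() {
	for {
		conn, err := amqp.Dial("amqp://guest:guest@127.0.0.1:5672/")
		if err != nil {
			time.Sleep(2 * time.Second) // back off before retrying
			continue
		}

		ch, err := conn.Channel()
		if err == nil {
			err = declareTopology(ch)
		}
		if err != nil {
			conn.Close()
			continue
		}

		// Block until the connection dies, then loop around to reconnect
		// and redeclare the topology from scratch.
		closed := conn.NotifyClose(make(chan *amqp.Error, 1))
		log.Printf("connection lost: %v", <-closed)
	}
}
```

Because the topology is redeclared from scratch on every connect, nothing
needs to be resynchronized after a failure; this is the "trivial and
encapsulated" reconnection the README describes.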
diff --git a/examples/go/vendor/github.com/rabbitmq/amqp091-go/RELEASE.md b/examples/go/vendor/github.com/rabbitmq/amqp091-go/RELEASE.md
index 1378d68..647cc94 100644
--- a/examples/go/vendor/github.com/rabbitmq/amqp091-go/RELEASE.md
+++ b/examples/go/vendor/github.com/rabbitmq/amqp091-go/RELEASE.md
@@ -1,13 +1,20 @@
# Guide to release a new version

-1. Update the `buildVersion` constant in [connection.go](https://github.com/rabbitmq/amqp091-go/blob/4886c35d10b273bd374e3ed2356144ad41d27940/connection.go#L31)
-2. Commit and push. Include the version in the commit message e.g. [this commit](https://github.com/rabbitmq/amqp091-go/commit/52ce2efd03c53dcf77d5496977da46840e9abd24)
-3. Create a new [GitHub Release](https://github.com/rabbitmq/amqp091-go/releases). Create a new tag as `v..`
+1. Update the `buildVersion` constant in
+   [connection.go](https://github.com/rabbitmq/amqp091-go/blob/4886c35d10b273bd374e3ed2356144ad41d27940/connection.go#L31)
+2. Commit and push. Include the version in the commit message e.g.
+   [this commit](https://github.com/rabbitmq/amqp091-go/commit/52ce2efd03c53dcf77d5496977da46840e9abd24)
+3. Create a new
+   [GitHub Release](https://github.com/rabbitmq/amqp091-go/releases). Create a
+   new tag as `v..`
 1. Use auto-generate release notes feature in GitHub
4. Generate the change log, see [Changelog Generation](#changelog-generation)
-5. Review the changelog. Watch out for issues closed as "not-fixed" or without a PR
-6. Commit and Push. Pro-tip: include `[skip ci]` in the commit message to skip the CI run, since it's only documentation
-7. Send an announcement to the mailing list. Take inspiration from [this message](https://groups.google.com/g/rabbitmq-users/c/EBGYGOWiSgs/m/0sSFuAGICwAJ)
+5. Review the changelog. Watch out for issues closed as "not-fixed" or without a
+   PR
+6. Commit and Push. Pro-tip: include `[skip ci]` in the commit message to skip
+   the CI run, since it's only documentation
+7. Send an announcement to the mailing list. Take inspiration from
+   [this message](https://groups.google.com/g/rabbitmq-users/c/EBGYGOWiSgs/m/0sSFuAGICwAJ)

## Changelog Generation

diff --git a/examples/go/vendor/github.com/spaolacci/murmur3/README.md b/examples/go/vendor/github.com/spaolacci/murmur3/README.md
index e463678..703b850 100644
--- a/examples/go/vendor/github.com/spaolacci/murmur3/README.md
+++ b/examples/go/vendor/github.com/spaolacci/murmur3/README.md
@@ -1,5 +1,4 @@
-murmur3
-=======
+# murmur3

[![Build Status](https://travis-ci.org/spaolacci/murmur3.svg?branch=master)](https://travis-ci.org/spaolacci/murmur3)

@@ -9,12 +8,10 @@ MurmurHash3). Reference algorithm has been slightly hacked as to support the
streaming mode required by Go's standard [Hash interface](http://golang.org/pkg/hash/#Hash).

+## Benchmarks

-Benchmarks
-----------
-
-Go tip as of 2014-06-12 (i.e almost go1.3), core i7 @ 3.4 Ghz. All runs
-include hasher instantiation and sequence finalization.
+Go tip as of 2014-06-12 (i.e. almost go1.3), Core i7 @ 3.4 GHz. All runs
+include hasher instantiation and sequence finalization.
 
@@ -49,7 +46,6 @@ Benchmark128_8192      1000000     1838 ns/op     4455.47 MB/s
 
 
-
 
 benchmark              Go1.0 MB/s    Go1.1 MB/s  speedup    Go1.2 MB/s  speedup    Go1.3 MB/s  speedup
@@ -83,4 +79,3 @@ Benchmark128_4096         2360.15       4299.09    1.82x       4392.35    1.02x
 Benchmark128_8192         2411.50       4356.84    1.81x       4480.68    1.03x       4455.47    0.99x
 
 
-
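One note on the streaming mode mentioned in the murmur3 README above: the
hashers satisfy Go's standard `hash.Hash`, so chunked writes produce the same
digest as one-shot hashing. A minimal sketch (not part of the upstream
README):

```go
package main

import (
	"fmt"

	"github.com/spaolacci/murmur3"
)

func main() {
	// Streaming: feed the input in chunks through hash.Hash's Write.
	h := murmur3.New128()
	h.Write([]byte("hello, "))
	h.Write([]byte("world"))
	s1, s2 := h.Sum128()

	// One-shot: hash the whole input at once.
	o1, o2 := murmur3.Sum128([]byte("hello, world"))

	// Both approaches yield the same 128-bit digest.
	fmt.Printf("streaming: %016x%016x\n", s1, s2)
	fmt.Printf("one-shot:  %016x%016x\n", o1, o2)
}
```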