diff --git a/docs/README.md b/docs/README.md
index 94911f27cee5..d64249156c96 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -17,6 +17,7 @@
* [Gardener Admission Controller](concepts/admission-controller.md)
* [Gardener Resource Manager](concepts/resource-manager.md)
* [Gardener Operator](concepts/operator.md)
+ * [Gardener Node Agent](concepts/node-agent.md)
* [Gardenlet](concepts/gardenlet.md)
* [Backup Restore](concepts/backup-restore.md)
* [etcd](concepts/etcd.md)
diff --git a/docs/concepts/images/gardener-nodeagent-architecture.drawio.svg b/docs/concepts/images/gardener-nodeagent-architecture.drawio.svg
new file mode 100644
index 000000000000..9f77f9bd4ecf
--- /dev/null
+++ b/docs/concepts/images/gardener-nodeagent-architecture.drawio.svg
@@ -0,0 +1,312 @@
+
\ No newline at end of file
diff --git a/docs/concepts/node-agent.md b/docs/concepts/node-agent.md
new file mode 100644
index 000000000000..c7b6efd6506d
--- /dev/null
+++ b/docs/concepts/node-agent.md
@@ -0,0 +1,59 @@
+# Gardener Node Agent
+
+The goal of the `gardener-node-agent` is to bootstrap a machine into a worker node and to maintain node-specific components which run on the node and are not managed by Kubernetes (e.g. the container runtime, the kubelet service, ...).
+
+It effectively is a Kubernetes controller deployed onto the worker node.
+
+## Basic Design
+
+This section describes how the `gardener-node-agent` works, what its responsibilities are, and how it is installed onto the worker node.
+
+To install the `gardener-node-agent` onto a worker node, there is a very small bash script called `gardener-node-init.sh`, which is installed on the node via cloud-init data. This script's sole purpose is to download and start the `gardener-node-agent`. The binary artifact is downloaded as an [OCI artifact](https://github.com/opencontainers/image-spec/blob/main/manifest.md), removing the `docker` dependency on the worker node. At the beginning, two architectures of the `gardener-node-agent` are supported: `amd64` and `x86`. In the same manner, the kubelet has to be provided as an OCI artifact.
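+
+As a hypothetical illustration of such an OCI artifact download with `go-containerregistry` (added as a dependency in this PR), consider the following sketch; `fetchBinaryFromOCIArtifact`, the in-image path handling, and the error handling are illustrative and not part of this PR:
+
+```go
+package example
+
+import (
+	"archive/tar"
+	"io"
+	"os"
+
+	"github.com/google/go-containerregistry/pkg/crane"
+)
+
+// fetchBinaryFromOCIArtifact pulls an OCI image and extracts a single file
+// (e.g. the gardener-node-agent binary) from its flattened filesystem.
+// pathInImage is relative, i.e. without a leading slash.
+func fetchBinaryFromOCIArtifact(ref, pathInImage, dest string) error {
+	img, err := crane.Pull(ref)
+	if err != nil {
+		return err
+	}
+
+	// crane.Export writes the flattened image filesystem as a tar stream.
+	pr, pw := io.Pipe()
+	go func() { _ = pw.CloseWithError(crane.Export(img, pw)) }()
+
+	tr := tar.NewReader(pr)
+	for {
+		hdr, err := tr.Next()
+		if err != nil {
+			return err // io.EOF means the file was not found in the image
+		}
+		if hdr.Name == pathInImage {
+			out, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
+			if err != nil {
+				return err
+			}
+			defer out.Close()
+			_, err = io.Copy(out, tr)
+			return err
+		}
+	}
+}
+```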
+
+Along with the init script, a configuration for the `gardener-node-agent` is carried onto the worker node at `/etc/gardener/node-agent.config`. This configuration contains things like the shoot's kube-apiserver endpoint, the corresponding certificates to communicate with it, the bootstrap token for the kubelet, and so on.
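+
+A minimal sketch of how this configuration file could be read and decoded into the new `NodeAgentConfiguration` API type (using `sigs.k8s.io/yaml`, a long-standing dependency of this repository); the helper name is illustrative:
+
+```go
+package example
+
+import (
+	"os"
+
+	"sigs.k8s.io/yaml"
+
+	nodeagentv1alpha1 "github.com/gardener/gardener/pkg/nodeagent/apis/config/v1alpha1"
+)
+
+// loadNodeAgentConfig reads the configuration file from disk and decodes it
+// into the versioned NodeAgentConfiguration type introduced by this PR.
+func loadNodeAgentConfig(path string) (*nodeagentv1alpha1.NodeAgentConfiguration, error) {
+	raw, err := os.ReadFile(path)
+	if err != nil {
+		return nil, err
+	}
+
+	config := &nodeagentv1alpha1.NodeAgentConfiguration{}
+	if err := yaml.Unmarshal(raw, config); err != nil {
+		return nil, err
+	}
+	return config, nil
+}
+```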
+
+In a bootstrapping phase, the `gardener-node-agent` sets itself up as a systemd service. It also performs tasks that need to happen before any other components are installed, e.g. formatting the data device for the kubelet.
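+
+The following is a rough, hypothetical sketch of how the bootstrap phase could enable and start its own unit via systemd's D-Bus API (`github.com/coreos/go-systemd` and `godbus` appear among the dependencies); the function name and error handling are illustrative only:
+
+```go
+package example
+
+import (
+	"context"
+
+	systemddbus "github.com/coreos/go-systemd/v22/dbus"
+)
+
+// enableAndStartUnit enables and starts a systemd unit, e.g.
+// "gardener-node-agent.service", via systemd's D-Bus API.
+func enableAndStartUnit(ctx context.Context, unitName string) error {
+	conn, err := systemddbus.NewWithContext(ctx)
+	if err != nil {
+		return err
+	}
+	defer conn.Close()
+
+	if _, _, err := conn.EnableUnitFilesContext(ctx, []string{unitName}, false, true); err != nil {
+		return err
+	}
+	if err := conn.ReloadContext(ctx); err != nil {
+		return err
+	}
+
+	done := make(chan string, 1)
+	if _, err := conn.StartUnitContext(ctx, unitName, "replace", done); err != nil {
+		return err
+	}
+	<-done // result is "done", "failed", "timeout", ...
+	return nil
+}
+```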
+
+After the bootstrap phase, the `gardener-node-agent` runs as a systemd service and watches secret resources located in the `kube-system` namespace. One of these secrets contains the `OperatingSystemConfig` (OSC) to reconcile. An OSC secret exists for every worker group of the shoot cluster and is named accordingly. Applying the OSC finally installs the kubelet and its configuration on the worker node.
+
+## Architecture
+
+![Design](./images/gardener-nodeagent-architecture.drawio.svg)
+
+This figure visualizes the overall architecture of the `gardener-node-agent`. It starts with the downloader OSC being transferred via the userdata to a machine by the machine-controller-manager (MCM). The bootstrap phase of the `gardener-node-agent` then happens as described in the previous section.
+
+## Reasoning
+
+The `gardener-node-agent` is a replacement for what was called the `cloud-config-downloader` and the `cloud-config-executor`, both written in `bash`. The `gardener-node-agent` gets rid of the sheer complexity of these two scripts; this, combined with scalability and performance issues, urged their removal.
+
+With the new architecture we gain a lot; the most important gains are described here.
+
+### Developer Productivity
+
+Because we all develop in Go day by day, writing business logic in `bash` is difficult, hard to maintain, and almost impossible to test. Getting rid of almost all `bash` scripts, which are currently in use for this very important part of the cluster creation process, will enhance the speed of adding new features and fixing bugs.
+
+### Speed
+
+Until now, the `cloud-config-downloader` has run in a loop every 60 seconds to check whether something changed on the shoot which requires modifications on the worker node. This produces a lot of unneeded traffic on the kube-apiserver and wastes time: it can take up to 60 seconds until a desired modification is started on the worker node.
+By using the controller-runtime, we can watch the `node`, the `OSC` in the `secret`, and the shoot-access-token in the `secret`. If, and only if, any of these objects changes, the required action takes effect immediately.
+This speeds up operations and dramatically reduces the load on the shoot's kube-apiserver.
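+
+A minimal controller-runtime sketch of such a watch-based setup (controller name and predicate choice are illustrative, not the actual wiring of this PR):
+
+```go
+package example
+
+import (
+	corev1 "k8s.io/api/core/v1"
+	"sigs.k8s.io/controller-runtime/pkg/builder"
+	"sigs.k8s.io/controller-runtime/pkg/manager"
+	"sigs.k8s.io/controller-runtime/pkg/predicate"
+	"sigs.k8s.io/controller-runtime/pkg/reconcile"
+)
+
+// addOSCController wires a reconciler that is triggered only when the watched
+// secret actually changes, instead of polling every 60 seconds.
+func addOSCController(mgr manager.Manager, r reconcile.Reconciler) error {
+	return builder.ControllerManagedBy(mgr).
+		Named("operatingsystemconfig").
+		For(&corev1.Secret{}, builder.WithPredicates(predicate.ResourceVersionChangedPredicate{})).
+		Complete(r)
+}
+```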
+
+### Scalability
+
+Currently, the `cloud-config-downloader` adds a random wait time before restarting the `kubelet` in case the `kubelet` was updated or its configuration was changed. This is required to reduce the load on the API server and the traffic on the internet uplink. It also reduces the overall downtime of the services in the cluster, because every `kubelet` restart puts a node into the `NotReady` state for several seconds, which eventually interrupts service availability.
+
+```
+TODO: The `gardener-node-agent` could do this in a much more intelligent way because it watches the `node` object. The gardenlet could add an annotation which tells the `gardener-node-agent` to restart the kubelet in a coordinated manner. The coordination could happen in chunks of nodes, waiting for one chunk to finish before starting the next. An equal spread over time would also be possible.
+```
+
+The decision was made to keep the existing jitter mechanism, which calculates the kubelet-download-and-restart-delay-seconds on the controller itself.
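+
+For illustration, such a jittered delay can be computed with the existing `wait.Jitter` helper from `k8s.io/apimachinery`; the function below is a hypothetical sketch, not the actual mechanism:
+
+```go
+package example
+
+import (
+	"time"
+
+	"k8s.io/apimachinery/pkg/util/wait"
+)
+
+// kubeletRestartDelay returns a randomized delay before the kubelet is
+// downloaded and restarted, so that not all nodes of a worker pool hit the
+// API server and the network uplink at the same time.
+func kubeletRestartDelay(base time.Duration) time.Duration {
+	// wait.Jitter returns a duration in [base, base*(1+maxFactor)).
+	return wait.Jitter(base, 1.0)
+}
+```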
+
+### Correctness
+
+The configuration of the `cloud-config-downloader` is currently done by placing a file for every configuration item on the disk of the worker node. This was done because parsing the content of a single file and using it as a value in `bash` reduces to something like `VALUE=$(cat /the/path/to/the/file)`. Simple, but it lacks validation, type safety, and whatnot.
+With the `gardener-node-agent` we introduce a new API which is stored in the `gardener-node-agent` `secret` and persisted on disk in a single YAML file for comparison with the previously known state. This brings all the benefits of type-safe configuration.
+Because the actual and the previous configuration are compared, files and units that were removed from the `OSC` are also removed from, and stopped on, the worker node.
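+
+Assuming the previous and current OSC are decoded into the existing `extensions.gardener.cloud/v1alpha1` API type, the set of removed files could be computed roughly like this (illustrative sketch, not the actual implementation):
+
+```go
+package example
+
+import (
+	extensionsv1alpha1 "github.com/gardener/gardener/pkg/apis/extensions/v1alpha1"
+)
+
+// removedFiles returns the files present in the previously applied OSC but no
+// longer in the current one; these can safely be deleted from the worker node.
+func removedFiles(previous, current *extensionsv1alpha1.OperatingSystemConfig) []extensionsv1alpha1.File {
+	stillWanted := map[string]struct{}{}
+	for _, f := range current.Spec.Files {
+		stillWanted[f.Path] = struct{}{}
+	}
+
+	var removed []extensionsv1alpha1.File
+	for _, f := range previous.Spec.Files {
+		if _, ok := stillWanted[f.Path]; !ok {
+			removed = append(removed, f)
+		}
+	}
+	return removed
+}
+```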
+
+### Availability
+
+Previously, the `cloud-config-downloader` simply restarted the `systemd` units on every change to the `OSC`, regardless of which of the services changed. The `gardener-node-agent` first checks which systemd units were changed and only restarts those. This removes unneeded `kubelet` restarts.
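+
+A sketch of how the set of units to restart could be derived by comparing unit contents between the previous and the current OSC (drop-ins omitted for brevity; names and helper are illustrative):
+
+```go
+package example
+
+import (
+	"k8s.io/apimachinery/pkg/util/sets"
+
+	extensionsv1alpha1 "github.com/gardener/gardener/pkg/apis/extensions/v1alpha1"
+)
+
+// changedUnits returns the names of units whose content differs between the
+// previously applied and the current OSC; only these need to be restarted.
+func changedUnits(previous, current *extensionsv1alpha1.OperatingSystemConfig) sets.Set[string] {
+	oldContent := map[string]string{}
+	for _, u := range previous.Spec.Units {
+		if u.Content != nil {
+			oldContent[u.Name] = *u.Content
+		}
+	}
+
+	changed := sets.New[string]()
+	for _, u := range current.Spec.Units {
+		if u.Content == nil {
+			continue
+		}
+		if old, ok := oldContent[u.Name]; !ok || old != *u.Content {
+			changed.Insert(u.Name)
+		}
+	}
+	return changed
+}
+```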
diff --git a/go.mod b/go.mod
index 22a6bfe433d3..ec459e011185 100644
--- a/go.mod
+++ b/go.mod
@@ -27,7 +27,7 @@ require (
github.com/onsi/gomega v1.27.6
github.com/prometheus/client_golang v1.14.0
github.com/robfig/cron v1.2.0
- github.com/spf13/cobra v1.6.1
+ github.com/spf13/cobra v1.7.0
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.11.0
github.com/texttheater/golang-levenshtein v1.0.1
@@ -35,12 +35,12 @@ require (
go.uber.org/goleak v1.2.0
go.uber.org/zap v1.24.0
golang.org/x/crypto v0.6.0
- golang.org/x/text v0.8.0
+ golang.org/x/text v0.9.0
golang.org/x/time v0.3.0
- golang.org/x/tools v0.7.0
+ golang.org/x/tools v0.8.0
gomodules.xyz/jsonpatch/v2 v2.2.0
gonum.org/v1/gonum v0.12.0
- google.golang.org/protobuf v1.28.1
+ google.golang.org/protobuf v1.30.0
istio.io/api v0.0.0-20230217221049-9d422bf48675
istio.io/client-go v1.17.1
k8s.io/api v0.26.3
@@ -69,7 +69,13 @@ require (
)
require (
- github.com/BurntSushi/toml v1.0.0 // indirect
+ github.com/google/go-containerregistry v0.15.2
+ github.com/spf13/afero v1.8.2
+ golang.org/x/exp v0.0.0-20230213192124-5e25df0256eb
+)
+
+require (
+ github.com/BurntSushi/toml v1.2.1 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/antlr/antlr4/runtime/Go/antlr v1.4.10 // indirect
@@ -77,9 +83,14 @@ require (
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cenkalti/backoff/v4 v4.1.3 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
+ github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect
github.com/coreos/go-semver v0.3.0 // indirect
github.com/cyphar/filepath-securejoin v0.2.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
+ github.com/docker/cli v23.0.5+incompatible // indirect
+ github.com/docker/distribution v2.8.1+incompatible // indirect
+ github.com/docker/docker v23.0.5+incompatible // indirect
+ github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/dsnet/compress v0.0.1 // indirect
github.com/elazarl/goproxy v0.0.0-20191011121108-aa519ddbe484 // indirect
github.com/emicklei/go-restful/v3 v3.10.1 // indirect
@@ -99,6 +110,7 @@ require (
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect
github.com/gobuffalo/flect v0.3.0 // indirect
github.com/gobwas/glob v0.2.3 // indirect
+ github.com/godbus/dbus/v5 v5.0.4 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/golang/snappy v0.0.4 // indirect
@@ -112,15 +124,17 @@ require (
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/huandu/xstrings v1.3.2 // indirect
github.com/imdario/mergo v0.3.12 // indirect
- github.com/inconshreveable/mousetrap v1.0.1 // indirect
+ github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
+ github.com/klauspost/compress v1.16.5 // indirect
github.com/magiconair/properties v1.8.6 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
+ github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/mapstructure v1.4.3 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/spdystream v0.2.0 // indirect
@@ -128,6 +142,8 @@ require (
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nwaples/rardecode v1.1.2 // indirect
+ github.com/opencontainers/go-digest v1.0.0 // indirect
+ github.com/opencontainers/image-spec v1.1.0-rc3 // indirect
github.com/pelletier/go-toml v1.9.4 // indirect
github.com/pelletier/go-toml/v2 v2.0.0-beta.8 // indirect
github.com/pierrec/lz4 v2.6.1+incompatible // indirect
@@ -136,12 +152,13 @@ require (
github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
- github.com/spf13/afero v1.8.2 // indirect
+ github.com/sirupsen/logrus v1.9.0 // indirect
github.com/spf13/cast v1.4.1 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/stoewer/go-strcase v1.2.0 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/ulikunitz/xz v0.5.10 // indirect
+ github.com/vbatts/tar-split v0.11.3 // indirect
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 // indirect
go.etcd.io/etcd/api/v3 v3.5.5 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.5 // indirect
@@ -158,12 +175,12 @@ require (
go.opentelemetry.io/proto/otlp v0.19.0 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.7.0 // indirect
- golang.org/x/mod v0.9.0 // indirect
- golang.org/x/net v0.8.0 // indirect
- golang.org/x/oauth2 v0.4.0 // indirect
+ golang.org/x/mod v0.10.0 // indirect
+ golang.org/x/net v0.9.0 // indirect
+ golang.org/x/oauth2 v0.7.0 // indirect
golang.org/x/sync v0.1.0 // indirect
- golang.org/x/sys v0.6.0 // indirect
- golang.org/x/term v0.6.0 // indirect
+ golang.org/x/sys v0.7.0 // indirect
+ golang.org/x/term v0.7.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f // indirect
google.golang.org/grpc v1.53.0 // indirect
diff --git a/go.sum b/go.sum
index a27205645e6c..205001ec61b7 100644
--- a/go.sum
+++ b/go.sum
@@ -24,7 +24,7 @@ cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvf
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
-cloud.google.com/go/compute v1.15.1 h1:7UGq3QknM33pw5xATlpzeoomNxsacIVvTqTTvbfajmE=
+cloud.google.com/go/compute v1.19.1 h1:am86mquDUgjGNWxiGn+5PGLbmgiWXlE/yNWpIpNvuXY=
cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
@@ -40,8 +40,8 @@ cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9
cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
-github.com/BurntSushi/toml v1.0.0 h1:dtDWrepsVPfW9H/4y7dDgFc2MBUSeJhlaDtK13CxFlU=
-github.com/BurntSushi/toml v1.0.0/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
+github.com/BurntSushi/toml v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
+github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=
github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
@@ -101,6 +101,8 @@ github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWH
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/containerd/stargz-snapshotter/estargz v0.14.3 h1:OqlDCK3ZVUO6C3B/5FSkDwbkEETK84kQgEeFwDC+62k=
+github.com/containerd/stargz-snapshotter/estargz v0.14.3/go.mod h1:KY//uOCIkSuNAHhJogcZtrNHdKrA99/FCCRjE3HD36o=
github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd/v22 v22.3.2 h1:D9/bQk5vlXQFZ6Kwuu6zaiXJ9oTPe68++AzAJc1DzSI=
@@ -112,6 +114,14 @@ github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/docker/cli v23.0.5+incompatible h1:ufWmAOuD3Vmr7JP2G5K3cyuNC4YZWiAsuDEvFVVDafE=
+github.com/docker/cli v23.0.5+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
+github.com/docker/distribution v2.8.1+incompatible h1:Q50tZOPR6T/hjNsyc9g8/syEs6bk8XXApsHjKukMl68=
+github.com/docker/distribution v2.8.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
+github.com/docker/docker v23.0.5+incompatible h1:DaxtlTJjFSnLOXVNUBU1+6kXGz2lpDoEAH6QoxaSg8k=
+github.com/docker/docker v23.0.5+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker-credential-helpers v0.7.0 h1:xtCHsjxogADNZcdv1pKUHXryefjlVRqWqIhk/uXJp0A=
+github.com/docker/docker-credential-helpers v0.7.0/go.mod h1:rETQfLdHNT3foU5kuNkFR1R1V12OJRRO5lzt2D1b5X0=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dsnet/compress v0.0.1 h1:PlZu0n3Tuv04TzpfPbrnI0HW/YwodEXDS+oPKahKF0Q=
@@ -221,6 +231,7 @@ github.com/gobuffalo/flect v0.3.0 h1:erfPWM+K1rFNIQeRPdeEXxo8yFr/PO17lhRnS8FUrtk
github.com/gobuffalo/flect v0.3.0/go.mod h1:5pf3aGnsvqvCj50AVni7mJJF8ICxGZ8HomberC3pXLE=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
+github.com/godbus/dbus/v5 v5.0.4 h1:9349emZab16e7zQvpmsbtjc18ykshndd8y2PG3sgJbA=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
@@ -287,6 +298,8 @@ github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/go-containerregistry v0.15.2 h1:MMkSh+tjSdnmJZO7ljvEqV1DjfekB6VUEAZgy3a+TQE=
+github.com/google/go-containerregistry v0.15.2/go.mod h1:wWK+LnOv4jXMM23IT/F1wdYftGWGr47Is8CG+pmHK1Q=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -343,8 +356,8 @@ github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:
github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
-github.com/inconshreveable/mousetrap v1.0.1 h1:U3uMjPSQEBMNp1lFxmllqCPM6P5u/Xq7Pgzkat/bFNc=
-github.com/inconshreveable/mousetrap v1.0.1/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
+github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
+github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jonboulle/clockwork v0.2.2 h1:UOGuzwb1PwsrDAObMuhUnj0p5ULPj8V/xJ7Kx9qUBdQ=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
@@ -364,6 +377,8 @@ github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQL
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.4.1/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
+github.com/klauspost/compress v1.16.5 h1:IFV2oUNUzZaz+XyusxpLzpzS8Pt5rh0Z16For/djlyI=
+github.com/klauspost/compress v1.16.5/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@@ -401,6 +416,8 @@ github.com/mholt/archiver v3.1.1+incompatible h1:1dCVxuqs0dJseYEhi5pl7MYPH9zDa1w
github.com/mholt/archiver v3.1.1+incompatible/go.mod h1:Dh2dOXnSdiLxRiPoVfIr/fI1TwETms9B8CTWfeh7ROU=
github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
+github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
+github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/hashstructure/v2 v2.0.2 h1:vGKWl0YJqUNxE8d+h8f6NJLcCJrgbhC4NcD46KavDd4=
github.com/mitchellh/hashstructure/v2 v2.0.2/go.mod h1:MG3aRVU/N29oo/V/IhBX8GR/zz4kQkprJgF2EVszyDE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
@@ -423,7 +440,6 @@ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8m
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
-github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nwaples/rardecode v1.1.2 h1:Cj0yZY6T1Zx1R7AhTbyGSALm44/Mmq+BAPc4B/p/d3M=
github.com/nwaples/rardecode v1.1.2/go.mod h1:5DzqNKiOdpKKBH87u8VlvAnPZMXcGRhxWkRpHbbfGS0=
@@ -455,6 +471,10 @@ github.com/onsi/gomega v1.22.1/go.mod h1:x6n7VNe4hw0vkyYUM4mjIXx3JbLiPaBPNgB7PRQ
github.com/onsi/gomega v1.23.0/go.mod h1:Z/NWtiqwBrwUt4/2loMmHL63EDLnYHmVbuBpDr2vQAg=
github.com/onsi/gomega v1.27.6 h1:ENqfyGeS5AX/rlXDd/ETokDz93u0YufY1Pgxuy/PvWE=
github.com/onsi/gomega v1.27.6/go.mod h1:PIQNjfQwkP3aQAH7lf7j87O/5FiNr+ZR8+ipb+qQlhg=
+github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
+github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
+github.com/opencontainers/image-spec v1.1.0-rc3 h1:fzg1mXZFj8YdPeNkRXMg+zb88BFV0Ys52cJydRwBkb8=
+github.com/opencontainers/image-spec v1.1.0-rc3/go.mod h1:X4pATf0uXsnn3g5aiGIsVnJBR4mxhKzfwmvK/B2NTm8=
github.com/pelletier/go-toml v1.9.4 h1:tjENF6MfZAg8e4ZmZTeWaWiT2vXtsoO6+iuOjFhECwM=
github.com/pelletier/go-toml v1.9.4/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/pelletier/go-toml/v2 v2.0.0-beta.8 h1:dy81yyLYJDwMTifq24Oi/IslOslRrDSb3jwDggjz3Z0=
@@ -513,7 +533,8 @@ github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeV
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
-github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
+github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
+github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/soheilhy/cmux v0.1.5 h1:jjzc5WVemNEDTLwv9tlmemhC73tI08BNOIGwBOo10Js=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
@@ -521,8 +542,8 @@ github.com/spf13/afero v1.8.2 h1:xehSyVa0YnHWsJ49JFljMpg1HX19V6NDZ1fkm1Xznbo=
github.com/spf13/afero v1.8.2/go.mod h1:CtAatgMJh6bJEIs48Ay/FOnkljP3WeGUG0MC1RfAqwo=
github.com/spf13/cast v1.4.1 h1:s0hze+J0196ZfEMTs80N7UlFt0BDuQ7Q+JDnHiMWKdA=
github.com/spf13/cast v1.4.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
-github.com/spf13/cobra v1.6.1 h1:o94oiPyS4KD1mPy2fmcYYHHfCxLqYjJOhGsCHFZtEzA=
-github.com/spf13/cobra v1.6.1/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY=
+github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I=
+github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
@@ -536,6 +557,7 @@ github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
+github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
@@ -543,8 +565,9 @@ github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
+github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
+github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/texttheater/golang-levenshtein v1.0.1 h1:+cRNoVrfiwufQPhoMzB6N0Yf/Mqajr6t1lOv8GyGE2U=
@@ -553,6 +576,9 @@ github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802 h1:uruHq4
github.com/ulikunitz/xz v0.5.6/go.mod h1:2bypXElzHzzJZwzH67Y6wb67pO62Rzfn7BSiF4ABRW8=
github.com/ulikunitz/xz v0.5.10 h1:t92gobL9l3HE202wg3rlk19F6X+JOxl9BBrCCMYEYd8=
github.com/ulikunitz/xz v0.5.10/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14=
+github.com/urfave/cli v1.22.12/go.mod h1:sSBEIC79qR6OvcmsD4U3KABeOTxDqQtdDnaFuUN30b8=
+github.com/vbatts/tar-split v0.11.3 h1:hLFqsOLQ1SsppQNTMpkpPXClLDfC2A3Zgy9OUU+RVck=
+github.com/vbatts/tar-split v0.11.3/go.mod h1:9QlHN18E+fEH7RdG+QAJJcuya3rqT7eXSTY7wGrAokY=
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 h1:nIPpBwaJSVYIxUFsDv3M8ofmx9yWTog9BfvIu0q41lo=
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8/go.mod h1:HUYIGzjTL3rfEspMxjDjgmT5uz5wzYJKVo23qUhYTos=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5Qo6v2eYzo7kUS51QINcR5jNpbZS8=
@@ -642,6 +668,7 @@ golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u0
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/exp v0.0.0-20230213192124-5e25df0256eb h1:PaBZQdo+iSDyHT053FjUCgZQ/9uqVwPOcl7KSWhKn6w=
+golang.org/x/exp v0.0.0-20230213192124-5e25df0256eb/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -671,8 +698,8 @@ golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI=
-golang.org/x/mod v0.9.0 h1:KENHtAZL2y3NLMYZeHY9DW8HW8V+kQyJsY/V9JlKvCs=
-golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk=
+golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -721,8 +748,8 @@ golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
-golang.org/x/net v0.8.0 h1:Zrh2ngAOFYneWTAIAPethzeaQLuHwhuBkuV6ZiRnUaQ=
-golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
+golang.org/x/net v0.9.0 h1:aWJ/m6xSmxWBx+V0XRHTlrYrPG56jKsLdTFmsSsCzOM=
+golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -735,8 +762,8 @@ golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
-golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
-golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
+golang.org/x/oauth2 v0.7.0 h1:qe6s0zUXlPX80/dITx3440hWZ7GwMwgDDyrSGTPJG/g=
+golang.org/x/oauth2 v0.7.0/go.mod h1:hPLQkd9LyjfXTiRohC/41GhcFqxisoUQ99sCUOHO9x4=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -815,18 +842,20 @@ golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220319134239-a9b59b0215f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220422013727-9388b58f7150/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220906165534-d0df966e6959/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.6.0 h1:MVltZSvRTcU2ljQOhs94SXPftV6DCNnZViHeQps87pQ=
-golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.7.0 h1:3jlCCIQZPdOYu1h8BkNvLz8Kgwtae2cagcG/VamtZRU=
+golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
-golang.org/x/term v0.6.0 h1:clScbb1cHjoCkyRbWwBEUZ5H/tIFu5TAXIqaZD0Gcjw=
-golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
+golang.org/x/term v0.7.0 h1:BEvjmm5fURWqcfbSKTdpkDXYBrUS1c0m8agp14W48vQ=
+golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -839,8 +868,8 @@ golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
-golang.org/x/text v0.8.0 h1:57P1ETyNKtuIjB4SRd15iJxuhj8Gc416Y78H3qgMh68=
-golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
+golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE=
+golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -910,8 +939,8 @@ golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA=
-golang.org/x/tools v0.7.0 h1:W4OVu8VVOaIO0yzWMNdepAulS7YfoS3Zabrm8DOXXU4=
-golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s=
+golang.org/x/tools v0.8.0 h1:vSDcovVPld282ceKgDimkRSC8kpaH1dgyc9UMzlt84Y=
+golang.org/x/tools v0.8.0/go.mod h1:JxBZ99ISMI5ViVkT1tr6tdNmXeTrcpVSD3vZ1RsRdN4=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1027,14 +1056,15 @@ google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp0
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
-google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
+google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
-gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f h1:BLraFXnmrev5lT+xlilqcH8XK9/i0At2xKjWk4p6zsU=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
@@ -1059,6 +1089,7 @@ gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gotest.tools/v3 v3.0.3 h1:4AuOwCGf4lLR9u3YOe2awrHygurzhO/HeQ6laiA6Sx0=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
diff --git a/pkg/nodeagent/apis/config/doc.go b/pkg/nodeagent/apis/config/doc.go
new file mode 100644
index 000000000000..dcae028618b4
--- /dev/null
+++ b/pkg/nodeagent/apis/config/doc.go
@@ -0,0 +1,18 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// +k8s:deepcopy-gen=package
+// +groupName=nodeagent.config.gardener.cloud
+
+package config // import "github.com/gardener/gardener/pkg/nodeagent/apis/config"
diff --git a/pkg/nodeagent/apis/config/register.go b/pkg/nodeagent/apis/config/register.go
new file mode 100644
index 000000000000..73f7bd6d97cc
--- /dev/null
+++ b/pkg/nodeagent/apis/config/register.go
@@ -0,0 +1,51 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package config
+
+import (
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+)
+
+// GroupName is the group name used in this package.
+const GroupName = "nodeagent.config.gardener.cloud"
+
+// SchemeGroupVersion is group version used to register these objects.
+var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: runtime.APIVersionInternal}
+
+// Kind takes an unqualified kind and returns a Group qualified GroupKind
+func Kind(kind string) schema.GroupKind {
+ return SchemeGroupVersion.WithKind(kind).GroupKind()
+}
+
+// Resource takes an unqualified resource and returns a Group qualified GroupResource
+func Resource(resource string) schema.GroupResource {
+ return SchemeGroupVersion.WithResource(resource).GroupResource()
+}
+
+var (
+ // SchemeBuilder is used to register the NodeAgentConfiguration resource.
+ SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
+ // AddToScheme is a pointer to SchemeBuilder.AddToScheme.
+ AddToScheme = SchemeBuilder.AddToScheme
+)
+
+// Adds the list of known types to api.Scheme.
+func addKnownTypes(scheme *runtime.Scheme) error {
+ scheme.AddKnownTypes(SchemeGroupVersion,
+ &NodeAgentConfiguration{},
+ )
+ return nil
+}
diff --git a/pkg/nodeagent/apis/config/types.go b/pkg/nodeagent/apis/config/types.go
new file mode 100644
index 000000000000..9fd2cb4664fe
--- /dev/null
+++ b/pkg/nodeagent/apis/config/types.go
@@ -0,0 +1,61 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package config
+
+import (
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+// NodeAgentConfiguration defines the configuration for the gardener-node-agent.
+type NodeAgentConfiguration struct {
+ metav1.TypeMeta
+
+ // APIServer contains the connection configuration for the gardener-node-agent to
+ // access the shoot api server.
+ APIServer APIServer
+
+ // OSCSecretName defines the name of the secret in the shoot cluster, which contains
+ // the OSC for the gardener-node-agent.
+ OSCSecretName string
+
+ // TokenSecretName defines the name of the secret in the shoot cluster, which contains
+ // the projected shoot access token for the gardener-node-agent.
+ TokenSecretName string
+
+ // Image is the docker image reference to the gardener-node-agent.
+ Image string
+ // HyperkubeImage is the docker image reference to the hyperkube containing kubelet.
+ HyperkubeImage string
+
+ // KubernetesVersion contains the kubernetes version of the kubelet, used for annotating
+ // the corresponding node resource with a kubernetes version annotation.
+ KubernetesVersion string
+
+ // KubeletDataVolumeSize sets the data volume size of an unformatted disk on the worker node,
+ // which is to be used for /var/lib on the worker.
+ KubeletDataVolumeSize *int64
+}
+
+// APIServer contains the connection configuration for the gardener-node-agent to
+// access the shoot api server.
+type APIServer struct {
+ // URL is the url to the api server.
+ URL string
+ // CA is the ca certificate for the api server.
+ CA string
+ // BootstrapToken is the initial token to fetch the projected shoot access token for
+ // kubelet and the gardener-node-agent.
+ BootstrapToken string
+}
diff --git a/pkg/nodeagent/apis/config/v1alpha1/defaults.go b/pkg/nodeagent/apis/config/v1alpha1/defaults.go
new file mode 100644
index 000000000000..c5383e1246a2
--- /dev/null
+++ b/pkg/nodeagent/apis/config/v1alpha1/defaults.go
@@ -0,0 +1,23 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1alpha1
+
+import (
+ "k8s.io/apimachinery/pkg/runtime"
+)
+
+func addDefaultingFuncs(scheme *runtime.Scheme) error {
+ return RegisterDefaults(scheme)
+}
diff --git a/pkg/nodeagent/apis/config/v1alpha1/defaults_test.go b/pkg/nodeagent/apis/config/v1alpha1/defaults_test.go
new file mode 100644
index 000000000000..5a5151e63120
--- /dev/null
+++ b/pkg/nodeagent/apis/config/v1alpha1/defaults_test.go
@@ -0,0 +1,15 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1alpha1_test
diff --git a/pkg/nodeagent/apis/config/v1alpha1/doc.go b/pkg/nodeagent/apis/config/v1alpha1/doc.go
new file mode 100644
index 000000000000..83d67799c1c7
--- /dev/null
+++ b/pkg/nodeagent/apis/config/v1alpha1/doc.go
@@ -0,0 +1,20 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// +k8s:deepcopy-gen=package
+// +k8s:conversion-gen=github.com/gardener/gardener/pkg/nodeagent/apis/config
+// +k8s:openapi-gen=true
+// +k8s:defaulter-gen=TypeMeta
+
+package v1alpha1 // import "github.com/gardener/gardener/pkg/nodeagent/apis/config/v1alpha1"
diff --git a/pkg/nodeagent/apis/config/v1alpha1/register.go b/pkg/nodeagent/apis/config/v1alpha1/register.go
new file mode 100644
index 000000000000..fa48003c3fd7
--- /dev/null
+++ b/pkg/nodeagent/apis/config/v1alpha1/register.go
@@ -0,0 +1,54 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1alpha1
+
+import (
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+)
+
+// GroupName is the group name used in this package.
+const GroupName = "nodeagent.config.gardener.cloud"
+
+// SchemeGroupVersion is group version used to register these objects
+var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1alpha1"}
+
+// Resource takes an unqualified resource and returns a Group qualified GroupResource
+func Resource(resource string) schema.GroupResource {
+ return SchemeGroupVersion.WithResource(resource).GroupResource()
+}
+
+var (
+ // SchemeBuilder is used to register the NodeAgentConfiguration resource.
+ SchemeBuilder runtime.SchemeBuilder
+ localSchemeBuilder = &SchemeBuilder
+ // AddToScheme is a pointer to SchemeBuilder.AddToScheme.
+ AddToScheme = localSchemeBuilder.AddToScheme
+)
+
+func init() {
+ // We only register manually written functions here. The registration of the
+ // generated functions takes place in the generated files. The separation
+ // makes the code compile even when the generated files are missing.
+ localSchemeBuilder.Register(addDefaultingFuncs, addKnownTypes)
+}
+
+// Adds the list of known types to api.Scheme.
+func addKnownTypes(scheme *runtime.Scheme) error {
+ scheme.AddKnownTypes(SchemeGroupVersion,
+ &NodeAgentConfiguration{},
+ )
+ return nil
+}
diff --git a/pkg/nodeagent/apis/config/v1alpha1/types.go b/pkg/nodeagent/apis/config/v1alpha1/types.go
new file mode 100644
index 000000000000..5ce5fa44d224
--- /dev/null
+++ b/pkg/nodeagent/apis/config/v1alpha1/types.go
@@ -0,0 +1,95 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1alpha1
+
+import (
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+const (
+ // NodeAgentBaseDir is the directory on the worker node that contains gardener-node-agent
+ // relevant files.
+ NodeAgentBaseDir = "/var/lib/gardener-node-agent"
+ // NodeAgentConfigPath is the file path on the worker node that contains the configuration
+ // of the gardener-node-agent.
+ NodeAgentConfigPath = NodeAgentBaseDir + "/configuration.yaml"
+ // NodeAgentInitScriptPath is the file path on the worker node that contains the init script
+ // of the gardener-node-agent.
+ NodeAgentInitScriptPath = NodeAgentBaseDir + "/gardener-node-init.sh"
+
+ // NodeInitUnitName is the name of the gardener-node-init systemd service.
+ NodeInitUnitName = "gardener-node-init.service"
+ // NodeAgentUnitName is the name of the gardener-node-agent systemd service.
+ NodeAgentUnitName = "gardener-node-agent.service"
+
+ // NodeAgentOSCSecretKey is the key inside the gardener-node-agent osc secret to access
+ // the encoded osc.
+ NodeAgentOSCSecretKey = "gardener-node-agent"
+ // NodeAgentOSCOldConfigPath is the file path on the worker node that contains the
+ // previous content of the osc
+ NodeAgentOSCOldConfigPath = NodeAgentBaseDir + "/previous-osc.yaml"
+
+ // NodeAgentTokenFilePath is the file path on the worker node that contains the shoot access
+ // token of the gardener-node-agent.
+ NodeAgentTokenFilePath = NodeAgentBaseDir + "/token"
+ // NodeAgentTokenSecretName is the name of the secret that contains the shoot access
+ // token of the gardener-node-agent.
+ NodeAgentTokenSecretName = "gardener-node-agent"
+ // NodeAgentTokenSecretKey is the key inside the gardener-node-agent token secret to access
+ // the token.
+ NodeAgentTokenSecretKey = "token"
+)
+
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+// NodeAgentConfiguration defines the configuration for the gardener-node-agent.
+type NodeAgentConfiguration struct {
+ metav1.TypeMeta
+
+ // APIServer contains the connection configuration for the gardener-node-agent to
+ // access the shoot api server.
+ APIServer APIServer `json:"apiServer"`
+
+ // OSCSecretName defines the name of the secret in the shoot cluster, which contains
+ // the OSC for the gardener-node-agent.
+ OSCSecretName string `json:"oscSecretName"`
+ // TokenSecretName defines the name of the secret in the shoot cluster, which contains
+ // the projected shoot access token for the gardener-node-agent.
+ TokenSecretName string `json:"tokenSecretName"`
+
+ // Image is the docker image reference to the gardener-node-agent.
+ Image string `json:"image"`
+ // HyperkubeImage is the docker image reference to the hyperkube containing kubelet.
+ HyperkubeImage string `json:"hyperkubeImage"`
+
+ // KubernetesVersion contains the kubernetes version of the kubelet, used for annotating
+ // the corresponding node resource with a kubernetes version annotation.
+ KubernetesVersion string `json:"kubernetesVersion"`
+
+ // KubeletDataVolumeSize sets the data volume size of an unformatted disk on the worker node,
+ // which is to be used for /var/lib on the worker.
+ // +optional
+ KubeletDataVolumeSize *int64 `json:"kubeletDataVolumeSize,omitempty"`
+}
+
+// APIServer contains the connection configuration for the gardener-node-agent to
+// access the shoot api server.
+type APIServer struct {
+ // URL is the url to the api server.
+ URL string `json:"url"`
+ // CA is the ca certificate for the api server.
+ CA string `json:"ca"`
+ // BootstrapToken is the initial token to fetch the projected shoot access token for
+ // kubelet and the gardener-node-agent.
+ BootstrapToken string `json:"bootstrapToken"`
+}
diff --git a/pkg/nodeagent/apis/config/v1alpha1/v1alpha1_suite_test.go b/pkg/nodeagent/apis/config/v1alpha1/v1alpha1_suite_test.go
new file mode 100644
index 000000000000..2d37e7b3b4fa
--- /dev/null
+++ b/pkg/nodeagent/apis/config/v1alpha1/v1alpha1_suite_test.go
@@ -0,0 +1,27 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1alpha1_test
+
+import (
+ "testing"
+
+ . "github.com/onsi/ginkgo/v2"
+ . "github.com/onsi/gomega"
+)
+
+func TestV1alpha1(t *testing.T) {
+ RegisterFailHandler(Fail)
+ RunSpecs(t, "NodeAgent APIs Config V1alpha1 Suite")
+}
diff --git a/pkg/nodeagent/apis/config/v1alpha1/zz_generated.conversion.go b/pkg/nodeagent/apis/config/v1alpha1/zz_generated.conversion.go
new file mode 100644
index 000000000000..06e844076ed0
--- /dev/null
+++ b/pkg/nodeagent/apis/config/v1alpha1/zz_generated.conversion.go
@@ -0,0 +1,120 @@
+//go:build !ignore_autogenerated
+// +build !ignore_autogenerated
+
+/*
+Copyright SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Code generated by conversion-gen. DO NOT EDIT.
+
+package v1alpha1
+
+import (
+ unsafe "unsafe"
+
+ config "github.com/gardener/gardener/pkg/nodeagent/apis/config"
+ conversion "k8s.io/apimachinery/pkg/conversion"
+ runtime "k8s.io/apimachinery/pkg/runtime"
+)
+
+func init() {
+ localSchemeBuilder.Register(RegisterConversions)
+}
+
+// RegisterConversions adds conversion functions to the given scheme.
+// Public to allow building arbitrary schemes.
+func RegisterConversions(s *runtime.Scheme) error {
+ if err := s.AddGeneratedConversionFunc((*APIServer)(nil), (*config.APIServer)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1alpha1_APIServer_To_config_APIServer(a.(*APIServer), b.(*config.APIServer), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddGeneratedConversionFunc((*config.APIServer)(nil), (*APIServer)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_config_APIServer_To_v1alpha1_APIServer(a.(*config.APIServer), b.(*APIServer), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddGeneratedConversionFunc((*NodeAgentConfiguration)(nil), (*config.NodeAgentConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1alpha1_NodeAgentConfiguration_To_config_NodeAgentConfiguration(a.(*NodeAgentConfiguration), b.(*config.NodeAgentConfiguration), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddGeneratedConversionFunc((*config.NodeAgentConfiguration)(nil), (*NodeAgentConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_config_NodeAgentConfiguration_To_v1alpha1_NodeAgentConfiguration(a.(*config.NodeAgentConfiguration), b.(*NodeAgentConfiguration), scope)
+ }); err != nil {
+ return err
+ }
+ return nil
+}
+
+func autoConvert_v1alpha1_APIServer_To_config_APIServer(in *APIServer, out *config.APIServer, s conversion.Scope) error {
+ out.URL = in.URL
+ out.CA = in.CA
+ out.BootstrapToken = in.BootstrapToken
+ return nil
+}
+
+// Convert_v1alpha1_APIServer_To_config_APIServer is an autogenerated conversion function.
+func Convert_v1alpha1_APIServer_To_config_APIServer(in *APIServer, out *config.APIServer, s conversion.Scope) error {
+ return autoConvert_v1alpha1_APIServer_To_config_APIServer(in, out, s)
+}
+
+func autoConvert_config_APIServer_To_v1alpha1_APIServer(in *config.APIServer, out *APIServer, s conversion.Scope) error {
+ out.URL = in.URL
+ out.CA = in.CA
+ out.BootstrapToken = in.BootstrapToken
+ return nil
+}
+
+// Convert_config_APIServer_To_v1alpha1_APIServer is an autogenerated conversion function.
+func Convert_config_APIServer_To_v1alpha1_APIServer(in *config.APIServer, out *APIServer, s conversion.Scope) error {
+ return autoConvert_config_APIServer_To_v1alpha1_APIServer(in, out, s)
+}
+
+func autoConvert_v1alpha1_NodeAgentConfiguration_To_config_NodeAgentConfiguration(in *NodeAgentConfiguration, out *config.NodeAgentConfiguration, s conversion.Scope) error {
+ if err := Convert_v1alpha1_APIServer_To_config_APIServer(&in.APIServer, &out.APIServer, s); err != nil {
+ return err
+ }
+ out.OSCSecretName = in.OSCSecretName
+ out.TokenSecretName = in.TokenSecretName
+ out.Image = in.Image
+ out.HyperkubeImage = in.HyperkubeImage
+ out.KubernetesVersion = in.KubernetesVersion
+ out.KubeletDataVolumeSize = (*int64)(unsafe.Pointer(in.KubeletDataVolumeSize))
+ return nil
+}
+
+// Convert_v1alpha1_NodeAgentConfiguration_To_config_NodeAgentConfiguration is an autogenerated conversion function.
+func Convert_v1alpha1_NodeAgentConfiguration_To_config_NodeAgentConfiguration(in *NodeAgentConfiguration, out *config.NodeAgentConfiguration, s conversion.Scope) error {
+ return autoConvert_v1alpha1_NodeAgentConfiguration_To_config_NodeAgentConfiguration(in, out, s)
+}
+
+func autoConvert_config_NodeAgentConfiguration_To_v1alpha1_NodeAgentConfiguration(in *config.NodeAgentConfiguration, out *NodeAgentConfiguration, s conversion.Scope) error {
+ if err := Convert_config_APIServer_To_v1alpha1_APIServer(&in.APIServer, &out.APIServer, s); err != nil {
+ return err
+ }
+ out.OSCSecretName = in.OSCSecretName
+ out.TokenSecretName = in.TokenSecretName
+ out.Image = in.Image
+ out.HyperkubeImage = in.HyperkubeImage
+ out.KubernetesVersion = in.KubernetesVersion
+ out.KubeletDataVolumeSize = (*int64)(unsafe.Pointer(in.KubeletDataVolumeSize))
+ return nil
+}
+
+// Convert_config_NodeAgentConfiguration_To_v1alpha1_NodeAgentConfiguration is an autogenerated conversion function.
+func Convert_config_NodeAgentConfiguration_To_v1alpha1_NodeAgentConfiguration(in *config.NodeAgentConfiguration, out *NodeAgentConfiguration, s conversion.Scope) error {
+ return autoConvert_config_NodeAgentConfiguration_To_v1alpha1_NodeAgentConfiguration(in, out, s)
+}
diff --git a/pkg/nodeagent/apis/config/v1alpha1/zz_generated.deepcopy.go b/pkg/nodeagent/apis/config/v1alpha1/zz_generated.deepcopy.go
new file mode 100644
index 000000000000..68bbb9ffeb1c
--- /dev/null
+++ b/pkg/nodeagent/apis/config/v1alpha1/zz_generated.deepcopy.go
@@ -0,0 +1,73 @@
+//go:build !ignore_autogenerated
+// +build !ignore_autogenerated
+
+/*
+Copyright SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Code generated by deepcopy-gen. DO NOT EDIT.
+
+package v1alpha1
+
+import (
+ runtime "k8s.io/apimachinery/pkg/runtime"
+)
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *APIServer) DeepCopyInto(out *APIServer) {
+ *out = *in
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new APIServer.
+func (in *APIServer) DeepCopy() *APIServer {
+ if in == nil {
+ return nil
+ }
+ out := new(APIServer)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *NodeAgentConfiguration) DeepCopyInto(out *NodeAgentConfiguration) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ out.APIServer = in.APIServer
+ if in.KubeletDataVolumeSize != nil {
+ in, out := &in.KubeletDataVolumeSize, &out.KubeletDataVolumeSize
+ *out = new(int64)
+ **out = **in
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeAgentConfiguration.
+func (in *NodeAgentConfiguration) DeepCopy() *NodeAgentConfiguration {
+ if in == nil {
+ return nil
+ }
+ out := new(NodeAgentConfiguration)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *NodeAgentConfiguration) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
diff --git a/pkg/nodeagent/apis/config/v1alpha1/zz_generated.defaults.go b/pkg/nodeagent/apis/config/v1alpha1/zz_generated.defaults.go
new file mode 100644
index 000000000000..441363770fb7
--- /dev/null
+++ b/pkg/nodeagent/apis/config/v1alpha1/zz_generated.defaults.go
@@ -0,0 +1,33 @@
+//go:build !ignore_autogenerated
+// +build !ignore_autogenerated
+
+/*
+Copyright SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Code generated by defaulter-gen. DO NOT EDIT.
+
+package v1alpha1
+
+import (
+ runtime "k8s.io/apimachinery/pkg/runtime"
+)
+
+// RegisterDefaults adds defaulters functions to the given scheme.
+// Public to allow building arbitrary schemes.
+// All generated defaulters are covering - they call all nested defaulters.
+func RegisterDefaults(scheme *runtime.Scheme) error {
+ return nil
+}
diff --git a/pkg/nodeagent/apis/config/validation/validation.go b/pkg/nodeagent/apis/config/validation/validation.go
new file mode 100644
index 000000000000..13861b0fbcc9
--- /dev/null
+++ b/pkg/nodeagent/apis/config/validation/validation.go
@@ -0,0 +1,61 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package validation
+
+import (
+ "k8s.io/apimachinery/pkg/util/validation/field"
+
+ "github.com/gardener/gardener/pkg/nodeagent/apis/config"
+)
+
+// ValidateNodeAgentConfiguration validates the given `NodeAgentConfiguration`.
+func ValidateNodeAgentConfiguration(conf *config.NodeAgentConfiguration) field.ErrorList {
+ allErrs := field.ErrorList{}
+ fldPath := field.NewPath("nodeagent")
+
+ configFldPath := fldPath.Child("config")
+
+ if conf.APIVersion == "" {
+ allErrs = append(allErrs, field.Required(configFldPath.Child("apiversion"), "must provide a apiversion"))
+ }
+ if conf.HyperkubeImage == "" {
+ allErrs = append(allErrs, field.Required(configFldPath.Child("hyperkubeimage"), "must provide a hyperkubeimage"))
+ }
+ if conf.Image == "" {
+ allErrs = append(allErrs, field.Required(configFldPath.Child("image"), "must provide a image"))
+ }
+ if conf.KubernetesVersion == "" {
+ allErrs = append(allErrs, field.Required(configFldPath.Child("kubernetesversion"), "must provide a kubernetesversion"))
+ }
+ if conf.OSCSecretName == "" {
+ allErrs = append(allErrs, field.Required(configFldPath.Child("oscsecretname"), "must provide a oscsecretname"))
+ }
+ if conf.TokenSecretName == "" {
+ allErrs = append(allErrs, field.Required(configFldPath.Child("tokensecretname"), "must provide a tokensecretname"))
+ }
+
+ apiserverFldPath := configFldPath.Child("apiserver")
+
+ if conf.APIServer.URL == "" {
+ allErrs = append(allErrs, field.Required(apiserverFldPath.Child("url"), "must provide a url"))
+ }
+ if conf.APIServer.CA == "" {
+ allErrs = append(allErrs, field.Required(apiserverFldPath.Child("ca"), "must provide a ca"))
+ }
+ if conf.APIServer.BootstrapToken == "" {
+ allErrs = append(allErrs, field.Required(apiserverFldPath.Child("bootstraptoken"), "must provide a bootstraptoken"))
+ }
+ return allErrs
+}
diff --git a/pkg/nodeagent/apis/config/validation/validation_suite_test.go b/pkg/nodeagent/apis/config/validation/validation_suite_test.go
new file mode 100644
index 000000000000..7584d06162b1
--- /dev/null
+++ b/pkg/nodeagent/apis/config/validation/validation_suite_test.go
@@ -0,0 +1,27 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package validation_test
+
+import (
+ "testing"
+
+ . "github.com/onsi/ginkgo/v2"
+ . "github.com/onsi/gomega"
+)
+
+func TestValidation(t *testing.T) {
+ RegisterFailHandler(Fail)
+ RunSpecs(t, "ControllerManager APIs Config Validation Suite")
+}
diff --git a/pkg/nodeagent/apis/config/validation/validation_test.go b/pkg/nodeagent/apis/config/validation/validation_test.go
new file mode 100644
index 000000000000..c152c5789214
--- /dev/null
+++ b/pkg/nodeagent/apis/config/validation/validation_test.go
@@ -0,0 +1,85 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package validation_test
+
+import (
+ . "github.com/onsi/ginkgo/v2"
+ . "github.com/onsi/gomega"
+ . "github.com/onsi/gomega/gstruct"
+ "k8s.io/apimachinery/pkg/util/validation/field"
+
+ "github.com/gardener/gardener/pkg/nodeagent/apis/config"
+ . "github.com/gardener/gardener/pkg/nodeagent/apis/config/validation"
+)
+
+var _ = Describe("#ValidateNodeAgentConfiguration", func() {
+ var conf *config.NodeAgentConfiguration
+
+ BeforeEach(func() {
+ conf = &config.NodeAgentConfiguration{
+ APIServer: config.APIServer{},
+ }
+ })
+
+ Context("NodeAgentConfiguration", func() {
+ It("should pass because apiversion is specified", func() {
+ conf.APIVersion = "1.2.3"
+ conf.HyperkubeImage = "registry.com/hyperkube:v1.27.0"
+ conf.Image = "registry.com/node-agent:v1.73.0"
+ conf.KubernetesVersion = "v1.27.0"
+ conf.OSCSecretName = "osc-secret"
+ conf.TokenSecretName = "token-secret"
+ conf.APIServer.BootstrapToken = "bootstraptoken"
+ conf.APIServer.CA = "base64 encoded ca"
+ conf.APIServer.URL = "https://api.shoot.foo.bar"
+ errorList := ValidateNodeAgentConfiguration(conf)
+ Expect(errorList).To(BeEmpty())
+ })
+ It("should fail because apiversion config is not specified", func() {
+ conf.HyperkubeImage = "registry.com/hyperkube:v1.27.0"
+ conf.Image = "registry.com/node-agent:v1.73.0"
+ conf.KubernetesVersion = "v1.27.0"
+ conf.OSCSecretName = "osc-secret"
+ conf.TokenSecretName = "token-secret"
+ conf.APIServer.BootstrapToken = "bootstraptoken"
+ conf.APIServer.CA = "base64 encoded ca"
+ conf.APIServer.URL = "https://api.shoot.foo.bar"
+ errorList := ValidateNodeAgentConfiguration(conf)
+ Expect(errorList).To(ConsistOf(
+ PointTo(MatchFields(IgnoreExtras, Fields{
+ "Type": Equal(field.ErrorTypeRequired),
+ "Field": Equal("nodeagent.config.apiversion"),
+ })),
+ ))
+ })
+ It("should fail because apiserver.URL config is not specified", func() {
+ conf.APIVersion = "1.2.3"
+ conf.HyperkubeImage = "registry.com/hyperkube:v1.27.0"
+ conf.Image = "registry.com/node-agent:v1.73.0"
+ conf.KubernetesVersion = "v1.27.0"
+ conf.OSCSecretName = "osc-secret"
+ conf.TokenSecretName = "token-secret"
+ conf.APIServer.BootstrapToken = "bootstraptoken"
+ conf.APIServer.CA = "base64 encoded ca"
+ errorList := ValidateNodeAgentConfiguration(conf)
+ Expect(errorList).To(ConsistOf(
+ PointTo(MatchFields(IgnoreExtras, Fields{
+ "Type": Equal(field.ErrorTypeRequired),
+ "Field": Equal("nodeagent.config.apiserver.url"),
+ })),
+ ))
+ })
+ })
+})
diff --git a/pkg/nodeagent/apis/config/zz_generated.deepcopy.go b/pkg/nodeagent/apis/config/zz_generated.deepcopy.go
new file mode 100644
index 000000000000..778f01ef7619
--- /dev/null
+++ b/pkg/nodeagent/apis/config/zz_generated.deepcopy.go
@@ -0,0 +1,73 @@
+//go:build !ignore_autogenerated
+// +build !ignore_autogenerated
+
+/*
+Copyright SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Code generated by deepcopy-gen. DO NOT EDIT.
+
+package config
+
+import (
+ runtime "k8s.io/apimachinery/pkg/runtime"
+)
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *APIServer) DeepCopyInto(out *APIServer) {
+ *out = *in
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new APIServer.
+func (in *APIServer) DeepCopy() *APIServer {
+ if in == nil {
+ return nil
+ }
+ out := new(APIServer)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *NodeAgentConfiguration) DeepCopyInto(out *NodeAgentConfiguration) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ out.APIServer = in.APIServer
+ if in.KubeletDataVolumeSize != nil {
+ in, out := &in.KubeletDataVolumeSize, &out.KubeletDataVolumeSize
+ *out = new(int64)
+ **out = **in
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeAgentConfiguration.
+func (in *NodeAgentConfiguration) DeepCopy() *NodeAgentConfiguration {
+ if in == nil {
+ return nil
+ }
+ out := new(NodeAgentConfiguration)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *NodeAgentConfiguration) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
diff --git a/pkg/nodeagent/dbus/dbus.go b/pkg/nodeagent/dbus/dbus.go
new file mode 100644
index 000000000000..e2e17c8c2684
--- /dev/null
+++ b/pkg/nodeagent/dbus/dbus.go
@@ -0,0 +1,158 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package dbus
+
+import (
+ "context"
+ "fmt"
+
+ "github.com/coreos/go-systemd/v22/dbus"
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/client-go/tools/record"
+)
+
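+// done is the job result string systemd reports for a successfully completed job.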
+const done = "done"
+
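+// Dbus offers a subset of systemd operations via D-Bus, mirroring the respective systemctl commands.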
+type Dbus interface {
+ // Enable the given units, same as executing systemctl enable unit
+ Enable(ctx context.Context, unitNames ...string) error
+ // Disable the given units, same as executing systemctl disable unit
+ Disable(ctx context.Context, unitNames ...string) error
+ // Start the given unit and record an event to the node object, same as executing systemctl start unit
+ Start(ctx context.Context, recorder record.EventRecorder, node *corev1.Node, unitName string) error
+ // Stop the given unit and record an event to the node object, same as executing systemctl stop unit
+ Stop(ctx context.Context, recorder record.EventRecorder, node *corev1.Node, unitName string) error
+ // Restart the given unit and record an event to the node object, same as executing systemctl restart unit
+ Restart(ctx context.Context, recorder record.EventRecorder, node *corev1.Node, unitName string) error
+ // DaemonReload reloads the systemd configuration, same as executing systemctl daemon-reload
+ DaemonReload(ctx context.Context) error
+}
+type db struct{}
+
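+// New returns a Dbus implementation backed by a real systemd D-Bus connection.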
+func New() Dbus {
+ return &db{}
+}
+
+func (*db) Enable(ctx context.Context, unitNames ...string) error {
+ dbc, err := dbus.NewWithContext(ctx)
+ if err != nil {
+ return fmt.Errorf("unable to connect to dbus: %w", err)
+ }
+ defer dbc.Close()
+
+ _, _, err = dbc.EnableUnitFilesContext(ctx, unitNames, false, true)
+ return err
+}
+
+func (*db) Disable(ctx context.Context, unitNames ...string) error {
+ dbc, err := dbus.NewWithContext(ctx)
+ if err != nil {
+ return fmt.Errorf("unable to connect to dbus: %w", err)
+ }
+ defer dbc.Close()
+
+ _, err = dbc.DisableUnitFilesContext(ctx, unitNames, false)
+ return err
+}
+
+func (*db) Stop(ctx context.Context, recorder record.EventRecorder, node *corev1.Node, unitName string) error {
+ dbc, err := dbus.NewWithContext(ctx)
+ if err != nil {
+ return fmt.Errorf("unable to connect to dbus: %w", err)
+ }
+ defer dbc.Close()
+
+ c := make(chan string)
+
+ if _, err := dbc.StopUnitContext(ctx, unitName, "replace", c); err != nil {
+ return fmt.Errorf("unable to stop unit %s: %w", unitName, err)
+ }
+
+ job := <-c
+ if job != done {
+ err = fmt.Errorf("stop failed for %s", job)
+ }
+
+ recordEvent(recorder, node, err, unitName, "SystemDUnitStop", "stop")
+ return err
+}
+
+func (*db) Start(ctx context.Context, recorder record.EventRecorder, node *corev1.Node, unitName string) error {
+ dbc, err := dbus.NewWithContext(ctx)
+ if err != nil {
+ return fmt.Errorf("unable to connect to dbus: %w", err)
+ }
+ defer dbc.Close()
+
+ c := make(chan string)
+
+ if _, err := dbc.StartUnitContext(ctx, unitName, "replace", c); err != nil {
+ return fmt.Errorf("unable to start unit %s: %w", unitName, err)
+ }
+
+ job := <-c
+ if job != done {
+ err = fmt.Errorf("start failed for %s", job)
+ }
+
+ recordEvent(recorder, node, err, unitName, "SystemDUnitStart", "start")
+ return err
+}
+
+func (*db) Restart(ctx context.Context, recorder record.EventRecorder, node *corev1.Node, unitName string) error {
+ dbc, err := dbus.NewWithContext(ctx)
+ if err != nil {
+ return fmt.Errorf("unable to connect to dbus: %w", err)
+ }
+ defer dbc.Close()
+
+ c := make(chan string)
+
+ if _, err := dbc.RestartUnitContext(ctx, unitName, "replace", c); err != nil {
+ return fmt.Errorf("unable to restart unit %s: %w", unitName, err)
+ }
+
+ job := <-c
+ if job != done {
+ err = fmt.Errorf("restart failed %s", job)
+ }
+
+ recordEvent(recorder, node, err, unitName, "SystemDUnitRestart", "restart")
+ return err
+}
+
+func (*db) DaemonReload(ctx context.Context) error {
+ dbc, err := dbus.NewWithContext(ctx)
+ if err != nil {
+ return fmt.Errorf("unable to connect to dbus: %w", err)
+ }
+ defer dbc.Close()
+
+ if err := dbc.ReloadContext(ctx); err != nil {
+ return fmt.Errorf("systemd daemon-reload failed: %w", err)
+ }
+
+ return nil
+}
+
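+// recordEvent records a normal or warning event on the given node, depending on whether err is nil; it does nothing if recorder or node is nil.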
+func recordEvent(recorder record.EventRecorder, node *corev1.Node, err error, unitName, reason, msg string) {
+ if recorder != nil && node != nil {
+ if err == nil {
+ recorder.Event(node, corev1.EventTypeNormal, reason, fmt.Sprintf("successfully %s unit %q", msg, unitName))
+ } else {
+ recorder.Event(node, corev1.EventTypeWarning, reason, fmt.Sprintf("failed to %s unit %q %v", msg, unitName, err))
+ }
+ }
+}
diff --git a/pkg/nodeagent/dbus/dbus_suite_test.go b/pkg/nodeagent/dbus/dbus_suite_test.go
new file mode 100644
index 000000000000..b00f1be89cf5
--- /dev/null
+++ b/pkg/nodeagent/dbus/dbus_suite_test.go
@@ -0,0 +1,13 @@
+package dbus_test
+
+import (
+ "testing"
+
+ . "github.com/onsi/ginkgo/v2"
+ . "github.com/onsi/gomega"
+)
+
+func TestDbus(t *testing.T) {
+ RegisterFailHandler(Fail)
+ RunSpecs(t, "Dbus")
+}
diff --git a/pkg/nodeagent/dbus/dbus_test.go b/pkg/nodeagent/dbus/dbus_test.go
new file mode 100644
index 000000000000..9cad2874dabd
--- /dev/null
+++ b/pkg/nodeagent/dbus/dbus_test.go
@@ -0,0 +1,91 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package dbus_test
+
+import (
+ "context"
+
+ . "github.com/onsi/ginkgo/v2"
+ . "github.com/onsi/gomega"
+
+ "github.com/gardener/gardener/pkg/nodeagent/dbus"
+)
+
+var _ = Describe("Dbus", func() {
+ It("should enable a unit", func() {
+ d := &dbus.FakeDbus{}
+ Expect(d.Enable(context.Background(), "kubelet")).Should(Succeed())
+ Expect(d.Actions).Should(Equal([]dbus.FakeSystemdAction{
+ {
+ Action: dbus.FakeEnable,
+ UnitNames: []string{"kubelet"},
+ },
+ }))
+ })
+
+ It("should disable a unit", func() {
+ d := &dbus.FakeDbus{}
+ Expect(d.Disable(context.Background(), "kubelet")).Should(Succeed())
+ Expect(d.Actions).Should(Equal([]dbus.FakeSystemdAction{
+ {
+ Action: dbus.FakeDisable,
+ UnitNames: []string{"kubelet"},
+ },
+ }))
+ })
+
+ It("should restart a unit", func() {
+ d := &dbus.FakeDbus{}
+ Expect(d.Restart(context.Background(), nil, nil, "kubelet")).Should(Succeed())
+ Expect(d.Actions).Should(Equal([]dbus.FakeSystemdAction{
+ {
+ Action: dbus.FakeRestart,
+ UnitNames: []string{"kubelet"},
+ },
+ }))
+ })
+
+ It("should start a unit", func() {
+ d := &dbus.FakeDbus{}
+ Expect(d.Start(context.Background(), nil, nil, "kubelet")).Should(Succeed())
+ Expect(d.Actions).Should(Equal([]dbus.FakeSystemdAction{
+ {
+ Action: dbus.FakeStart,
+ UnitNames: []string{"kubelet"},
+ },
+ }))
+ })
+
+ It("should stop a unit", func() {
+ d := &dbus.FakeDbus{}
+ Expect(d.Stop(context.Background(), nil, nil, "kubelet")).Should(Succeed())
+ Expect(d.Actions).Should(Equal([]dbus.FakeSystemdAction{
+ {
+ Action: dbus.FakeStop,
+ UnitNames: []string{"kubelet"},
+ },
+ }))
+ })
+
+ It("should reload deamon", func() {
+ d := &dbus.FakeDbus{}
+ Expect(d.DaemonReload(context.Background())).Should(Succeed())
+ Expect(d.Actions).Should(Equal([]dbus.FakeSystemdAction{
+ {
+ Action: dbus.FakeDeamonReload,
+ },
+ }))
+ })
+})
diff --git a/pkg/nodeagent/dbus/fakedbus.go b/pkg/nodeagent/dbus/fakedbus.go
new file mode 100644
index 000000000000..2dc4c0388f79
--- /dev/null
+++ b/pkg/nodeagent/dbus/fakedbus.go
@@ -0,0 +1,82 @@
+package dbus
+
+import (
+ "context"
+
+ v1 "k8s.io/api/core/v1"
+ "k8s.io/client-go/tools/record"
+)
+
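+// FakeAction is the type of systemd operation recorded by FakeDbus.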
+type FakeAction int
+
+const (
+ FakeDeamonReload FakeAction = iota
+ FakeDisable
+ FakeEnable
+ FakeRestart
+ FakeStart
+ FakeStop
+)
+
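+// FakeSystemdAction describes a single recorded systemd operation and the unit names it affected.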
+type FakeSystemdAction struct {
+ Action FakeAction
+ UnitNames []string
+}
+
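+// FakeDbus is a fake implementation of the Dbus interface which only records the performed actions instead of talking to systemd; intended for unit tests.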
+type FakeDbus struct {
+ Actions []FakeSystemdAction
+}
+
+// DaemonReload implements Dbus.
+func (f *FakeDbus) DaemonReload(ctx context.Context) error {
+ f.Actions = append(f.Actions, FakeSystemdAction{
+ Action: FakeDeamonReload,
+ })
+ return nil
+}
+
+// Disable implements Dbus.
+func (f *FakeDbus) Disable(ctx context.Context, unitNames ...string) error {
+ f.Actions = append(f.Actions, FakeSystemdAction{
+ Action: FakeDisable,
+ UnitNames: unitNames,
+ })
+ return nil
+}
+
+// Enable implements Dbus.
+func (f *FakeDbus) Enable(ctx context.Context, unitNames ...string) error {
+ f.Actions = append(f.Actions, FakeSystemdAction{
+ Action: FakeEnable,
+ UnitNames: unitNames,
+ })
+
+ return nil
+}
+
+// Restart implements Dbus.
+func (f *FakeDbus) Restart(ctx context.Context, recorder record.EventRecorder, node *v1.Node, unitName string) error {
+ f.Actions = append(f.Actions, FakeSystemdAction{
+ Action: FakeRestart,
+ UnitNames: []string{unitName},
+ })
+ return nil
+}
+
+// Start implements Dbus.
+func (f *FakeDbus) Start(ctx context.Context, recorder record.EventRecorder, node *v1.Node, unitName string) error {
+ f.Actions = append(f.Actions, FakeSystemdAction{
+ Action: FakeStart,
+ UnitNames: []string{unitName},
+ })
+ return nil
+}
+
+// Stop implements Dbus.
+func (f *FakeDbus) Stop(ctx context.Context, recorder record.EventRecorder, node *v1.Node, unitName string) error {
+ f.Actions = append(f.Actions, FakeSystemdAction{
+ Action: FakeStop,
+ UnitNames: []string{unitName},
+ })
+ return nil
+}
diff --git a/pkg/nodeagent/registry/fake_extractor.go b/pkg/nodeagent/registry/fake_extractor.go
new file mode 100644
index 000000000000..fe50bd2d6569
--- /dev/null
+++ b/pkg/nodeagent/registry/fake_extractor.go
@@ -0,0 +1,34 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package registry
+
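+// FakeRegistryExtractor is a fake implementation of Extractor which records the requested extractions instead of pulling images; intended for unit tests.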
+type FakeRegistryExtractor struct {
+ Extractions []FakeExtraction
+}
+
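+// FakeExtraction captures the arguments of a single ExtractFromLayer call.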
+type FakeExtraction struct {
+ Image string
+ PathSuffix string
+ Dest string
+}
+
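+// ExtractFromLayer implements Extractor by only recording the requested extraction.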
+func (f *FakeRegistryExtractor) ExtractFromLayer(image, pathSuffix, dest string) error {
+ f.Extractions = append(f.Extractions, FakeExtraction{
+ Image: image,
+ PathSuffix: pathSuffix,
+ Dest: dest,
+ })
+ return nil
+}
diff --git a/pkg/nodeagent/registry/registry.go b/pkg/nodeagent/registry/registry.go
new file mode 100644
index 000000000000..43f306ca8a96
--- /dev/null
+++ b/pkg/nodeagent/registry/registry.go
@@ -0,0 +1,9 @@
+package registry
+
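+// Extractor downloads an OCI image and extracts a single file from one of its layers to a destination on the local file system.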
+type Extractor interface {
+ ExtractFromLayer(image, pathSuffix, dest string) error
+}
+
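+// NewExtractor returns an Extractor which pulls images from a remote container registry.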
+func NewExtractor() Extractor {
+ return remoteExtractor{}
+}
diff --git a/pkg/nodeagent/registry/remote_extractor.go b/pkg/nodeagent/registry/remote_extractor.go
new file mode 100644
index 000000000000..db8d3db90788
--- /dev/null
+++ b/pkg/nodeagent/registry/remote_extractor.go
@@ -0,0 +1,120 @@
+// Copyright 2023 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package registry
+
+import (
+ "archive/tar"
+ "errors"
+ "fmt"
+ "io"
+ "os"
+ "runtime"
+ "strings"
+
+ "github.com/google/go-containerregistry/pkg/name"
+ containerregistryv1 "github.com/google/go-containerregistry/pkg/v1"
+ "github.com/google/go-containerregistry/pkg/v1/remote"
+)
+
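+// remoteExtractor is the default Extractor implementation, pulling images from a remote container registry.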
+type remoteExtractor struct{}
+
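+// ExtractFromLayer implements Extractor by pulling the image, walking its layers, and copying the first file whose path ends with pathSuffix to dest.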
+func (remoteExtractor) ExtractFromLayer(image, pathSuffix, dest string) error {
+ // In the local environment, we pull Gardener images built via skaffold from the local registry running in the kind
+ // cluster. However, on local machine pods, `localhost:5001` obviously does not lead to this registry. Hence, we
+ // have to replace it with `garden.local.gardener.cloud:5001` which allows accessing the registry from both local
+ // machine and machine pods.
+ image = strings.ReplaceAll(image, "localhost:5001", "garden.local.gardener.cloud:5001")
+
+ imageRef, err := name.ParseReference(image, name.Insecure)
+ if err != nil {
+ return fmt.Errorf("unable to parse reference: %w", err)
+ }
+
+ remoteImage, err := remote.Image(imageRef, remote.WithPlatform(containerregistryv1.Platform{OS: "linux", Architecture: runtime.GOARCH}))
+ if err != nil {
+ return fmt.Errorf("unable access remote image reference: %w", err)
+ }
+
+ layers, err := remoteImage.Layers()
+ if err != nil {
+ return fmt.Errorf("unable retrieve image layers: %w", err)
+ }
+
+ success := false
+
+ for _, layer := range layers {
+ buffer, err := layer.Uncompressed()
+ if err != nil {
+ return fmt.Errorf("unable to get reader for uncompressed layer: %w", err)
+ }
+
+ if err = extractTarGz(buffer, pathSuffix, dest); err != nil {
+ if errors.Is(err, notFound) {
+ continue
+ }
+ return fmt.Errorf("unable to extract tarball to file system: %w", err)
+ }
+
+ success = true
+ break
+ }
+
+ if !success {
+ return fmt.Errorf("did not find file %q in layer", pathSuffix)
+ }
+
+ return nil
+}
+
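+// notFound is returned by extractTarGz when the requested file is not part of the inspected layer.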
+var notFound = errors.New("file not contained in tar")
+
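+// extractTarGz scans the uncompressed tar stream for a regular file whose name ends with filenameInArchive, writes it to a temporary file, and atomically renames it to destinationOnOS; it returns notFound if no matching file exists.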
+func extractTarGz(uncompressedStream io.Reader, filenameInArchive, destinationOnOS string) error {
+ var (
+ tarReader = tar.NewReader(uncompressedStream)
+ header *tar.Header
+ err error
+ )
+
+ for header, err = tarReader.Next(); err == nil; header, err = tarReader.Next() {
+ switch header.Typeflag {
+ case tar.TypeReg:
+ if !strings.HasSuffix(header.Name, filenameInArchive) {
+ continue
+ }
+
+ tmpDest := destinationOnOS + ".tmp"
+
+ outFile, err := os.OpenFile(tmpDest, os.O_CREATE|os.O_TRUNC|os.O_RDWR, 0755)
+ if err != nil {
+ return fmt.Errorf("create file failed: %w", err)
+ }
+
+ defer outFile.Close()
+
+ if _, err := io.Copy(outFile, tarReader); err != nil {
+ return fmt.Errorf("copying file from tarball failed: %w", err)
+ }
+
+ return os.Rename(tmpDest, destinationOnOS)
+ default:
+ continue
+ }
+ }
+ if err != io.EOF {
+ return fmt.Errorf("iterating tar files failed: %w", err)
+ }
+
+ return notFound
+}
diff --git a/pkg/utils/images/images.go b/pkg/utils/images/images.go
index f7aa1a1db9ae..08876aac6aa7 100644
--- a/pkg/utils/images/images.go
+++ b/pkg/utils/images/images.go
@@ -51,6 +51,8 @@ const (
ImageNameFluentBitPluginInstaller = "fluent-bit-plugin-installer"
// ImageNameFluentOperator is a constant for an image in the image vector with name 'fluent-operator'.
ImageNameFluentOperator = "fluent-operator"
+ // ImageNameGardenerNodeAgent is a constant for an image in the image vector with name 'gardener-node-agent'.
+ ImageNameGardenerNodeAgent = "gardener-node-agent"
// ImageNameGardenerResourceManager is a constant for an image in the image vector with name 'gardener-resource-manager'.
ImageNameGardenerResourceManager = "gardener-resource-manager"
// ImageNameGardenlet is a constant for an image in the image vector with name 'gardenlet'.
@@ -87,6 +89,8 @@ const (
ImageNameMetricsServer = "metrics-server"
// ImageNameNginxIngressController is a constant for an image in the image vector with name 'nginx-ingress-controller'.
ImageNameNginxIngressController = "nginx-ingress-controller"
+ // ImageNameNginxIngressControllerSeed is a constant for an image in the image vector with name 'nginx-ingress-controller-seed'.
+ ImageNameNginxIngressControllerSeed = "nginx-ingress-controller-seed"
// ImageNameNodeExporter is a constant for an image in the image vector with name 'node-exporter'.
ImageNameNodeExporter = "node-exporter"
// ImageNameNodeLocalDns is a constant for an image in the image vector with name 'node-local-dns'.
diff --git a/vendor/github.com/BurntSushi/toml/.gitignore b/vendor/github.com/BurntSushi/toml/.gitignore
index cd11be965309..fe79e3adda29 100644
--- a/vendor/github.com/BurntSushi/toml/.gitignore
+++ b/vendor/github.com/BurntSushi/toml/.gitignore
@@ -1,2 +1,2 @@
-toml.test
+/toml.test
/toml-test
diff --git a/vendor/github.com/BurntSushi/toml/COMPATIBLE b/vendor/github.com/BurntSushi/toml/COMPATIBLE
deleted file mode 100644
index f621b01196cb..000000000000
--- a/vendor/github.com/BurntSushi/toml/COMPATIBLE
+++ /dev/null
@@ -1 +0,0 @@
-Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).
diff --git a/vendor/github.com/BurntSushi/toml/README.md b/vendor/github.com/BurntSushi/toml/README.md
index cc13f8667fbd..3651cfa96092 100644
--- a/vendor/github.com/BurntSushi/toml/README.md
+++ b/vendor/github.com/BurntSushi/toml/README.md
@@ -1,6 +1,5 @@
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
-reflection interface similar to Go's standard library `json` and `xml`
-packages.
+reflection interface similar to Go's standard library `json` and `xml` packages.
Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).
@@ -10,7 +9,7 @@ See the [releases page](https://github.com/BurntSushi/toml/releases) for a
changelog; this information is also in the git tag annotations (e.g. `git show
v0.4.0`).
-This library requires Go 1.13 or newer; install it with:
+This library requires Go 1.13 or newer; add it to your go.mod with:
% go get github.com/BurntSushi/toml@latest
@@ -19,16 +18,7 @@ It also comes with a TOML validator CLI tool:
% go install github.com/BurntSushi/toml/cmd/tomlv@latest
% tomlv some-toml-file.toml
-### Testing
-This package passes all tests in [toml-test] for both the decoder and the
-encoder.
-
-[toml-test]: https://github.com/BurntSushi/toml-test
-
### Examples
-This package works similar to how the Go standard library handles XML and JSON.
-Namely, data is loaded into Go values via reflection.
-
For the simplest example, consider some TOML file as just a list of keys and
values:
@@ -40,7 +30,7 @@ Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
-Which could be defined in Go as:
+Which can be decoded with:
```go
type Config struct {
@@ -48,20 +38,15 @@ type Config struct {
Cats []string
Pi float64
Perfection []int
- DOB time.Time // requires `import time`
+ DOB time.Time
}
-```
-
-And then decoded with:
-```go
var conf Config
-err := toml.Decode(tomlData, &conf)
-// handle error
+_, err := toml.Decode(tomlData, &conf)
```
-You can also use struct tags if your struct field name doesn't map to a TOML
-key value directly:
+You can also use struct tags if your struct field name doesn't map to a TOML key
+value directly:
```toml
some_key_NAME = "wat"
@@ -73,139 +58,63 @@ type TOML struct {
}
```
-Beware that like other most other decoders **only exported fields** are
-considered when encoding and decoding; private fields are silently ignored.
+Beware that like other decoders **only exported fields** are considered when
+encoding and decoding; private fields are silently ignored.
### Using the `Marshaler` and `encoding.TextUnmarshaler` interfaces
-Here's an example that automatically parses duration strings into
-`time.Duration` values:
+Here's an example that automatically parses values in a `mail.Address`:
```toml
-[[song]]
-name = "Thunder Road"
-duration = "4m49s"
-
-[[song]]
-name = "Stairway to Heaven"
-duration = "8m03s"
-```
-
-Which can be decoded with:
-
-```go
-type song struct {
- Name string
- Duration duration
-}
-type songs struct {
- Song []song
-}
-var favorites songs
-if _, err := toml.Decode(blob, &favorites); err != nil {
- log.Fatal(err)
-}
-
-for _, s := range favorites.Song {
- fmt.Printf("%s (%s)\n", s.Name, s.Duration)
-}
+contacts = [
+ "Donald Duck ",
+ "Scrooge McDuck ",
+]
```
-And you'll also need a `duration` type that satisfies the
-`encoding.TextUnmarshaler` interface:
+Can be decoded with:
```go
-type duration struct {
- time.Duration
+// Create address type which satisfies the encoding.TextUnmarshaler interface.
+type address struct {
+ *mail.Address
}
-func (d *duration) UnmarshalText(text []byte) error {
+func (a *address) UnmarshalText(text []byte) error {
var err error
- d.Duration, err = time.ParseDuration(string(text))
+ a.Address, err = mail.ParseAddress(string(text))
return err
}
+
+// Decode it.
+func decode() {
+ blob := `
+ contacts = [
+ "Donald Duck ",
+ "Scrooge McDuck ",
+ ]
+ `
+
+ var contacts struct {
+ Contacts []address
+ }
+
+ _, err := toml.Decode(blob, &contacts)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ for _, c := range contacts.Contacts {
+ fmt.Printf("%#v\n", c.Address)
+ }
+
+ // Output:
+ // &mail.Address{Name:"Donald Duck", Address:"donald@duckburg.com"}
+ // &mail.Address{Name:"Scrooge McDuck", Address:"scrooge@duckburg.com"}
+}
```
To target TOML specifically you can implement `UnmarshalTOML` TOML interface in
a similar way.
### More complex usage
-Here's an example of how to load the example from the official spec page:
-
-```toml
-# This is a TOML document. Boom.
-
-title = "TOML Example"
-
-[owner]
-name = "Tom Preston-Werner"
-organization = "GitHub"
-bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
-dob = 1979-05-27T07:32:00Z # First class dates? Why not?
-
-[database]
-server = "192.168.1.1"
-ports = [ 8001, 8001, 8002 ]
-connection_max = 5000
-enabled = true
-
-[servers]
-
- # You can indent as you please. Tabs or spaces. TOML don't care.
- [servers.alpha]
- ip = "10.0.0.1"
- dc = "eqdc10"
-
- [servers.beta]
- ip = "10.0.0.2"
- dc = "eqdc10"
-
-[clients]
-data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
-
-# Line breaks are OK when inside arrays
-hosts = [
- "alpha",
- "omega"
-]
-```
-
-And the corresponding Go types are:
-
-```go
-type tomlConfig struct {
- Title string
- Owner ownerInfo
- DB database `toml:"database"`
- Servers map[string]server
- Clients clients
-}
-
-type ownerInfo struct {
- Name string
- Org string `toml:"organization"`
- Bio string
- DOB time.Time
-}
-
-type database struct {
- Server string
- Ports []int
- ConnMax int `toml:"connection_max"`
- Enabled bool
-}
-
-type server struct {
- IP string
- DC string
-}
-
-type clients struct {
- Data [][]interface{}
- Hosts []string
-}
-```
-
-Note that a case insensitive match will be tried if an exact match can't be
-found.
-
-A working example of the above can be found in `_example/example.{go,toml}`.
+See the [`_example/`](/_example) directory for a more complex example.
diff --git a/vendor/github.com/BurntSushi/toml/decode.go b/vendor/github.com/BurntSushi/toml/decode.go
index e24f0c5d5c04..0ca1dc4fee5f 100644
--- a/vendor/github.com/BurntSushi/toml/decode.go
+++ b/vendor/github.com/BurntSushi/toml/decode.go
@@ -1,14 +1,18 @@
package toml
import (
+ "bytes"
"encoding"
+ "encoding/json"
"fmt"
"io"
"io/ioutil"
"math"
"os"
"reflect"
+ "strconv"
"strings"
+ "time"
)
// Unmarshaler is the interface implemented by objects that can unmarshal a
@@ -17,16 +21,35 @@ type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
-// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
-func Unmarshal(p []byte, v interface{}) error {
- _, err := Decode(string(p), v)
+// Unmarshal decodes the contents of data in TOML format into a pointer v.
+//
+// See [Decoder] for a description of the decoding process.
+func Unmarshal(data []byte, v interface{}) error {
+ _, err := NewDecoder(bytes.NewReader(data)).Decode(v)
return err
}
+// Decode the TOML data in to the pointer v.
+//
+// See [Decoder] for a description of the decoding process.
+func Decode(data string, v interface{}) (MetaData, error) {
+ return NewDecoder(strings.NewReader(data)).Decode(v)
+}
+
+// DecodeFile reads the contents of a file and decodes it with [Decode].
+func DecodeFile(path string, v interface{}) (MetaData, error) {
+ fp, err := os.Open(path)
+ if err != nil {
+ return MetaData{}, err
+ }
+ defer fp.Close()
+ return NewDecoder(fp).Decode(v)
+}
+
// Primitive is a TOML value that hasn't been decoded into a Go value.
//
// This type can be used for any value, which will cause decoding to be delayed.
-// You can use the PrimitiveDecode() function to "manually" decode these values.
+// You can use [PrimitiveDecode] to "manually" decode these values.
//
// NOTE: The underlying representation of a `Primitive` value is subject to
// change. Do not rely on it.
@@ -42,36 +65,22 @@ type Primitive struct {
// The significand precision for float32 and float64 is 24 and 53 bits; this is
// the range a natural number can be stored in a float without loss of data.
const (
- maxSafeFloat32Int = 16777215 // 2^24-1
- maxSafeFloat64Int = 9007199254740991 // 2^53-1
+ maxSafeFloat32Int = 16777215 // 2^24-1
+ maxSafeFloat64Int = int64(9007199254740991) // 2^53-1
)
-// PrimitiveDecode is just like the other `Decode*` functions, except it
-// decodes a TOML value that has already been parsed. Valid primitive values
-// can *only* be obtained from values filled by the decoder functions,
-// including this method. (i.e., `v` may contain more `Primitive`
-// values.)
-//
-// Meta data for primitive values is included in the meta data returned by
-// the `Decode*` functions with one exception: keys returned by the Undecoded
-// method will only reflect keys that were decoded. Namely, any keys hidden
-// behind a Primitive will be considered undecoded. Executing this method will
-// update the undecoded keys in the meta data. (See the example.)
-func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
- md.context = primValue.context
- defer func() { md.context = nil }()
- return md.unify(primValue.undecoded, rvalue(v))
-}
-
// Decoder decodes TOML data.
//
-// TOML tables correspond to Go structs or maps (dealer's choice – they can be
-// used interchangeably).
+// TOML tables correspond to Go structs or maps; they can be used
+// interchangeably, but structs offer better type safety.
//
// TOML table arrays correspond to either a slice of structs or a slice of maps.
//
-// TOML datetimes correspond to Go time.Time values. Local datetimes are parsed
-// in the local timezone.
+// TOML datetimes correspond to [time.Time]. Local datetimes are parsed in the
+// local timezone.
+//
+// [time.Duration] types are treated as nanoseconds if the TOML value is an
+// integer, or they're parsed with time.ParseDuration() if they're strings.
//
// All other TOML types (float, string, int, bool and array) correspond to the
// obvious Go types.
@@ -80,9 +89,9 @@ func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
// interface, in which case any primitive TOML value (floats, strings, integers,
// booleans, datetimes) will be converted to a []byte and given to the value's
// UnmarshalText method. See the Unmarshaler example for a demonstration with
-// time duration strings.
+// email addresses.
//
-// Key mapping
+// ### Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go struct.
// The special `toml` struct tag can be used to map TOML keys to struct fields
@@ -109,6 +118,7 @@ func NewDecoder(r io.Reader) *Decoder {
var (
unmarshalToml = reflect.TypeOf((*Unmarshaler)(nil)).Elem()
unmarshalText = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
+ primitiveType = reflect.TypeOf((*Primitive)(nil)).Elem()
)
// Decode TOML data in to the pointer `v`.
@@ -120,10 +130,10 @@ func (dec *Decoder) Decode(v interface{}) (MetaData, error) {
s = "%v"
}
- return MetaData{}, e("cannot decode to non-pointer "+s, reflect.TypeOf(v))
+ return MetaData{}, fmt.Errorf("toml: cannot decode to non-pointer "+s, reflect.TypeOf(v))
}
if rv.IsNil() {
- return MetaData{}, e("cannot decode to nil value of %q", reflect.TypeOf(v))
+ return MetaData{}, fmt.Errorf("toml: cannot decode to nil value of %q", reflect.TypeOf(v))
}
// Check if this is a supported type: struct, map, interface{}, or something
@@ -133,7 +143,7 @@ func (dec *Decoder) Decode(v interface{}) (MetaData, error) {
if rv.Kind() != reflect.Struct && rv.Kind() != reflect.Map &&
!(rv.Kind() == reflect.Interface && rv.NumMethod() == 0) &&
!rt.Implements(unmarshalToml) && !rt.Implements(unmarshalText) {
- return MetaData{}, e("cannot decode to type %s", rt)
+ return MetaData{}, fmt.Errorf("toml: cannot decode to type %s", rt)
}
// TODO: parser should read from io.Reader? Or at the very least, make it
@@ -150,30 +160,29 @@ func (dec *Decoder) Decode(v interface{}) (MetaData, error) {
md := MetaData{
mapping: p.mapping,
- types: p.types,
+ keyInfo: p.keyInfo,
keys: p.ordered,
decoded: make(map[string]struct{}, len(p.ordered)),
context: nil,
+ data: data,
}
return md, md.unify(p.mapping, rv)
}
-// Decode the TOML data in to the pointer v.
+// PrimitiveDecode is just like the other Decode* functions, except it decodes a
+// TOML value that has already been parsed. Valid primitive values can *only* be
+// obtained from values filled by the decoder functions, including this method.
+// (i.e., v may contain more [Primitive] values.)
//
-// See the documentation on Decoder for a description of the decoding process.
-func Decode(data string, v interface{}) (MetaData, error) {
- return NewDecoder(strings.NewReader(data)).Decode(v)
-}
-
-// DecodeFile is just like Decode, except it will automatically read the
-// contents of the file at path and decode it for you.
-func DecodeFile(path string, v interface{}) (MetaData, error) {
- fp, err := os.Open(path)
- if err != nil {
- return MetaData{}, err
- }
- defer fp.Close()
- return NewDecoder(fp).Decode(v)
+// Meta data for primitive values is included in the meta data returned by the
+// Decode* functions with one exception: keys returned by the Undecoded method
+// will only reflect keys that were decoded. Namely, any keys hidden behind a
+// Primitive will be considered undecoded. Executing this method will update the
+// undecoded keys in the meta data. (See the example.)
+func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
+ md.context = primValue.context
+ defer func() { md.context = nil }()
+ return md.unify(primValue.undecoded, rvalue(v))
}
// unify performs a sort of type unification based on the structure of `rv`,
@@ -184,7 +193,7 @@ func DecodeFile(path string, v interface{}) (MetaData, error) {
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
// TODO: #76 would make this superfluous after implemented.
- if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
+ if rv.Type() == primitiveType {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
@@ -196,17 +205,14 @@ func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
return nil
}
- // Special case. Unmarshaler Interface support.
- if rv.CanAddr() {
- if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
- return v.UnmarshalTOML(data)
- }
+ rvi := rv.Interface()
+ if v, ok := rvi.(Unmarshaler); ok {
+ return v.UnmarshalTOML(data)
}
-
- // Special case. Look for a value satisfying the TextUnmarshaler interface.
- if v, ok := rv.Interface().(encoding.TextUnmarshaler); ok {
+ if v, ok := rvi.(encoding.TextUnmarshaler); ok {
return md.unifyText(data, v)
}
+
// TODO:
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML hash or
@@ -217,7 +223,6 @@ func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
k := rv.Kind()
- // laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
@@ -243,15 +248,14 @@ func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
- // we only support empty interfaces.
- if rv.NumMethod() > 0 {
- return e("unsupported type %s", rv.Type())
+ if rv.NumMethod() > 0 { // Only support empty interfaces are supported.
+ return md.e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32, reflect.Float64:
return md.unifyFloat64(data, rv)
}
- return e("unsupported type %s", rv.Kind())
+ return md.e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
@@ -260,7 +264,7 @@ func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
if mapping == nil {
return nil
}
- return e("type mismatch for %s: expected table but found %T",
+ return md.e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
@@ -286,13 +290,14 @@ func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = struct{}{}
md.context = append(md.context, key)
+
err := md.unify(datum, subv)
if err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
- return e("cannot write unexported field %s.%s", rv.Type().String(), f.name)
+ return md.e("cannot write unexported field %s.%s", rv.Type().String(), f.name)
}
}
}
@@ -300,10 +305,10 @@ func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
- if k := rv.Type().Key().Kind(); k != reflect.String {
- return fmt.Errorf(
- "toml: cannot decode to a map with non-string key type (%s in %q)",
- k, rv.Type())
+ keyType := rv.Type().Key().Kind()
+ if keyType != reflect.String && keyType != reflect.Interface {
+ return fmt.Errorf("toml: cannot decode to a map with non-string key type (%s in %q)",
+ keyType, rv.Type())
}
tmap, ok := mapping.(map[string]interface{})
@@ -321,13 +326,22 @@ func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
md.context = append(md.context, k)
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
- if err := md.unify(v, rvval); err != nil {
+
+ err := md.unify(v, indirect(rvval))
+ if err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey := indirect(reflect.New(rv.Type().Key()))
- rvkey.SetString(k)
+
+ switch keyType {
+ case reflect.Interface:
+ rvkey.Set(reflect.ValueOf(k))
+ case reflect.String:
+ rvkey.SetString(k)
+ }
+
rv.SetMapIndex(rvkey, rvval)
}
return nil
@@ -342,7 +356,7 @@ func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
return md.badtype("slice", data)
}
if l := datav.Len(); l != rv.Len() {
- return e("expected array length %d; got TOML array of length %d", rv.Len(), l)
+ return md.e("expected array length %d; got TOML array of length %d", rv.Len(), l)
}
return md.unifySliceArray(datav, rv)
}
@@ -375,6 +389,18 @@ func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
+ _, ok := rv.Interface().(json.Number)
+ if ok {
+ if i, ok := data.(int64); ok {
+ rv.SetString(strconv.FormatInt(i, 10))
+ } else if f, ok := data.(float64); ok {
+ rv.SetString(strconv.FormatFloat(f, 'f', -1, 64))
+ } else {
+ return md.badtype("string", data)
+ }
+ return nil
+ }
+
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
@@ -383,11 +409,13 @@ func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
+ rvk := rv.Kind()
+
if num, ok := data.(float64); ok {
- switch rv.Kind() {
+ switch rvk {
case reflect.Float32:
if num < -math.MaxFloat32 || num > math.MaxFloat32 {
- return e("value %f is out of range for float32", num)
+ return md.parseErr(errParseRange{i: num, size: rvk.String()})
}
fallthrough
case reflect.Float64:
@@ -399,20 +427,11 @@ func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
}
if num, ok := data.(int64); ok {
- switch rv.Kind() {
- case reflect.Float32:
- if num < -maxSafeFloat32Int || num > maxSafeFloat32Int {
- return e("value %d is out of range for float32", num)
- }
- fallthrough
- case reflect.Float64:
- if num < -maxSafeFloat64Int || num > maxSafeFloat64Int {
- return e("value %d is out of range for float64", num)
- }
- rv.SetFloat(float64(num))
- default:
- panic("bug")
+ if (rvk == reflect.Float32 && (num < -maxSafeFloat32Int || num > maxSafeFloat32Int)) ||
+ (rvk == reflect.Float64 && (num < -maxSafeFloat64Int || num > maxSafeFloat64Int)) {
+ return md.parseErr(errParseRange{i: num, size: rvk.String()})
}
+ rv.SetFloat(float64(num))
return nil
}
@@ -420,50 +439,46 @@ func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
- if num, ok := data.(int64); ok {
- if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
- switch rv.Kind() {
- case reflect.Int, reflect.Int64:
- // No bounds checking necessary.
- case reflect.Int8:
- if num < math.MinInt8 || num > math.MaxInt8 {
- return e("value %d is out of range for int8", num)
- }
- case reflect.Int16:
- if num < math.MinInt16 || num > math.MaxInt16 {
- return e("value %d is out of range for int16", num)
- }
- case reflect.Int32:
- if num < math.MinInt32 || num > math.MaxInt32 {
- return e("value %d is out of range for int32", num)
- }
+ _, ok := rv.Interface().(time.Duration)
+ if ok {
+ // Parse as string duration, and fall back to regular integer parsing
+ // (as nanosecond) if this is not a string.
+ if s, ok := data.(string); ok {
+ dur, err := time.ParseDuration(s)
+ if err != nil {
+ return md.parseErr(errParseDuration{s})
}
- rv.SetInt(num)
- } else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
- unum := uint64(num)
- switch rv.Kind() {
- case reflect.Uint, reflect.Uint64:
- // No bounds checking necessary.
- case reflect.Uint8:
- if num < 0 || unum > math.MaxUint8 {
- return e("value %d is out of range for uint8", num)
- }
- case reflect.Uint16:
- if num < 0 || unum > math.MaxUint16 {
- return e("value %d is out of range for uint16", num)
- }
- case reflect.Uint32:
- if num < 0 || unum > math.MaxUint32 {
- return e("value %d is out of range for uint32", num)
- }
- }
- rv.SetUint(unum)
- } else {
- panic("unreachable")
+ rv.SetInt(int64(dur))
+ return nil
}
- return nil
}
- return md.badtype("integer", data)
+
+ num, ok := data.(int64)
+ if !ok {
+ return md.badtype("integer", data)
+ }
+
+ rvk := rv.Kind()
+ switch {
+ case rvk >= reflect.Int && rvk <= reflect.Int64:
+ if (rvk == reflect.Int8 && (num < math.MinInt8 || num > math.MaxInt8)) ||
+ (rvk == reflect.Int16 && (num < math.MinInt16 || num > math.MaxInt16)) ||
+ (rvk == reflect.Int32 && (num < math.MinInt32 || num > math.MaxInt32)) {
+ return md.parseErr(errParseRange{i: num, size: rvk.String()})
+ }
+ rv.SetInt(num)
+ case rvk >= reflect.Uint && rvk <= reflect.Uint64:
+ unum := uint64(num)
+ if rvk == reflect.Uint8 && (num < 0 || unum > math.MaxUint8) ||
+ rvk == reflect.Uint16 && (num < 0 || unum > math.MaxUint16) ||
+ rvk == reflect.Uint32 && (num < 0 || unum > math.MaxUint32) {
+ return md.parseErr(errParseRange{i: num, size: rvk.String()})
+ }
+ rv.SetUint(unum)
+ default:
+ panic("unreachable")
+ }
+ return nil
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
@@ -488,7 +503,7 @@ func (md *MetaData) unifyText(data interface{}, v encoding.TextUnmarshaler) erro
return err
}
s = string(text)
- case TextMarshaler:
+ case encoding.TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
@@ -514,7 +529,30 @@ func (md *MetaData) unifyText(data interface{}, v encoding.TextUnmarshaler) erro
}
func (md *MetaData) badtype(dst string, data interface{}) error {
- return e("incompatible types: TOML key %q has type %T; destination has type %s", md.context, data, dst)
+ return md.e("incompatible types: TOML value has type %T; destination has type %s", data, dst)
+}
+
+func (md *MetaData) parseErr(err error) error {
+ k := md.context.String()
+ return ParseError{
+ LastKey: k,
+ Position: md.keyInfo[k].pos,
+ Line: md.keyInfo[k].pos.Line,
+ err: err,
+ input: string(md.data),
+ }
+}
+
+func (md *MetaData) e(format string, args ...interface{}) error {
+ f := "toml: "
+ if len(md.context) > 0 {
+ f = fmt.Sprintf("toml: (last key %q): ", md.context)
+ p := md.keyInfo[md.context.String()].pos
+ if p.Line > 0 {
+ f = fmt.Sprintf("toml: line %d (last key %q): ", p.Line, md.context)
+ }
+ }
+ return fmt.Errorf(f+format, args...)
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
@@ -533,7 +571,11 @@ func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
- if _, ok := pv.Interface().(encoding.TextUnmarshaler); ok {
+ pvi := pv.Interface()
+ if _, ok := pvi.(encoding.TextUnmarshaler); ok {
+ return pv
+ }
+ if _, ok := pvi.(Unmarshaler); ok {
return pv
}
}
@@ -549,12 +591,12 @@ func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
- if _, ok := rv.Interface().(encoding.TextUnmarshaler); ok {
+ rvi := rv.Interface()
+ if _, ok := rvi.(encoding.TextUnmarshaler); ok {
+ return true
+ }
+ if _, ok := rvi.(Unmarshaler); ok {
return true
}
return false
}
-
-func e(format string, args ...interface{}) error {
- return fmt.Errorf("toml: "+format, args...)
-}
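The decode.go hunks above introduce two new destination types: `time.Duration` (decoded from a duration string such as `"1m30s"`, with a fallback to plain integer nanoseconds) and `json.Number` (populated from integer or float values without losing the literal). A minimal, hedged sketch of how a consumer would use this, via the library's `toml.Decode` entry point; the `config` struct and field names are illustrative only:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"

	"github.com/BurntSushi/toml"
)

type config struct {
	Timeout time.Duration `toml:"timeout"` // decoded from a string like "1m30s"
	Limit   json.Number   `toml:"limit"`   // keeps the literal number as-is
}

const doc = `
timeout = "1m30s"
limit   = 42
`

func main() {
	var c config
	if _, err := toml.Decode(doc, &c); err != nil {
		panic(err)
	}
	fmt.Println(c.Timeout, c.Limit) // 1m30s 42
}
```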
diff --git a/vendor/github.com/BurntSushi/toml/decode_go116.go b/vendor/github.com/BurntSushi/toml/decode_go116.go
index eddfb641b862..086d0b68664f 100644
--- a/vendor/github.com/BurntSushi/toml/decode_go116.go
+++ b/vendor/github.com/BurntSushi/toml/decode_go116.go
@@ -7,8 +7,8 @@ import (
"io/fs"
)
-// DecodeFS is just like Decode, except it will automatically read the contents
-// of the file at `path` from a fs.FS instance.
+// DecodeFS reads the contents of a file from [fs.FS] and decodes it with
+// [Decode].
func DecodeFS(fsys fs.FS, path string, v interface{}) (MetaData, error) {
fp, err := fsys.Open(path)
if err != nil {
diff --git a/vendor/github.com/BurntSushi/toml/doc.go b/vendor/github.com/BurntSushi/toml/doc.go
index 099c4a77d2d3..81a7c0fe9f6d 100644
--- a/vendor/github.com/BurntSushi/toml/doc.go
+++ b/vendor/github.com/BurntSushi/toml/doc.go
@@ -1,13 +1,11 @@
-/*
-Package toml implements decoding and encoding of TOML files.
-
-This package supports TOML v1.0.0, as listed on https://toml.io
-
-There is also support for delaying decoding with the Primitive type, and
-querying the set of keys in a TOML document with the MetaData type.
-
-The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
-and can be used to verify if TOML document is valid. It can also be used to
-print the type of each key.
-*/
+// Package toml implements decoding and encoding of TOML files.
+//
+// This package supports TOML v1.0.0, as specified at https://toml.io
+//
+// There is also support for delaying decoding with the Primitive type, and
+// querying the set of keys in a TOML document with the MetaData type.
+//
+// The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
+// and can be used to verify if TOML document is valid. It can also be used to
+// print the type of each key.
package toml
diff --git a/vendor/github.com/BurntSushi/toml/encode.go b/vendor/github.com/BurntSushi/toml/encode.go
index dee4e6d31961..930e1d521ac6 100644
--- a/vendor/github.com/BurntSushi/toml/encode.go
+++ b/vendor/github.com/BurntSushi/toml/encode.go
@@ -3,6 +3,7 @@ package toml
import (
"bufio"
"encoding"
+ "encoding/json"
"errors"
"fmt"
"io"
@@ -63,6 +64,12 @@ var dblQuotedReplacer = strings.NewReplacer(
"\x7f", `\u007f`,
)
+var (
+ marshalToml = reflect.TypeOf((*Marshaler)(nil)).Elem()
+ marshalText = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
+ timeType = reflect.TypeOf((*time.Time)(nil)).Elem()
+)
+
// Marshaler is the interface implemented by types that can marshal themselves
// into valid TOML.
type Marshaler interface {
@@ -72,9 +79,12 @@ type Marshaler interface {
// Encoder encodes a Go to a TOML document.
//
// The mapping between Go values and TOML values should be precisely the same as
-// for the Decode* functions.
+// for [Decode].
//
-// The toml.Marshaler and encoder.TextMarshaler interfaces are supported to
+// time.Time is encoded as a RFC 3339 string, and time.Duration as its string
+// representation.
+//
+// The [Marshaler] and [encoding.TextMarshaler] interfaces are supported to
// encoding the value as custom TOML.
//
// If you want to write arbitrary binary data then you will need to use
@@ -85,6 +95,17 @@ type Marshaler interface {
//
// Go maps will be sorted alphabetically by key for deterministic output.
//
+// The toml struct tag can be used to provide the key name; if omitted the
+// struct field name will be used. If the "omitempty" option is present the
+// following value will be skipped:
+//
+// - arrays, slices, maps, and string with len of 0
+// - struct with all zero values
+// - bool false
+//
+// If omitzero is given all int and float types with a value of 0 will be
+// skipped.
+//
// Encoding Go values without a corresponding TOML representation will return an
// error. Examples of this includes maps with non-string keys, slices with nil
// elements, embedded non-struct types, and nested slices containing maps or
@@ -109,7 +130,7 @@ func NewEncoder(w io.Writer) *Encoder {
}
}
-// Encode writes a TOML representation of the Go value to the Encoder's writer.
+// Encode writes a TOML representation of the Go value to the [Encoder]'s writer.
//
// An error is returned if the value given cannot be encoded to a valid TOML
// document.
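The expanded `Encoder` documentation above spells out the `omitempty` and `omitzero` struct tag options. A short, hedged sketch of that behaviour, assuming the library's `toml.NewEncoder` API; the `server` type and its fields are illustrative only:

```go
package main

import (
	"os"

	"github.com/BurntSushi/toml"
)

type server struct {
	Host string   `toml:"host"`
	Port int      `toml:"port,omitzero"`  // skipped while Port is 0
	Tags []string `toml:"tags,omitempty"` // skipped while len(Tags) == 0
}

func main() {
	// Only `host = "example.org"` is written: Port is zero and Tags is empty.
	_ = toml.NewEncoder(os.Stdout).Encode(server{Host: "example.org"})
}
```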
@@ -136,18 +157,15 @@ func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
- // Special case: time needs to be in ISO8601 format.
- //
- // Special case: if we can marshal the type to text, then we used that. This
- // prevents the encoder for handling these types as generic structs (or
- // whatever the underlying type of a TextMarshaler is).
- switch t := rv.Interface().(type) {
- case time.Time, encoding.TextMarshaler, Marshaler:
+ // If we can marshal the type to text, then we use that. This prevents the
+ // encoder for handling these types as generic structs (or whatever the
+ // underlying type of a TextMarshaler is).
+ switch {
+ case isMarshaler(rv):
enc.writeKeyValue(key, rv, false)
return
- // TODO: #76 would make this superfluous after implemented.
- case Primitive:
- enc.encode(key, reflect.ValueOf(t.undecoded))
+ case rv.Type() == primitiveType: // TODO: #76 would make this superfluous after implemented.
+ enc.encode(key, reflect.ValueOf(rv.Interface().(Primitive).undecoded))
return
}
@@ -212,18 +230,44 @@ func (enc *Encoder) eElement(rv reflect.Value) {
if err != nil {
encPanic(err)
}
- enc.writeQuoted(string(s))
+ if s == nil {
+ encPanic(errors.New("MarshalTOML returned nil and no error"))
+ }
+ enc.w.Write(s)
return
case encoding.TextMarshaler:
s, err := v.MarshalText()
if err != nil {
encPanic(err)
}
+ if s == nil {
+ encPanic(errors.New("MarshalText returned nil and no error"))
+ }
enc.writeQuoted(string(s))
return
+ case time.Duration:
+ enc.writeQuoted(v.String())
+ return
+ case json.Number:
+ n, _ := rv.Interface().(json.Number)
+
+ if n == "" { /// Useful zero value.
+ enc.w.WriteByte('0')
+ return
+ } else if v, err := n.Int64(); err == nil {
+ enc.eElement(reflect.ValueOf(v))
+ return
+ } else if v, err := n.Float64(); err == nil {
+ enc.eElement(reflect.ValueOf(v))
+ return
+ }
+ encPanic(fmt.Errorf("unable to convert %q to int64 or float64", n))
}
switch rv.Kind() {
+ case reflect.Ptr:
+ enc.eElement(rv.Elem())
+ return
case reflect.String:
enc.writeQuoted(rv.String())
case reflect.Bool:
@@ -259,7 +303,7 @@ func (enc *Encoder) eElement(rv reflect.Value) {
case reflect.Interface:
enc.eElement(rv.Elem())
default:
- encPanic(fmt.Errorf("unexpected primitive type: %T", rv.Interface()))
+ encPanic(fmt.Errorf("unexpected type: %T", rv.Interface()))
}
}
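With the `eElement` changes above, `time.Duration` values are now written as their quoted `String()` form (and `json.Number` values as bare numbers), which round-trips with the decoder changes earlier in this diff. A hedged sketch, not part of the vendored file; the anonymous struct is illustrative only:

```go
package main

import (
	"os"
	"time"

	"github.com/BurntSushi/toml"
)

func main() {
	v := struct {
		Interval time.Duration `toml:"interval"`
	}{Interval: 90 * time.Second}

	// Prints: interval = "1m30s"
	_ = toml.NewEncoder(os.Stdout).Encode(v)
}
```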
@@ -280,7 +324,7 @@ func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
- elem := rv.Index(i)
+ elem := eindirect(rv.Index(i))
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
@@ -294,7 +338,7 @@ func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
- trv := rv.Index(i)
+ trv := eindirect(rv.Index(i))
if isNil(trv) {
continue
}
@@ -319,7 +363,7 @@ func (enc *Encoder) eTable(key Key, rv reflect.Value) {
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value, inline bool) {
- switch rv := eindirect(rv); rv.Kind() {
+ switch rv.Kind() {
case reflect.Map:
enc.eMap(key, rv, inline)
case reflect.Struct:
@@ -341,7 +385,7 @@ func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
- if typeIsTable(tomlTypeOfGo(rv.MapIndex(mapKey))) {
+ if typeIsTable(tomlTypeOfGo(eindirect(rv.MapIndex(mapKey)))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
@@ -351,7 +395,7 @@ func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
var writeMapKeys = func(mapKeys []string, trailC bool) {
sort.Strings(mapKeys)
for i, mapKey := range mapKeys {
- val := rv.MapIndex(reflect.ValueOf(mapKey))
+ val := eindirect(rv.MapIndex(reflect.ValueOf(mapKey)))
if isNil(val) {
continue
}
@@ -379,6 +423,13 @@ func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
const is32Bit = (32 << (^uint(0) >> 63)) == 32
+func pointerTo(t reflect.Type) reflect.Type {
+ if t.Kind() == reflect.Ptr {
+ return pointerTo(t.Elem())
+ }
+ return t
+}
+
func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table then all keys under it will be in that
@@ -395,31 +446,25 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
- if f.PkgPath != "" && !f.Anonymous { /// Skip unexported fields.
+ isEmbed := f.Anonymous && pointerTo(f.Type).Kind() == reflect.Struct
+ if f.PkgPath != "" && !isEmbed { /// Skip unexported fields.
+ continue
+ }
+ opts := getOptions(f.Tag)
+ if opts.skip {
continue
}
- frv := rv.Field(i)
+ frv := eindirect(rv.Field(i))
// Treat anonymous struct fields with tag names as though they are
// not anonymous, like encoding/json does.
//
// Non-struct anonymous fields use the normal encoding logic.
- if f.Anonymous {
- t := f.Type
- switch t.Kind() {
- case reflect.Struct:
- if getOptions(f.Tag).name == "" {
- addFields(t, frv, append(start, f.Index...))
- continue
- }
- case reflect.Ptr:
- if t.Elem().Kind() == reflect.Struct && getOptions(f.Tag).name == "" {
- if !frv.IsNil() {
- addFields(t.Elem(), frv.Elem(), append(start, f.Index...))
- }
- continue
- }
+ if isEmbed {
+ if getOptions(f.Tag).name == "" && frv.Kind() == reflect.Struct {
+ addFields(frv.Type(), frv, append(start, f.Index...))
+ continue
}
}
@@ -445,7 +490,7 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
writeFields := func(fields [][]int) {
for _, fieldIndex := range fields {
fieldType := rt.FieldByIndex(fieldIndex)
- fieldVal := rv.FieldByIndex(fieldIndex)
+ fieldVal := eindirect(rv.FieldByIndex(fieldIndex))
if isNil(fieldVal) { /// Don't write anything for nil fields.
continue
@@ -459,7 +504,8 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
if opts.name != "" {
keyName = opts.name
}
- if opts.omitempty && isEmpty(fieldVal) {
+
+ if opts.omitempty && enc.isEmpty(fieldVal) {
continue
}
if opts.omitzero && isZero(fieldVal) {
@@ -498,6 +544,21 @@ func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
+
+ if rv.Kind() == reflect.Struct {
+ if rv.Type() == timeType {
+ return tomlDatetime
+ }
+ if isMarshaler(rv) {
+ return tomlString
+ }
+ return tomlHash
+ }
+
+ if isMarshaler(rv) {
+ return tomlString
+ }
+
switch rv.Kind() {
case reflect.Bool:
return tomlBool
@@ -509,7 +570,7 @@ func tomlTypeOfGo(rv reflect.Value) tomlType {
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
- if typeEqual(tomlHash, tomlArrayType(rv)) {
+ if isTableArray(rv) {
return tomlArrayHash
}
return tomlArray
@@ -519,67 +580,35 @@ func tomlTypeOfGo(rv reflect.Value) tomlType {
return tomlString
case reflect.Map:
return tomlHash
- case reflect.Struct:
- if _, ok := rv.Interface().(time.Time); ok {
- return tomlDatetime
- }
- if isMarshaler(rv) {
- return tomlString
- }
- return tomlHash
default:
- if isMarshaler(rv) {
- return tomlString
- }
-
encPanic(errors.New("unsupported type: " + rv.Kind().String()))
panic("unreachable")
}
}
func isMarshaler(rv reflect.Value) bool {
- switch rv.Interface().(type) {
- case encoding.TextMarshaler:
- return true
- case Marshaler:
- return true
- }
-
- // Someone used a pointer receiver: we can make it work for pointer values.
- if rv.CanAddr() {
- if _, ok := rv.Addr().Interface().(encoding.TextMarshaler); ok {
- return true
- }
- if _, ok := rv.Addr().Interface().(Marshaler); ok {
- return true
- }
- }
- return false
+ return rv.Type().Implements(marshalText) || rv.Type().Implements(marshalToml)
}
-// tomlArrayType returns the element type of a TOML array. The type returned
-// may be nil if it cannot be determined (e.g., a nil slice or a zero length
-// slize). This function may also panic if it finds a type that cannot be
-// expressed in TOML (such as nil elements, heterogeneous arrays or directly
-// nested arrays of tables).
-func tomlArrayType(rv reflect.Value) tomlType {
- if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
- return nil
+// isTableArray reports if all entries in the array or slice are a table.
+func isTableArray(arr reflect.Value) bool {
+ if isNil(arr) || !arr.IsValid() || arr.Len() == 0 {
+ return false
}
- /// Don't allow nil.
- rvlen := rv.Len()
- for i := 1; i < rvlen; i++ {
- if tomlTypeOfGo(rv.Index(i)) == nil {
+ ret := true
+ for i := 0; i < arr.Len(); i++ {
+ tt := tomlTypeOfGo(eindirect(arr.Index(i)))
+ // Don't allow nil.
+ if tt == nil {
encPanic(errArrayNilElement)
}
- }
- firstType := tomlTypeOfGo(rv.Index(0))
- if firstType == nil {
- encPanic(errArrayNilElement)
+ if ret && !typeEqual(tomlHash, tt) {
+ ret = false
+ }
}
- return firstType
+ return ret
}
type tagOptions struct {
@@ -620,10 +649,26 @@ func isZero(rv reflect.Value) bool {
return false
}
-func isEmpty(rv reflect.Value) bool {
+func (enc *Encoder) isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
+ case reflect.Struct:
+ if rv.Type().Comparable() {
+ return reflect.Zero(rv.Type()).Interface() == rv.Interface()
+ }
+ // Need to also check if all the fields are empty, otherwise something
+ // like this with uncomparable types will always return true:
+ //
+ // type a struct{ field b }
+ // type b struct{ s []string }
+ // s := a{field: b{s: []string{"AAA"}}}
+ for i := 0; i < rv.NumField(); i++ {
+ if !enc.isEmpty(rv.Field(i)) {
+ return false
+ }
+ }
+ return true
case reflect.Bool:
return !rv.Bool()
}
@@ -638,16 +683,15 @@ func (enc *Encoder) newline() {
// Write a key/value pair:
//
-// key = <value>
+// key = <value>
//
// This is also used for "k = v" in inline tables; so something like this will
// be written in three calls:
//
-// ┌────────────────────┐
-// │ ┌───┐ ┌─────┐│
-// v v v v vv
-// key = {k = v, k2 = v2}
-//
+// ┌───────────────────┐
+// │ ┌───┐ ┌────┐│
+// v v v v vv
+// key = {k = 1, k2 = 2}
func (enc *Encoder) writeKeyValue(key Key, val reflect.Value, inline bool) {
if len(key) == 0 {
encPanic(errNoKey)
@@ -675,13 +719,25 @@ func encPanic(err error) {
panic(tomlEncodeError{err})
}
+// Resolve any level of pointers to the actual value (e.g. **string → string).
func eindirect(v reflect.Value) reflect.Value {
- switch v.Kind() {
- case reflect.Ptr, reflect.Interface:
- return eindirect(v.Elem())
- default:
+ if v.Kind() != reflect.Ptr && v.Kind() != reflect.Interface {
+ if isMarshaler(v) {
+ return v
+ }
+ if v.CanAddr() { /// Special case for marshalers; see #358.
+ if pv := v.Addr(); isMarshaler(pv) {
+ return pv
+ }
+ }
return v
}
+
+ if v.IsNil() {
+ return v
+ }
+
+ return eindirect(v.Elem())
}
func isNil(rv reflect.Value) bool {
diff --git a/vendor/github.com/BurntSushi/toml/error.go b/vendor/github.com/BurntSushi/toml/error.go
index 36edc46554ed..f4f390e647fd 100644
--- a/vendor/github.com/BurntSushi/toml/error.go
+++ b/vendor/github.com/BurntSushi/toml/error.go
@@ -5,57 +5,60 @@ import (
"strings"
)
-// ParseError is returned when there is an error parsing the TOML syntax.
-//
-// For example invalid syntax, duplicate keys, etc.
+// ParseError is returned when there is an error parsing the TOML syntax such as
+// invalid syntax, duplicate keys, etc.
//
// In addition to the error message itself, you can also print detailed location
-// information with context by using ErrorWithLocation():
+// information with context by using [ErrorWithPosition]:
//
-// toml: error: Key 'fruit' was already created and cannot be used as an array.
+// toml: error: Key 'fruit' was already created and cannot be used as an array.
//
-// At line 4, column 2-7:
+// At line 4, column 2-7:
//
-// 2 | fruit = []
-// 3 |
-// 4 | [[fruit]] # Not allowed
-// ^^^^^
+// 2 | fruit = []
+// 3 |
+// 4 | [[fruit]] # Not allowed
+// ^^^^^
//
-// Furthermore, the ErrorWithUsage() can be used to print the above with some
-// more detailed usage guidance:
+// [ErrorWithUsage] can be used to print the above with some more detailed usage
+// guidance:
//
-// toml: error: newlines not allowed within inline tables
+// toml: error: newlines not allowed within inline tables
//
-// At line 1, column 18:
+// At line 1, column 18:
//
-// 1 | x = [{ key = 42 #
-// ^
+// 1 | x = [{ key = 42 #
+// ^
//
-// Error help:
+// Error help:
//
-// Inline tables must always be on a single line:
+// Inline tables must always be on a single line:
//
-// table = {key = 42, second = 43}
+// table = {key = 42, second = 43}
//
-// It is invalid to split them over multiple lines like so:
+// It is invalid to split them over multiple lines like so:
//
-// # INVALID
-// table = {
-// key = 42,
-// second = 43
-// }
+// # INVALID
+// table = {
+// key = 42,
+// second = 43
+// }
//
-// Use regular for this:
+// Use regular for this:
//
-// [table]
-// key = 42
-// second = 43
+// [table]
+// key = 42
+// second = 43
type ParseError struct {
Message string // Short technical message.
Usage string // Longer message with usage guidance; may be blank.
Position Position // Position of the error
LastKey string // Last parsed key, may be blank.
- Line int // Line the error occurred. Deprecated: use Position.
+
+ // Line the error occurred.
+ //
+ // Deprecated: use [Position].
+ Line int
err error
input string
@@ -83,7 +86,7 @@ func (pe ParseError) Error() string {
// ErrorWithUsage() returns the error with detailed location context.
//
-// See the documentation on ParseError.
+// See the documentation on [ParseError].
func (pe ParseError) ErrorWithPosition() string {
if pe.input == "" { // Should never happen, but just in case.
return pe.Error()
@@ -124,13 +127,17 @@ func (pe ParseError) ErrorWithPosition() string {
// ErrorWithUsage() returns the error with detailed location context and usage
// guidance.
//
-// See the documentation on ParseError.
+// See the documentation on [ParseError].
func (pe ParseError) ErrorWithUsage() string {
m := pe.ErrorWithPosition()
if u, ok := pe.err.(interface{ Usage() string }); ok && u.Usage() != "" {
- return m + "Error help:\n\n " +
- strings.ReplaceAll(strings.TrimSpace(u.Usage()), "\n", "\n ") +
- "\n"
+ lines := strings.Split(strings.TrimSpace(u.Usage()), "\n")
+ for i := range lines {
+ if lines[i] != "" {
+ lines[i] = " " + lines[i]
+ }
+ }
+ return m + "Error help:\n\n" + strings.Join(lines, "\n") + "\n"
}
return m
}
@@ -160,6 +167,11 @@ type (
errLexInvalidDate struct{ v string }
errLexInlineTableNL struct{}
errLexStringNL struct{}
+ errParseRange struct {
+ i interface{} // int or float
+ size string // "int64", "uint16", etc.
+ }
+ errParseDuration struct{ d string }
)
func (e errLexControl) Error() string {
@@ -179,6 +191,10 @@ func (e errLexInlineTableNL) Error() string { return "newlines not allowed withi
func (e errLexInlineTableNL) Usage() string { return usageInlineNewline }
func (e errLexStringNL) Error() string { return "strings cannot contain newlines" }
func (e errLexStringNL) Usage() string { return usageStringNewline }
+func (e errParseRange) Error() string { return fmt.Sprintf("%v is out of range for %s", e.i, e.size) }
+func (e errParseRange) Usage() string { return usageIntOverflow }
+func (e errParseDuration) Error() string { return fmt.Sprintf("invalid duration: %q", e.d) }
+func (e errParseDuration) Usage() string { return usageDuration }
const usageEscape = `
A '\' inside a "-delimited string is interpreted as an escape character.
@@ -227,3 +243,37 @@ Instead use """ or ''' to split strings over multiple lines:
string = """Hello,
world!"""
`
+
+const usageIntOverflow = `
+This number is too large; this may be an error in the TOML, but it can also be a
+bug in the program that uses too small of an integer.
+
+The maximum and minimum values are:
+
+ size │ lowest │ highest
+ ───────┼────────────────┼──────────
+ int8 │ -128 │ 127
+ int16 │ -32,768 │ 32,767
+ int32 │ -2,147,483,648 │ 2,147,483,647
+ int64 │ -9.2 × 10¹⁷ │ 9.2 × 10¹⁷
+ uint8 │ 0 │ 255
+ uint16 │ 0 │ 65535
+ uint32 │ 0 │ 4294967295
+ uint64 │ 0 │ 1.8 × 10¹⁸
+
+int refers to int32 on 32-bit systems and int64 on 64-bit systems.
+`
+
+const usageDuration = `
+A duration must be as "number<unit>", without any spaces. Valid units are:
+
+ ns nanoseconds (billionth of a second)
+ us, µs microseconds (millionth of a second)
+ ms milliseconds (thousands of a second)
+ s seconds
+ m minutes
+ h hours
+
+You can combine multiple units; for example "5m10s" for 5 minutes and 10
+seconds.
+`
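The error.go changes above add the structured `errParseRange`/`errParseDuration` errors plus the usage help texts, so out-of-range and bad-duration values now surface as a `ParseError` with position and help output. A hedged sketch of how a caller would display that, assuming the exported `ParseError` and `ErrorWithUsage` shown above; the TOML input and struct are illustrative only:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var cfg struct {
		Retries int8 `toml:"retries"`
	}

	_, err := toml.Decode("retries = 500", &cfg) // 500 is out of range for int8
	var perr toml.ParseError
	if errors.As(err, &perr) {
		// Error() is the short message; ErrorWithUsage() adds position context
		// and the integer-range help table defined above.
		fmt.Println(perr.ErrorWithUsage())
	}
}
```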
diff --git a/vendor/github.com/BurntSushi/toml/lex.go b/vendor/github.com/BurntSushi/toml/lex.go
index 63ef20f47453..d4d70871d8d7 100644
--- a/vendor/github.com/BurntSushi/toml/lex.go
+++ b/vendor/github.com/BurntSushi/toml/lex.go
@@ -82,7 +82,7 @@ func (lx *lexer) nextItem() item {
return item
default:
lx.state = lx.state(lx)
- //fmt.Printf(" STATE %-24s current: %-10q stack: %s\n", lx.state, lx.current(), lx.stack)
+ //fmt.Printf(" STATE %-24s current: %-10s stack: %s\n", lx.state, lx.current(), lx.stack)
}
}
}
@@ -128,6 +128,11 @@ func (lx lexer) getPos() Position {
}
func (lx *lexer) emit(typ itemType) {
+ // Needed for multiline strings ending with an incomplete UTF-8 sequence.
+ if lx.start > lx.pos {
+ lx.error(errLexUTF8{lx.input[lx.pos]})
+ return
+ }
lx.items <- item{typ: typ, pos: lx.getPos(), val: lx.current()}
lx.start = lx.pos
}
@@ -711,7 +716,17 @@ func lexMultilineString(lx *lexer) stateFn {
if lx.peek() == '"' {
/// Check if we already lexed 5 's; if so we have 6 now, and
/// that's just too many man!
- if strings.HasSuffix(lx.current(), `"""""`) {
+ ///
+ /// Second check is for the edge case:
+ ///
+ /// two quotes allowed.
+ /// vv
+ /// """lol \""""""
+ /// ^^ ^^^---- closing three
+ /// escaped
+ ///
+ /// But ugly, but it works
+ if strings.HasSuffix(lx.current(), `"""""`) && !strings.HasSuffix(lx.current(), `\"""""`) {
return lx.errorf(`unexpected '""""""'`)
}
lx.backup()
@@ -756,7 +771,7 @@ func lexRawString(lx *lexer) stateFn {
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
-// a string. It assumes that the beginning "'''" has already been consumed and
+// a string. It assumes that the beginning ''' has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
r := lx.next()
@@ -802,8 +817,7 @@ func lexMultilineRawString(lx *lexer) stateFn {
// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
- // Handle the special case first:
- if isNL(lx.next()) {
+ if isNL(lx.next()) { /// \ escaping newline.
return lexMultilineString
}
lx.backup()
diff --git a/vendor/github.com/BurntSushi/toml/meta.go b/vendor/github.com/BurntSushi/toml/meta.go
index 868619fb9750..71847a04158c 100644
--- a/vendor/github.com/BurntSushi/toml/meta.go
+++ b/vendor/github.com/BurntSushi/toml/meta.go
@@ -12,10 +12,11 @@ import (
type MetaData struct {
context Key // Used only during decoding.
+ keyInfo map[string]keyInfo
mapping map[string]interface{}
- types map[string]tomlType
keys []Key
decoded map[string]struct{}
+ data []byte // Input file; for errors.
}
// IsDefined reports if the key exists in the TOML data.
@@ -50,8 +51,8 @@ func (md *MetaData) IsDefined(key ...string) bool {
// Type will return the empty string if given an empty key or a key that does
// not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
- if typ, ok := md.types[Key(key).String()]; ok {
- return typ.typeString()
+ if ki, ok := md.keyInfo[Key(key).String()]; ok {
+ return ki.tomlType.typeString()
}
return ""
}
@@ -70,7 +71,7 @@ func (md *MetaData) Keys() []Key {
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
-// This includes keys that haven't been decoded because of a Primitive value.
+// This includes keys that haven't been decoded because of a [Primitive] value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
@@ -88,7 +89,7 @@ func (md *MetaData) Undecoded() []Key {
return undecoded
}
-// Key represents any TOML key, including key groups. Use (MetaData).Keys to get
+// Key represents any TOML key, including key groups. Use [MetaData.Keys] to get
// values of this type.
type Key []string
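The meta.go changes above fold the former `types` map into a per-key `keyInfo` (type plus position), while the exported `MetaData` API (`IsDefined`, `Type`, `Keys`, `Undecoded`) is unchanged. A minimal, hedged sketch of that unchanged surface; the struct and keys are illustrative only:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct {
		Name string `toml:"name"`
	}

	md, err := toml.Decode("name = \"gardener\"\nextra = 1", &v)
	if err != nil {
		panic(err)
	}

	fmt.Println(md.Type("name")) // String
	fmt.Println(md.Undecoded())  // [extra]
}
```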
diff --git a/vendor/github.com/BurntSushi/toml/parse.go b/vendor/github.com/BurntSushi/toml/parse.go
index 8269cca17016..d2542d6f926f 100644
--- a/vendor/github.com/BurntSushi/toml/parse.go
+++ b/vendor/github.com/BurntSushi/toml/parse.go
@@ -16,12 +16,18 @@ type parser struct {
currentKey string // Base key name for everything except hashes.
pos Position // Current position in the TOML file.
- ordered []Key // List of keys in the order that they appear in the TOML data.
+ ordered []Key // List of keys in the order that they appear in the TOML data.
+
+ keyInfo map[string]keyInfo // Map keyname → info about the TOML key.
mapping map[string]interface{} // Map keyname → key value.
- types map[string]tomlType // Map keyname → TOML type.
implicits map[string]struct{} // Record implicit keys (e.g. "key.group.names").
}
+type keyInfo struct {
+ pos Position
+ tomlType tomlType
+}
+
func parse(data string) (p *parser, err error) {
defer func() {
if r := recover(); r != nil {
@@ -57,8 +63,8 @@ func parse(data string) (p *parser, err error) {
}
p = &parser{
+ keyInfo: make(map[string]keyInfo),
mapping: make(map[string]interface{}),
- types: make(map[string]tomlType),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]struct{}),
@@ -74,6 +80,15 @@ func parse(data string) (p *parser, err error) {
return p, nil
}
+func (p *parser) panicErr(it item, err error) {
+ panic(ParseError{
+ err: err,
+ Position: it.pos,
+ Line: it.pos.Len,
+ LastKey: p.current(),
+ })
+}
+
func (p *parser) panicItemf(it item, format string, v ...interface{}) {
panic(ParseError{
Message: fmt.Sprintf(format, v...),
@@ -94,7 +109,7 @@ func (p *parser) panicf(format string, v ...interface{}) {
func (p *parser) next() item {
it := p.lx.nextItem()
- //fmt.Printf("ITEM %-18s line %-3d │ %q\n", it.typ, it.line, it.val)
+ //fmt.Printf("ITEM %-18s line %-3d │ %q\n", it.typ, it.pos.Line, it.val)
if it.typ == itemError {
if it.err != nil {
panic(ParseError{
@@ -146,7 +161,7 @@ func (p *parser) topLevel(item item) {
p.assertEqual(itemTableEnd, name.typ)
p.addContext(key, false)
- p.setType("", tomlHash)
+ p.setType("", tomlHash, item.pos)
p.ordered = append(p.ordered, key)
case itemArrayTableStart: // [[ .. ]]
name := p.nextPos()
@@ -158,7 +173,7 @@ func (p *parser) topLevel(item item) {
p.assertEqual(itemArrayTableEnd, name.typ)
p.addContext(key, true)
- p.setType("", tomlArrayHash)
+ p.setType("", tomlArrayHash, item.pos)
p.ordered = append(p.ordered, key)
case itemKeyStart: // key = ..
outerContext := p.context
@@ -181,8 +196,9 @@ func (p *parser) topLevel(item item) {
}
/// Set value.
- val, typ := p.value(p.next(), false)
- p.set(p.currentKey, val, typ)
+ vItem := p.next()
+ val, typ := p.value(vItem, false)
+ p.set(p.currentKey, val, typ, vItem.pos)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
/// Remove the context we added (preserving any context from [tbl] lines).
@@ -220,7 +236,7 @@ func (p *parser) value(it item, parentIsArray bool) (interface{}, tomlType) {
case itemString:
return p.replaceEscapes(it, it.val), p.typeOfPrimitive(it)
case itemMultilineString:
- return p.replaceEscapes(it, stripFirstNewline(stripEscapedNewlines(it.val))), p.typeOfPrimitive(it)
+ return p.replaceEscapes(it, stripFirstNewline(p.stripEscapedNewlines(it.val))), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
@@ -266,7 +282,7 @@ func (p *parser) valueInteger(it item) (interface{}, tomlType) {
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
- p.panicItemf(it, "Integer '%s' is out of the range of 64-bit signed integers.", it.val)
+ p.panicErr(it, errParseRange{i: it.val, size: "int64"})
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
@@ -304,7 +320,7 @@ func (p *parser) valueFloat(it item) (interface{}, tomlType) {
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
- p.panicItemf(it, "Float '%s' is out of the range of 64-bit IEEE-754 floating-point numbers.", it.val)
+ p.panicErr(it, errParseRange{i: it.val, size: "float64"})
} else {
p.panicItemf(it, "Invalid float value: %q", it.val)
}
@@ -343,9 +359,8 @@ func (p *parser) valueDatetime(it item) (interface{}, tomlType) {
}
func (p *parser) valueArray(it item) (interface{}, tomlType) {
- p.setType(p.currentKey, tomlArray)
+ p.setType(p.currentKey, tomlArray, it.pos)
- // p.setType(p.currentKey, typ)
var (
types []tomlType
@@ -414,7 +429,7 @@ func (p *parser) valueInlineTable(it item, parentIsArray bool) (interface{}, tom
/// Set the value.
val, typ := p.value(p.next(), false)
- p.set(p.currentKey, val, typ)
+ p.set(p.currentKey, val, typ, it.pos)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
hash[p.currentKey] = val
@@ -533,9 +548,10 @@ func (p *parser) addContext(key Key, array bool) {
}
// set calls setValue and setType.
-func (p *parser) set(key string, val interface{}, typ tomlType) {
+func (p *parser) set(key string, val interface{}, typ tomlType, pos Position) {
p.setValue(key, val)
- p.setType(key, typ)
+ p.setType(key, typ, pos)
+
}
// setValue sets the given key to the given value in the current context.
@@ -599,7 +615,7 @@ func (p *parser) setValue(key string, value interface{}) {
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
-func (p *parser) setType(key string, typ tomlType) {
+func (p *parser) setType(key string, typ tomlType, pos Position) {
keyContext := make(Key, 0, len(p.context)+1)
keyContext = append(keyContext, p.context...)
if len(key) > 0 { // allow type setting for hashes
@@ -611,7 +627,7 @@ func (p *parser) setType(key string, typ tomlType) {
if len(keyContext) == 0 {
keyContext = Key{""}
}
- p.types[keyContext.String()] = typ
+ p.keyInfo[keyContext.String()] = keyInfo{tomlType: typ, pos: pos}
}
// Implicit keys need to be created when tables are implied in "a.b.c.d = 1" and
@@ -619,7 +635,7 @@ func (p *parser) setType(key string, typ tomlType) {
func (p *parser) addImplicit(key Key) { p.implicits[key.String()] = struct{}{} }
func (p *parser) removeImplicit(key Key) { delete(p.implicits, key.String()) }
func (p *parser) isImplicit(key Key) bool { _, ok := p.implicits[key.String()]; return ok }
-func (p *parser) isArray(key Key) bool { return p.types[key.String()] == tomlArray }
+func (p *parser) isArray(key Key) bool { return p.keyInfo[key.String()].tomlType == tomlArray }
func (p *parser) addImplicitContext(key Key) {
p.addImplicit(key)
p.addContext(key, false)
@@ -647,7 +663,7 @@ func stripFirstNewline(s string) string {
}
// Remove newlines inside triple-quoted strings if a line ends with "\".
-func stripEscapedNewlines(s string) string {
+func (p *parser) stripEscapedNewlines(s string) string {
split := strings.Split(s, "\n")
if len(split) < 1 {
return s
@@ -679,6 +695,10 @@ func stripEscapedNewlines(s string) string {
continue
}
+ if i == len(split)-1 {
+ p.panicf("invalid escape: '\\ '")
+ }
+
split[i] = line[:len(line)-1] // Remove \
if len(split)-1 > i {
split[i+1] = strings.TrimLeft(split[i+1], " \t\r")
@@ -706,10 +726,8 @@ func (p *parser) replaceEscapes(it item, str string) string {
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
- return ""
case ' ', '\t':
p.panicItemf(it, "invalid escape: '\\%c'", s[r])
- return ""
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
diff --git a/vendor/github.com/containerd/stargz-snapshotter/estargz/LICENSE b/vendor/github.com/containerd/stargz-snapshotter/estargz/LICENSE
new file mode 100644
index 000000000000..d64569567334
--- /dev/null
+++ b/vendor/github.com/containerd/stargz-snapshotter/estargz/LICENSE
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/containerd/stargz-snapshotter/estargz/build.go b/vendor/github.com/containerd/stargz-snapshotter/estargz/build.go
new file mode 100644
index 000000000000..b071cea51dde
--- /dev/null
+++ b/vendor/github.com/containerd/stargz-snapshotter/estargz/build.go
@@ -0,0 +1,690 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+/*
+ Copyright 2019 The Go Authors. All rights reserved.
+ Use of this source code is governed by a BSD-style
+ license that can be found in the LICENSE file.
+*/
+
+package estargz
+
+import (
+ "archive/tar"
+ "bytes"
+ "compress/gzip"
+ "context"
+ "errors"
+ "fmt"
+ "io"
+ "os"
+ "path"
+ "runtime"
+ "strings"
+ "sync"
+
+ "github.com/containerd/stargz-snapshotter/estargz/errorutil"
+ "github.com/klauspost/compress/zstd"
+ digest "github.com/opencontainers/go-digest"
+ "golang.org/x/sync/errgroup"
+)
+
+type options struct {
+ chunkSize int
+ compressionLevel int
+ prioritizedFiles []string
+ missedPrioritizedFiles *[]string
+ compression Compression
+ ctx context.Context
+ minChunkSize int
+}
+
+type Option func(o *options) error
+
+// WithChunkSize option specifies the chunk size of eStargz blob to build.
+func WithChunkSize(chunkSize int) Option {
+ return func(o *options) error {
+ o.chunkSize = chunkSize
+ return nil
+ }
+}
+
+// WithCompressionLevel option specifies the gzip compression level.
+// The default is gzip.BestCompression.
+// This option will be ignored if WithCompression option is used.
+// See also: https://godoc.org/compress/gzip#pkg-constants
+func WithCompressionLevel(level int) Option {
+ return func(o *options) error {
+ o.compressionLevel = level
+ return nil
+ }
+}
+
+// WithPrioritizedFiles option specifies the list of prioritized files.
+// These files must be complete paths that are absolute or relative to "/"
+// For example, all of "foo/bar", "/foo/bar", "./foo/bar" and "../foo/bar"
+// are treated as "/foo/bar".
+func WithPrioritizedFiles(files []string) Option {
+ return func(o *options) error {
+ o.prioritizedFiles = files
+ return nil
+ }
+}
+
+// WithAllowPrioritizeNotFound makes Build continue the execution even if some
+// of prioritized files specified by WithPrioritizedFiles option aren't found
+// in the input tar. Instead, this records all missed file names to the passed
+// slice.
+func WithAllowPrioritizeNotFound(missedFiles *[]string) Option {
+ return func(o *options) error {
+ if missedFiles == nil {
+ return fmt.Errorf("WithAllowPrioritizeNotFound: slice must be passed")
+ }
+ o.missedPrioritizedFiles = missedFiles
+ return nil
+ }
+}
+
+// WithCompression specifies compression algorithm to be used.
+// Default is gzip.
+func WithCompression(compression Compression) Option {
+ return func(o *options) error {
+ o.compression = compression
+ return nil
+ }
+}
+
+// WithContext specifies a context that can be used for clean canceleration.
+func WithContext(ctx context.Context) Option {
+ return func(o *options) error {
+ o.ctx = ctx
+ return nil
+ }
+}
+
+// WithMinChunkSize option specifies the minimal number of bytes of data
+// must be written in one gzip stream.
+// By increasing this number, one gzip stream can contain multiple files
+// and it hopefully leads to smaller result blob.
+// NOTE: This adds a TOC property that old reader doesn't understand.
+func WithMinChunkSize(minChunkSize int) Option {
+ return func(o *options) error {
+ o.minChunkSize = minChunkSize
+ return nil
+ }
+}
+
+// Blob is an eStargz blob.
+type Blob struct {
+ io.ReadCloser
+ diffID digest.Digester
+ tocDigest digest.Digest
+}
+
+// DiffID returns the digest of uncompressed blob.
+// It is only valid to call DiffID after Close.
+func (b *Blob) DiffID() digest.Digest {
+ return b.diffID.Digest()
+}
+
+// TOCDigest returns the digest of uncompressed TOC JSON.
+func (b *Blob) TOCDigest() digest.Digest {
+ return b.tocDigest
+}
+
+// Build builds an eStargz blob which is an extended version of stargz, from a blob (gzip, zstd
+// or plain tar) passed through the argument. If there are some prioritized files are listed in
+// the option, these files are grouped as "prioritized" and can be used for runtime optimization
+// (e.g. prefetch). This function builds a blob in parallel, with dividing that blob into several
+// (at least the number of runtime.GOMAXPROCS(0)) sub-blobs.
+func Build(tarBlob *io.SectionReader, opt ...Option) (_ *Blob, rErr error) {
+ var opts options
+ opts.compressionLevel = gzip.BestCompression // BestCompression by default
+ for _, o := range opt {
+ if err := o(&opts); err != nil {
+ return nil, err
+ }
+ }
+ if opts.compression == nil {
+ opts.compression = newGzipCompressionWithLevel(opts.compressionLevel)
+ }
+ layerFiles := newTempFiles()
+ ctx := opts.ctx
+ if ctx == nil {
+ ctx = context.Background()
+ }
+ done := make(chan struct{})
+ defer close(done)
+ go func() {
+ select {
+ case <-done:
+ // nop
+ case <-ctx.Done():
+ layerFiles.CleanupAll()
+ }
+ }()
+ defer func() {
+ if rErr != nil {
+ if err := layerFiles.CleanupAll(); err != nil {
+ rErr = fmt.Errorf("failed to cleanup tmp files: %v: %w", err, rErr)
+ }
+ }
+ if cErr := ctx.Err(); cErr != nil {
+ rErr = fmt.Errorf("error from context %q: %w", cErr, rErr)
+ }
+ }()
+ tarBlob, err := decompressBlob(tarBlob, layerFiles)
+ if err != nil {
+ return nil, err
+ }
+ entries, err := sortEntries(tarBlob, opts.prioritizedFiles, opts.missedPrioritizedFiles)
+ if err != nil {
+ return nil, err
+ }
+ var tarParts [][]*entry
+ if opts.minChunkSize > 0 {
+ // Each entry needs to know the size of the current gzip stream so they
+ // cannot be processed in parallel.
+ tarParts = [][]*entry{entries}
+ } else {
+ tarParts = divideEntries(entries, runtime.GOMAXPROCS(0))
+ }
+ writers := make([]*Writer, len(tarParts))
+ payloads := make([]*os.File, len(tarParts))
+ var mu sync.Mutex
+ var eg errgroup.Group
+ for i, parts := range tarParts {
+ i, parts := i, parts
+ // builds verifiable stargz sub-blobs
+ eg.Go(func() error {
+ esgzFile, err := layerFiles.TempFile("", "esgzdata")
+ if err != nil {
+ return err
+ }
+ sw := NewWriterWithCompressor(esgzFile, opts.compression)
+ sw.ChunkSize = opts.chunkSize
+ sw.MinChunkSize = opts.minChunkSize
+ if sw.needsOpenGzEntries == nil {
+ sw.needsOpenGzEntries = make(map[string]struct{})
+ }
+ for _, f := range []string{PrefetchLandmark, NoPrefetchLandmark} {
+ sw.needsOpenGzEntries[f] = struct{}{}
+ }
+ if err := sw.AppendTar(readerFromEntries(parts...)); err != nil {
+ return err
+ }
+ mu.Lock()
+ writers[i] = sw
+ payloads[i] = esgzFile
+ mu.Unlock()
+ return nil
+ })
+ }
+ if err := eg.Wait(); err != nil {
+ rErr = err
+ return nil, err
+ }
+ tocAndFooter, tocDgst, err := closeWithCombine(writers...)
+ if err != nil {
+ rErr = err
+ return nil, err
+ }
+ var rs []io.Reader
+ for _, p := range payloads {
+ fs, err := fileSectionReader(p)
+ if err != nil {
+ return nil, err
+ }
+ rs = append(rs, fs)
+ }
+ diffID := digest.Canonical.Digester()
+ pr, pw := io.Pipe()
+ go func() {
+ r, err := opts.compression.Reader(io.TeeReader(io.MultiReader(append(rs, tocAndFooter)...), pw))
+ if err != nil {
+ pw.CloseWithError(err)
+ return
+ }
+ defer r.Close()
+ if _, err := io.Copy(diffID.Hash(), r); err != nil {
+ pw.CloseWithError(err)
+ return
+ }
+ pw.Close()
+ }()
+ return &Blob{
+ ReadCloser: readCloser{
+ Reader: pr,
+ closeFunc: layerFiles.CleanupAll,
+ },
+ tocDigest: tocDgst,
+ diffID: diffID,
+ }, nil
+}
+
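The `Build` function added above converts a plain layer tarball into an eStargz blob, optionally reordering prioritized files for prefetching. A hedged usage sketch, not part of the vendored file; the tar file name and prioritized path are illustrative only:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/containerd/stargz-snapshotter/estargz"
)

func main() {
	f, err := os.Open("layer.tar")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	fi, err := f.Stat()
	if err != nil {
		panic(err)
	}

	blob, err := estargz.Build(
		io.NewSectionReader(f, 0, fi.Size()),
		estargz.WithPrioritizedFiles([]string{"/etc/hostname"}),
	)
	if err != nil {
		panic(err)
	}
	defer blob.Close()

	// Consume the blob (e.g. push it to a registry); DiffID is only valid
	// once the reader has been fully read and closed.
	if _, err := io.Copy(io.Discard, blob); err != nil {
		panic(err)
	}
	fmt.Println("TOC digest:", blob.TOCDigest())
}
```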
+// closeWithCombine takes unclosed Writers and close them. This also returns the
+// toc that combined all Writers into.
+// Writers doesn't write TOC and footer to the underlying writers so they can be
+// combined into a single eStargz and tocAndFooter returned by this function can
+// be appended at the tail of that combined blob.
+func closeWithCombine(ws ...*Writer) (tocAndFooterR io.Reader, tocDgst digest.Digest, err error) {
+ if len(ws) == 0 {
+ return nil, "", fmt.Errorf("at least one writer must be passed")
+ }
+ for _, w := range ws {
+ if w.closed {
+ return nil, "", fmt.Errorf("writer must be unclosed")
+ }
+ defer func(w *Writer) { w.closed = true }(w)
+ if err := w.closeGz(); err != nil {
+ return nil, "", err
+ }
+ if err := w.bw.Flush(); err != nil {
+ return nil, "", err
+ }
+ }
+ var (
+ mtoc = new(JTOC)
+ currentOffset int64
+ )
+ mtoc.Version = ws[0].toc.Version
+ for _, w := range ws {
+ for _, e := range w.toc.Entries {
+ // Recalculate Offset of non-empty files/chunks
+ if (e.Type == "reg" && e.Size > 0) || e.Type == "chunk" {
+ e.Offset += currentOffset
+ }
+ mtoc.Entries = append(mtoc.Entries, e)
+ }
+ if w.toc.Version > mtoc.Version {
+ mtoc.Version = w.toc.Version
+ }
+ currentOffset += w.cw.n
+ }
+
+ return tocAndFooter(ws[0].compressor, mtoc, currentOffset)
+}
+
+func tocAndFooter(compressor Compressor, toc *JTOC, offset int64) (io.Reader, digest.Digest, error) {
+ buf := new(bytes.Buffer)
+ tocDigest, err := compressor.WriteTOCAndFooter(buf, offset, toc, nil)
+ if err != nil {
+ return nil, "", err
+ }
+ return buf, tocDigest, nil
+}
+
+// divideEntries divides passed entries to the parts at least the number specified by the
+// argument.
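+//
+// For example (sizes purely illustrative): with minPartsNum=4 and entries
+// totalling roughly 100 MiB of payload, each part ends up holding about
+// 25 MiB, so that the sub-blobs can be compressed in parallel:
+//
+//	parts := divideEntries(entries, runtime.GOMAXPROCS(0))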
+func divideEntries(entries []*entry, minPartsNum int) (set [][]*entry) {
+ var estimatedSize int64
+ for _, e := range entries {
+ estimatedSize += e.header.Size
+ }
+ unitSize := estimatedSize / int64(minPartsNum)
+ var (
+ nextEnd = unitSize
+ offset int64
+ )
+ set = append(set, []*entry{})
+ for _, e := range entries {
+ set[len(set)-1] = append(set[len(set)-1], e)
+ offset += e.header.Size
+ if offset > nextEnd {
+ set = append(set, []*entry{})
+ nextEnd += unitSize
+ }
+ }
+ return
+}
+
+var errNotFound = errors.New("not found")
+
+// sortEntries reads the specified tar blob and returns a list of tar entries.
+// If prioritized files are specified, the list starts with these files,
+// keeping the order specified by the argument.
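+//
+// A sketch of how this is typically called from Build (paths illustrative):
+//
+//	var missed []string
+//	entries, err := sortEntries(tarBlob, []string{"bin/sh", "etc/passwd"}, &missed)
+//	// missed now holds any prioritized paths that were absent from the tar.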
+func sortEntries(in io.ReaderAt, prioritized []string, missedPrioritized *[]string) ([]*entry, error) {
+
+ // Import tar file.
+ intar, err := importTar(in)
+ if err != nil {
+ return nil, fmt.Errorf("failed to sort: %w", err)
+ }
+
+ // Sort the tar file respecting to the prioritized files list.
+ sorted := &tarFile{}
+ for _, l := range prioritized {
+ if err := moveRec(l, intar, sorted); err != nil {
+ if errors.Is(err, errNotFound) && missedPrioritized != nil {
+ *missedPrioritized = append(*missedPrioritized, l)
+ continue // allow not found
+ }
+ return nil, fmt.Errorf("failed to sort tar entries: %w", err)
+ }
+ }
+ if len(prioritized) == 0 {
+ sorted.add(&entry{
+ header: &tar.Header{
+ Name: NoPrefetchLandmark,
+ Typeflag: tar.TypeReg,
+ Size: int64(len([]byte{landmarkContents})),
+ },
+ payload: bytes.NewReader([]byte{landmarkContents}),
+ })
+ } else {
+ sorted.add(&entry{
+ header: &tar.Header{
+ Name: PrefetchLandmark,
+ Typeflag: tar.TypeReg,
+ Size: int64(len([]byte{landmarkContents})),
+ },
+ payload: bytes.NewReader([]byte{landmarkContents}),
+ })
+ }
+
+ // Dump all entries and concatenate them.
+ return append(sorted.dump(), intar.dump()...), nil
+}
+
+// readerFromEntries returns a reader of tar archive that contains entries passed
+// through the arguments.
+func readerFromEntries(entries ...*entry) io.Reader {
+ pr, pw := io.Pipe()
+ go func() {
+ tw := tar.NewWriter(pw)
+ defer tw.Close()
+ for _, entry := range entries {
+ if err := tw.WriteHeader(entry.header); err != nil {
+ pw.CloseWithError(fmt.Errorf("Failed to write tar header: %v", err))
+ return
+ }
+ if _, err := io.Copy(tw, entry.payload); err != nil {
+ pw.CloseWithError(fmt.Errorf("Failed to write tar payload: %v", err))
+ return
+ }
+ }
+ pw.Close()
+ }()
+ return pr
+}
+
+func importTar(in io.ReaderAt) (*tarFile, error) {
+ tf := &tarFile{}
+ pw, err := newCountReadSeeker(in)
+ if err != nil {
+ return nil, fmt.Errorf("failed to make position watcher: %w", err)
+ }
+ tr := tar.NewReader(pw)
+
+ // Walk through all nodes.
+ for {
+ // Fetch and parse next header.
+ h, err := tr.Next()
+ if err != nil {
+ if err == io.EOF {
+ break
+ } else {
+ return nil, fmt.Errorf("failed to parse tar file, %w", err)
+ }
+ }
+ switch cleanEntryName(h.Name) {
+ case PrefetchLandmark, NoPrefetchLandmark:
+ // Ignore existing landmark
+ continue
+ }
+
+ // Add entry. If it already exists, replace it.
+ if _, ok := tf.get(h.Name); ok {
+ tf.remove(h.Name)
+ }
+ tf.add(&entry{
+ header: h,
+ payload: io.NewSectionReader(in, pw.currentPos(), h.Size),
+ })
+ }
+
+ return tf, nil
+}
+
+func moveRec(name string, in *tarFile, out *tarFile) error {
+ name = cleanEntryName(name)
+ if name == "" { // root directory. stop recursion.
+ if e, ok := in.get(name); ok {
+ // entry of the root directory exists. we should move it as well.
+ // this case will occur if tar entries are prefixed with "./", "/", etc.
+ out.add(e)
+ in.remove(name)
+ }
+ return nil
+ }
+
+ _, okIn := in.get(name)
+ _, okOut := out.get(name)
+ if !okIn && !okOut {
+ return fmt.Errorf("file: %q: %w", name, errNotFound)
+ }
+
+ parent, _ := path.Split(strings.TrimSuffix(name, "/"))
+ if err := moveRec(parent, in, out); err != nil {
+ return err
+ }
+ if e, ok := in.get(name); ok && e.header.Typeflag == tar.TypeLink {
+ if err := moveRec(e.header.Linkname, in, out); err != nil {
+ return err
+ }
+ }
+ if e, ok := in.get(name); ok {
+ out.add(e)
+ in.remove(name)
+ }
+ return nil
+}
+
+type entry struct {
+ header *tar.Header
+ payload io.ReadSeeker
+}
+
+type tarFile struct {
+ index map[string]*entry
+ stream []*entry
+}
+
+func (f *tarFile) add(e *entry) {
+ if f.index == nil {
+ f.index = make(map[string]*entry)
+ }
+ f.index[cleanEntryName(e.header.Name)] = e
+ f.stream = append(f.stream, e)
+}
+
+func (f *tarFile) remove(name string) {
+ name = cleanEntryName(name)
+ if f.index != nil {
+ delete(f.index, name)
+ }
+ var filtered []*entry
+ for _, e := range f.stream {
+ if cleanEntryName(e.header.Name) == name {
+ continue
+ }
+ filtered = append(filtered, e)
+ }
+ f.stream = filtered
+}
+
+func (f *tarFile) get(name string) (e *entry, ok bool) {
+ if f.index == nil {
+ return nil, false
+ }
+ e, ok = f.index[cleanEntryName(name)]
+ return
+}
+
+func (f *tarFile) dump() []*entry {
+ return f.stream
+}
+
+type readCloser struct {
+ io.Reader
+ closeFunc func() error
+}
+
+func (rc readCloser) Close() error {
+ return rc.closeFunc()
+}
+
+func fileSectionReader(file *os.File) (*io.SectionReader, error) {
+ info, err := file.Stat()
+ if err != nil {
+ return nil, err
+ }
+ return io.NewSectionReader(file, 0, info.Size()), nil
+}
+
+func newTempFiles() *tempFiles {
+ return &tempFiles{}
+}
+
+type tempFiles struct {
+ files []*os.File
+ filesMu sync.Mutex
+ cleanupOnce sync.Once
+}
+
+func (tf *tempFiles) TempFile(dir, pattern string) (*os.File, error) {
+ f, err := os.CreateTemp(dir, pattern)
+ if err != nil {
+ return nil, err
+ }
+ tf.filesMu.Lock()
+ tf.files = append(tf.files, f)
+ tf.filesMu.Unlock()
+ return f, nil
+}
+
+func (tf *tempFiles) CleanupAll() (err error) {
+ tf.cleanupOnce.Do(func() {
+ err = tf.cleanupAll()
+ })
+ return
+}
+
+func (tf *tempFiles) cleanupAll() error {
+ tf.filesMu.Lock()
+ defer tf.filesMu.Unlock()
+ var allErr []error
+ for _, f := range tf.files {
+ if err := f.Close(); err != nil {
+ allErr = append(allErr, err)
+ }
+ if err := os.Remove(f.Name()); err != nil {
+ allErr = append(allErr, err)
+ }
+ }
+ tf.files = nil
+ return errorutil.Aggregate(allErr)
+}
+
+func newCountReadSeeker(r io.ReaderAt) (*countReadSeeker, error) {
+ pos := int64(0)
+ return &countReadSeeker{r: r, cPos: &pos}, nil
+}
+
+type countReadSeeker struct {
+ r io.ReaderAt
+ cPos *int64
+
+ mu sync.Mutex
+}
+
+func (cr *countReadSeeker) Read(p []byte) (int, error) {
+ cr.mu.Lock()
+ defer cr.mu.Unlock()
+
+ n, err := cr.r.ReadAt(p, *cr.cPos)
+ if err == nil {
+ *cr.cPos += int64(n)
+ }
+ return n, err
+}
+
+func (cr *countReadSeeker) Seek(offset int64, whence int) (int64, error) {
+ cr.mu.Lock()
+ defer cr.mu.Unlock()
+
+ switch whence {
+ default:
+ return 0, fmt.Errorf("Unknown whence: %v", whence)
+ case io.SeekStart:
+ case io.SeekCurrent:
+ offset += *cr.cPos
+ case io.SeekEnd:
+ return 0, fmt.Errorf("Unsupported whence: %v", whence)
+ }
+
+ if offset < 0 {
+ return 0, fmt.Errorf("invalid offset")
+ }
+ *cr.cPos = offset
+ return offset, nil
+}
+
+func (cr *countReadSeeker) currentPos() int64 {
+ cr.mu.Lock()
+ defer cr.mu.Unlock()
+
+ return *cr.cPos
+}
+
+func decompressBlob(org *io.SectionReader, tmp *tempFiles) (*io.SectionReader, error) {
+ if org.Size() < 4 {
+ return org, nil
+ }
+ src := make([]byte, 4)
+ if _, err := org.Read(src); err != nil && err != io.EOF {
+ return nil, err
+ }
+ var dR io.Reader
+ if bytes.Equal([]byte{0x1F, 0x8B, 0x08}, src[:3]) {
+ // gzip
+ dgR, err := gzip.NewReader(io.NewSectionReader(org, 0, org.Size()))
+ if err != nil {
+ return nil, err
+ }
+ defer dgR.Close()
+ dR = io.Reader(dgR)
+ } else if bytes.Equal([]byte{0x28, 0xb5, 0x2f, 0xfd}, src[:4]) {
+ // zstd
+ dzR, err := zstd.NewReader(io.NewSectionReader(org, 0, org.Size()))
+ if err != nil {
+ return nil, err
+ }
+ defer dzR.Close()
+ dR = io.Reader(dzR)
+ } else {
+ // uncompressed
+ return io.NewSectionReader(org, 0, org.Size()), nil
+ }
+ b, err := tmp.TempFile("", "uncompresseddata")
+ if err != nil {
+ return nil, err
+ }
+ if _, err := io.Copy(b, dR); err != nil {
+ return nil, err
+ }
+ return fileSectionReader(b)
+}
diff --git a/vendor/github.com/containerd/stargz-snapshotter/estargz/errorutil/errors.go b/vendor/github.com/containerd/stargz-snapshotter/estargz/errorutil/errors.go
new file mode 100644
index 000000000000..6de78b02dcdd
--- /dev/null
+++ b/vendor/github.com/containerd/stargz-snapshotter/estargz/errorutil/errors.go
@@ -0,0 +1,40 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package errorutil
+
+import (
+ "errors"
+ "fmt"
+ "strings"
+)
+
+// Aggregate combines a list of errors into a single new error.
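+//
+// For example (error values illustrative):
+//
+//	err := Aggregate([]error{errA, errB})
+//	// err.Error() reports "2 error(s) occurred:" followed by one line per error.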
+func Aggregate(errs []error) error {
+ switch len(errs) {
+ case 0:
+ return nil
+ case 1:
+ return errs[0]
+ default:
+ points := make([]string, len(errs)+1)
+ points[0] = fmt.Sprintf("%d error(s) occurred:", len(errs))
+ for i, err := range errs {
+ points[i+1] = fmt.Sprintf("* %s", err)
+ }
+ return errors.New(strings.Join(points, "\n\t"))
+ }
+}
diff --git a/vendor/github.com/containerd/stargz-snapshotter/estargz/estargz.go b/vendor/github.com/containerd/stargz-snapshotter/estargz/estargz.go
new file mode 100644
index 000000000000..f4d55465584e
--- /dev/null
+++ b/vendor/github.com/containerd/stargz-snapshotter/estargz/estargz.go
@@ -0,0 +1,1223 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+/*
+ Copyright 2019 The Go Authors. All rights reserved.
+ Use of this source code is governed by a BSD-style
+ license that can be found in the LICENSE file.
+*/
+
+package estargz
+
+import (
+ "bufio"
+ "bytes"
+ "compress/gzip"
+ "crypto/sha256"
+ "errors"
+ "fmt"
+ "hash"
+ "io"
+ "os"
+ "path"
+ "sort"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/containerd/stargz-snapshotter/estargz/errorutil"
+ digest "github.com/opencontainers/go-digest"
+ "github.com/vbatts/tar-split/archive/tar"
+)
+
+// A Reader permits random access reads from a stargz file.
+type Reader struct {
+ sr *io.SectionReader
+ toc *JTOC
+ tocDigest digest.Digest
+
+ // m stores all non-chunk entries, keyed by name.
+ m map[string]*TOCEntry
+
+ // chunks stores all TOCEntry values for regular files that
+ // are split up. For a file with a single chunk, it's only
+ // stored in m.
+ chunks map[string][]*TOCEntry
+
+ decompressor Decompressor
+}
+
+type openOpts struct {
+ tocOffset int64
+ decompressors []Decompressor
+ telemetry *Telemetry
+}
+
+// OpenOption is an option used during opening the layer
+type OpenOption func(o *openOpts) error
+
+// WithTOCOffset option specifies the offset of TOC
+func WithTOCOffset(tocOffset int64) OpenOption {
+ return func(o *openOpts) error {
+ o.tocOffset = tocOffset
+ return nil
+ }
+}
+
+// WithDecompressors option specifies decompressors to use.
+// Default is gzip-based decompressor.
+func WithDecompressors(decompressors ...Decompressor) OpenOption {
+ return func(o *openOpts) error {
+ o.decompressors = decompressors
+ return nil
+ }
+}
+
+// WithTelemetry option specifies the telemetry hooks
+func WithTelemetry(telemetry *Telemetry) OpenOption {
+ return func(o *openOpts) error {
+ o.telemetry = telemetry
+ return nil
+ }
+}
+
+// MeasureLatencyHook is a func which takes start time and records the diff
+type MeasureLatencyHook func(time.Time)
+
+// Telemetry is a struct which defines telemetry hooks. By implementing these hooks you should be able to record
+// the latency metrics of the respective steps of the estargz open operation. To be used with the WithTelemetry option of Open.
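+//
+// A hedged wiring sketch (the metrics sink recordMs is hypothetical; any
+// function accepting the measured start time works):
+//
+//	tel := &Telemetry{
+//		GetFooterLatency: func(start time.Time) { recordMs("footer", time.Since(start)) },
+//		GetTocLatency:    func(start time.Time) { recordMs("toc", time.Since(start)) },
+//	}
+//	r, err := Open(sr, WithTelemetry(tel))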
+type Telemetry struct {
+ GetFooterLatency MeasureLatencyHook // measure time to get stargz footer (in milliseconds)
+ GetTocLatency MeasureLatencyHook // measure time to GET TOC JSON (in milliseconds)
+ DeserializeTocLatency MeasureLatencyHook // measure time to deserialize TOC JSON (in milliseconds)
+}
+
+// Open opens a stargz file for reading.
+// The behavior is configurable using options.
+//
+// Note that each entry name is normalized as the path that is relative to root.
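+//
+// A minimal usage sketch (the file name and looked-up path are illustrative,
+// and error handling is elided):
+//
+//	f, _ := os.Open("layer.tar.gz") // an eStargz-converted layer blob
+//	fi, _ := f.Stat()
+//	r, _ := Open(io.NewSectionReader(f, 0, fi.Size()))
+//	e, ok := r.Lookup("etc/passwd")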
+func Open(sr *io.SectionReader, opt ...OpenOption) (*Reader, error) {
+ var opts openOpts
+ for _, o := range opt {
+ if err := o(&opts); err != nil {
+ return nil, err
+ }
+ }
+
+ gzipCompressors := []Decompressor{new(GzipDecompressor), new(LegacyGzipDecompressor)}
+ decompressors := append(gzipCompressors, opts.decompressors...)
+
+ // Determine the size to fetch. Try to fetch as many bytes as possible.
+ fetchSize := maxFooterSize(sr.Size(), decompressors...)
+ if maybeTocOffset := opts.tocOffset; maybeTocOffset > fetchSize {
+ if maybeTocOffset > sr.Size() {
+ return nil, fmt.Errorf("blob size %d is smaller than the toc offset", sr.Size())
+ }
+ fetchSize = sr.Size() - maybeTocOffset
+ }
+
+ start := time.Now() // before getting layer footer
+ footer := make([]byte, fetchSize)
+ if _, err := sr.ReadAt(footer, sr.Size()-fetchSize); err != nil {
+ return nil, fmt.Errorf("error reading footer: %v", err)
+ }
+ if opts.telemetry != nil && opts.telemetry.GetFooterLatency != nil {
+ opts.telemetry.GetFooterLatency(start)
+ }
+
+ var allErr []error
+ var found bool
+ var r *Reader
+ for _, d := range decompressors {
+ fSize := d.FooterSize()
+ fOffset := positive(int64(len(footer)) - fSize)
+ maybeTocBytes := footer[:fOffset]
+ _, tocOffset, tocSize, err := d.ParseFooter(footer[fOffset:])
+ if err != nil {
+ allErr = append(allErr, err)
+ continue
+ }
+ if tocOffset >= 0 && tocSize <= 0 {
+ tocSize = sr.Size() - tocOffset - fSize
+ }
+ if tocOffset >= 0 && tocSize < int64(len(maybeTocBytes)) {
+ maybeTocBytes = maybeTocBytes[:tocSize]
+ }
+ r, err = parseTOC(d, sr, tocOffset, tocSize, maybeTocBytes, opts)
+ if err == nil {
+ found = true
+ break
+ }
+ allErr = append(allErr, err)
+ }
+ if !found {
+ return nil, errorutil.Aggregate(allErr)
+ }
+ if err := r.initFields(); err != nil {
+ return nil, fmt.Errorf("failed to initialize fields of entries: %v", err)
+ }
+ return r, nil
+}
+
+// OpenFooter extracts and parses footer from the given blob.
+// This function only supports gzip-based eStargz.
+func OpenFooter(sr *io.SectionReader) (tocOffset int64, footerSize int64, rErr error) {
+ if sr.Size() < FooterSize && sr.Size() < legacyFooterSize {
+ return 0, 0, fmt.Errorf("blob size %d is smaller than the footer size", sr.Size())
+ }
+ var footer [FooterSize]byte
+ if _, err := sr.ReadAt(footer[:], sr.Size()-FooterSize); err != nil {
+ return 0, 0, fmt.Errorf("error reading footer: %v", err)
+ }
+ var allErr []error
+ for _, d := range []Decompressor{new(GzipDecompressor), new(LegacyGzipDecompressor)} {
+ fSize := d.FooterSize()
+ fOffset := positive(int64(len(footer)) - fSize)
+ _, tocOffset, _, err := d.ParseFooter(footer[fOffset:])
+ if err == nil {
+ return tocOffset, fSize, err
+ }
+ allErr = append(allErr, err)
+ }
+ return 0, 0, errorutil.Aggregate(allErr)
+}
+
+// initFields populates the Reader from r.toc after decoding it from
+// JSON.
+//
+// Unexported fields are populated, and TOCEntry fields that were
+// implicit in the JSON are filled in.
+func (r *Reader) initFields() error {
+ r.m = make(map[string]*TOCEntry, len(r.toc.Entries))
+ r.chunks = make(map[string][]*TOCEntry)
+ var lastPath string
+ uname := map[int]string{}
+ gname := map[int]string{}
+ var lastRegEnt *TOCEntry
+ var chunkTopIndex int
+ for i, ent := range r.toc.Entries {
+ ent.Name = cleanEntryName(ent.Name)
+ switch ent.Type {
+ case "reg", "chunk":
+ if ent.Offset != r.toc.Entries[chunkTopIndex].Offset {
+ chunkTopIndex = i
+ }
+ ent.chunkTopIndex = chunkTopIndex
+ }
+ if ent.Type == "reg" {
+ lastRegEnt = ent
+ }
+ if ent.Type == "chunk" {
+ ent.Name = lastPath
+ r.chunks[ent.Name] = append(r.chunks[ent.Name], ent)
+ if ent.ChunkSize == 0 && lastRegEnt != nil {
+ ent.ChunkSize = lastRegEnt.Size - ent.ChunkOffset
+ }
+ } else {
+ lastPath = ent.Name
+
+ if ent.Uname != "" {
+ uname[ent.UID] = ent.Uname
+ } else {
+ ent.Uname = uname[ent.UID]
+ }
+ if ent.Gname != "" {
+ gname[ent.GID] = ent.Gname
+ } else {
+ ent.Gname = uname[ent.GID]
+ }
+
+ ent.modTime, _ = time.Parse(time.RFC3339, ent.ModTime3339)
+
+ if ent.Type == "dir" {
+ ent.NumLink++ // Parent dir links to this directory
+ }
+ r.m[ent.Name] = ent
+ }
+ if ent.Type == "reg" && ent.ChunkSize > 0 && ent.ChunkSize < ent.Size {
+ r.chunks[ent.Name] = make([]*TOCEntry, 0, ent.Size/ent.ChunkSize+1)
+ r.chunks[ent.Name] = append(r.chunks[ent.Name], ent)
+ }
+ if ent.ChunkSize == 0 && ent.Size != 0 {
+ ent.ChunkSize = ent.Size
+ }
+ }
+
+ // Populate children, add implicit directories:
+ for _, ent := range r.toc.Entries {
+ if ent.Type == "chunk" {
+ continue
+ }
+ // add "foo/":
+ // add "foo" child to "" (creating "" if necessary)
+ //
+ // add "foo/bar/":
+ // add "bar" child to "foo" (creating "foo" if necessary)
+ //
+ // add "foo/bar.txt":
+ // add "bar.txt" child to "foo" (creating "foo" if necessary)
+ //
+ // add "a/b/c/d/e/f.txt":
+ // create "a/b/c/d/e" node
+ // add "f.txt" child to "e"
+
+ name := ent.Name
+ pdirName := parentDir(name)
+ if name == pdirName {
+ // This entry and its parent are the same.
+ // Ignore it to avoid an infinite reference loop.
+ // This can occur, for example, when the tar contains the root
+ // directory itself (e.g. "./", "/").
+ continue
+ }
+ pdir := r.getOrCreateDir(pdirName)
+ ent.NumLink++ // at least one name(ent.Name) references this entry.
+ if ent.Type == "hardlink" {
+ org, err := r.getSource(ent)
+ if err != nil {
+ return err
+ }
+ org.NumLink++ // original entry is referenced by this ent.Name.
+ ent = org
+ }
+ pdir.addChild(path.Base(name), ent)
+ }
+
+ lastOffset := r.sr.Size()
+ for i := len(r.toc.Entries) - 1; i >= 0; i-- {
+ e := r.toc.Entries[i]
+ if e.isDataType() {
+ e.nextOffset = lastOffset
+ }
+ if e.Offset != 0 && e.InnerOffset == 0 {
+ lastOffset = e.Offset
+ }
+ }
+
+ return nil
+}
+
+func (r *Reader) getSource(ent *TOCEntry) (_ *TOCEntry, err error) {
+ if ent.Type == "hardlink" {
+ org, ok := r.m[cleanEntryName(ent.LinkName)]
+ if !ok {
+ return nil, fmt.Errorf("%q is a hardlink but the linkname %q isn't found", ent.Name, ent.LinkName)
+ }
+ ent, err = r.getSource(org)
+ if err != nil {
+ return nil, err
+ }
+ }
+ return ent, nil
+}
+
+func parentDir(p string) string {
+ dir, _ := path.Split(p)
+ return strings.TrimSuffix(dir, "/")
+}
+
+func (r *Reader) getOrCreateDir(d string) *TOCEntry {
+ e, ok := r.m[d]
+ if !ok {
+ e = &TOCEntry{
+ Name: d,
+ Type: "dir",
+ Mode: 0755,
+ NumLink: 2, // The directory itself(.) and the parent link to this directory.
+ }
+ r.m[d] = e
+ if d != "" {
+ pdir := r.getOrCreateDir(parentDir(d))
+ pdir.addChild(path.Base(d), e)
+ }
+ }
+ return e
+}
+
+func (r *Reader) TOCDigest() digest.Digest {
+ return r.tocDigest
+}
+
+// VerifyTOC checks that the TOC JSON in the passed blob matches the
+// passed digests and that the TOC JSON contains digests for all chunks
+// contained in the blob. If the verification succeeds, this function
+// returns a TOCEntryVerifier which holds all chunk digests in the stargz blob.
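+//
+// A hedged sketch (the expected TOC digest normally comes from an image
+// manifest annotation, which is outside the scope of this package):
+//
+//	v, err := r.VerifyTOC(expectedTOCDigest)
+//	if err != nil {
+//		// the blob cannot be trusted
+//	}
+//	verifier, _ := v.Verifier(chunkEntry) // digest.Verifier for one chunk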
+func (r *Reader) VerifyTOC(tocDigest digest.Digest) (TOCEntryVerifier, error) {
+ // Verify the digest of TOC JSON
+ if r.tocDigest != tocDigest {
+ return nil, fmt.Errorf("invalid TOC JSON %q; want %q", r.tocDigest, tocDigest)
+ }
+ return r.Verifiers()
+}
+
+// Verifiers returns a TOCEntryVerifier for this blob. Use VerifyTOC instead in most cases
+// because this doesn't verify the TOC.
+func (r *Reader) Verifiers() (TOCEntryVerifier, error) {
+ chunkDigestMap := make(map[int64]digest.Digest) // map from chunk offset to the chunk digest
+ regDigestMap := make(map[int64]digest.Digest) // map from chunk offset to the reg file digest
+ var chunkDigestMapIncomplete bool
+ var regDigestMapIncomplete bool
+ var containsChunk bool
+ for _, e := range r.toc.Entries {
+ if e.Type != "reg" && e.Type != "chunk" {
+ continue
+ }
+
+ // offset must be unique in stargz blob
+ _, dOK := chunkDigestMap[e.Offset]
+ _, rOK := regDigestMap[e.Offset]
+ if dOK || rOK {
+ return nil, fmt.Errorf("offset %d found twice", e.Offset)
+ }
+
+ if e.Type == "reg" {
+ if e.Size == 0 {
+ continue // ignores empty file
+ }
+
+ // record the digest of regular file payload
+ if e.Digest != "" {
+ d, err := digest.Parse(e.Digest)
+ if err != nil {
+ return nil, fmt.Errorf("failed to parse regular file digest %q: %w", e.Digest, err)
+ }
+ regDigestMap[e.Offset] = d
+ } else {
+ regDigestMapIncomplete = true
+ }
+ } else {
+ containsChunk = true // this layer contains "chunk" entries.
+ }
+
+ // "reg" also can contain ChunkDigest (e.g. when "reg" is the first entry of
+ // chunked file)
+ if e.ChunkDigest != "" {
+ d, err := digest.Parse(e.ChunkDigest)
+ if err != nil {
+ return nil, fmt.Errorf("failed to parse chunk digest %q: %w", e.ChunkDigest, err)
+ }
+ chunkDigestMap[e.Offset] = d
+ } else {
+ chunkDigestMapIncomplete = true
+ }
+ }
+
+ if chunkDigestMapIncomplete {
+ // Though some chunk digests are not found, if this layer doesn't contain
+ // "chunk"s and all digests of "reg" files are recorded, we can use them instead.
+ if !containsChunk && !regDigestMapIncomplete {
+ return &verifier{digestMap: regDigestMap}, nil
+ }
+ return nil, fmt.Errorf("some ChunkDigest not found in TOC JSON")
+ }
+
+ return &verifier{digestMap: chunkDigestMap}, nil
+}
+
+// verifier is an implementation of TOCEntryVerifier which holds verifiers keyed by
+// offset of the chunk.
+type verifier struct {
+ digestMap map[int64]digest.Digest
+ digestMapMu sync.Mutex
+}
+
+// Verifier returns a content verifier specified by TOCEntry.
+func (v *verifier) Verifier(ce *TOCEntry) (digest.Verifier, error) {
+ v.digestMapMu.Lock()
+ defer v.digestMapMu.Unlock()
+ d, ok := v.digestMap[ce.Offset]
+ if !ok {
+ return nil, fmt.Errorf("verifier for offset=%d,size=%d hasn't been registered",
+ ce.Offset, ce.ChunkSize)
+ }
+ return d.Verifier(), nil
+}
+
+// ChunkEntryForOffset returns the TOCEntry containing the byte of the
+// named file at the given offset within the file.
+// Name must be absolute path or one that is relative to root.
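+//
+// For example, to locate the chunk containing byte 1,000,000 of a file
+// (the path is illustrative):
+//
+//	ce, ok := r.ChunkEntryForOffset("var/lib/db.bin", 1000000)
+//	// if ok, ce.ChunkOffset <= 1000000 < ce.ChunkOffset+ce.ChunkSize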
+func (r *Reader) ChunkEntryForOffset(name string, offset int64) (e *TOCEntry, ok bool) {
+ name = cleanEntryName(name)
+ e, ok = r.Lookup(name)
+ if !ok || !e.isDataType() {
+ return nil, false
+ }
+ ents := r.chunks[name]
+ if len(ents) < 2 {
+ if offset >= e.ChunkSize {
+ return nil, false
+ }
+ return e, true
+ }
+ i := sort.Search(len(ents), func(i int) bool {
+ e := ents[i]
+ return e.ChunkOffset >= offset || (offset > e.ChunkOffset && offset < e.ChunkOffset+e.ChunkSize)
+ })
+ if i == len(ents) {
+ return nil, false
+ }
+ return ents[i], true
+}
+
+// Lookup returns the Table of Contents entry for the given path.
+//
+// To get the root directory, use the empty string.
+// Path must be absolute path or one that is relative to root.
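+//
+// For example (the path is illustrative; "/etc/passwd", "./etc/passwd" and
+// "etc/passwd" all resolve to the same entry after normalization):
+//
+//	e, ok := r.Lookup("etc/passwd")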
+func (r *Reader) Lookup(path string) (e *TOCEntry, ok bool) {
+ path = cleanEntryName(path)
+ if r == nil {
+ return
+ }
+ e, ok = r.m[path]
+ if ok && e.Type == "hardlink" {
+ var err error
+ e, err = r.getSource(e)
+ if err != nil {
+ return nil, false
+ }
+ }
+ return
+}
+
+// OpenFile returns the reader of the specified file payload.
+//
+// Name must be absolute path or one that is relative to root.
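+//
+// A short sketch (the file name is illustrative):
+//
+//	sr, err := r.OpenFile("usr/bin/sh")
+//	if err == nil {
+//		buf := make([]byte, 16)
+//		sr.ReadAt(buf, 0) // random access into the file payload
+//	}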
+func (r *Reader) OpenFile(name string) (*io.SectionReader, error) {
+ fr, err := r.newFileReader(name)
+ if err != nil {
+ return nil, err
+ }
+ return io.NewSectionReader(fr, 0, fr.size), nil
+}
+
+func (r *Reader) newFileReader(name string) (*fileReader, error) {
+ name = cleanEntryName(name)
+ ent, ok := r.Lookup(name)
+ if !ok {
+ // TODO: come up with some error plan. This is lazy:
+ return nil, &os.PathError{
+ Path: name,
+ Op: "OpenFile",
+ Err: os.ErrNotExist,
+ }
+ }
+ if ent.Type != "reg" {
+ return nil, &os.PathError{
+ Path: name,
+ Op: "OpenFile",
+ Err: errors.New("not a regular file"),
+ }
+ }
+ return &fileReader{
+ r: r,
+ size: ent.Size,
+ ents: r.getChunks(ent),
+ }, nil
+}
+
+func (r *Reader) OpenFileWithPreReader(name string, preRead func(*TOCEntry, io.Reader) error) (*io.SectionReader, error) {
+ fr, err := r.newFileReader(name)
+ if err != nil {
+ return nil, err
+ }
+ fr.preRead = preRead
+ return io.NewSectionReader(fr, 0, fr.size), nil
+}
+
+func (r *Reader) getChunks(ent *TOCEntry) []*TOCEntry {
+ if ents, ok := r.chunks[ent.Name]; ok {
+ return ents
+ }
+ return []*TOCEntry{ent}
+}
+
+type fileReader struct {
+ r *Reader
+ size int64
+ ents []*TOCEntry // 1 or more reg/chunk entries
+ preRead func(*TOCEntry, io.Reader) error
+}
+
+func (fr *fileReader) ReadAt(p []byte, off int64) (n int, err error) {
+ if off >= fr.size {
+ return 0, io.EOF
+ }
+ if off < 0 {
+ return 0, errors.New("invalid offset")
+ }
+ var i int
+ if len(fr.ents) > 1 {
+ i = sort.Search(len(fr.ents), func(i int) bool {
+ return fr.ents[i].ChunkOffset >= off
+ })
+ if i == len(fr.ents) {
+ i = len(fr.ents) - 1
+ }
+ }
+ ent := fr.ents[i]
+ if ent.ChunkOffset > off {
+ if i == 0 {
+ return 0, errors.New("internal error; first chunk offset is non-zero")
+ }
+ ent = fr.ents[i-1]
+ }
+
+ // If ent is a chunk of a large file, adjust the ReadAt
+ // offset by the chunk's offset.
+ off -= ent.ChunkOffset
+
+ finalEnt := fr.ents[len(fr.ents)-1]
+ compressedOff := ent.Offset
+ // compressedBytesRemain is the number of compressed bytes in this
+ // file remaining, over 1+ chunks.
+ compressedBytesRemain := finalEnt.NextOffset() - compressedOff
+
+ sr := io.NewSectionReader(fr.r.sr, compressedOff, compressedBytesRemain)
+
+ const maxRead = 2 << 20
+ var bufSize = maxRead
+ if compressedBytesRemain < maxRead {
+ bufSize = int(compressedBytesRemain)
+ }
+
+ br := bufio.NewReaderSize(sr, bufSize)
+ if _, err := br.Peek(bufSize); err != nil {
+ return 0, fmt.Errorf("fileReader.ReadAt.peek: %v", err)
+ }
+
+ dr, err := fr.r.decompressor.Reader(br)
+ if err != nil {
+ return 0, fmt.Errorf("fileReader.ReadAt.decompressor.Reader: %v", err)
+ }
+ defer dr.Close()
+
+ if fr.preRead == nil {
+ if n, err := io.CopyN(io.Discard, dr, ent.InnerOffset+off); n != ent.InnerOffset+off || err != nil {
+ return 0, fmt.Errorf("discard of %d bytes != %v, %v", ent.InnerOffset+off, n, err)
+ }
+ return io.ReadFull(dr, p)
+ }
+
+ var retN int
+ var retErr error
+ var found bool
+ var nr int64
+ for _, e := range fr.r.toc.Entries[ent.chunkTopIndex:] {
+ if !e.isDataType() {
+ continue
+ }
+ if e.Offset != fr.r.toc.Entries[ent.chunkTopIndex].Offset {
+ break
+ }
+ if in, err := io.CopyN(io.Discard, dr, e.InnerOffset-nr); err != nil || in != e.InnerOffset-nr {
+ return 0, fmt.Errorf("discard of remaining %d bytes != %v, %v", e.InnerOffset-nr, in, err)
+ }
+ nr = e.InnerOffset
+ if e == ent {
+ found = true
+ if n, err := io.CopyN(io.Discard, dr, off); n != off || err != nil {
+ return 0, fmt.Errorf("discard of offset %d bytes != %v, %v", off, n, err)
+ }
+ retN, retErr = io.ReadFull(dr, p)
+ nr += off + int64(retN)
+ continue
+ }
+ cr := &countReader{r: io.LimitReader(dr, e.ChunkSize)}
+ if err := fr.preRead(e, cr); err != nil {
+ return 0, fmt.Errorf("failed to pre read: %w", err)
+ }
+ nr += cr.n
+ }
+ if !found {
+ return 0, fmt.Errorf("fileReader.ReadAt: target entry not found")
+ }
+ return retN, retErr
+}
+
+// A Writer writes stargz files.
+//
+// Use NewWriter to create a new Writer.
+type Writer struct {
+ bw *bufio.Writer
+ cw *countWriter
+ toc *JTOC
+ diffHash hash.Hash // SHA-256 of uncompressed tar
+
+ closed bool
+ gz io.WriteCloser
+ lastUsername map[int]string
+ lastGroupname map[int]string
+ compressor Compressor
+
+ uncompressedCounter *countWriteFlusher
+
+ // ChunkSize optionally controls the maximum number of bytes
+ // of data of a regular file that can be written in one gzip
+ // stream before a new gzip stream is started.
+ // Zero means to use a default, currently 4 MiB.
+ ChunkSize int
+
+ // MinChunkSize optionally controls the minimum number of bytes
+ // of data that must be written in one gzip stream before a new gzip
+ // stream is started.
+ // NOTE: This adds a TOC property that stargz snapshotter < v0.13.0 doesn't understand.
+ MinChunkSize int
+
+ needsOpenGzEntries map[string]struct{}
+}
+
+// currentCompressionWriter writes to the current w.gz field, which can
+// change throughout writing a tar entry.
+//
+// Additionally, it updates w's SHA-256 of the uncompressed bytes
+// of the tar file.
+type currentCompressionWriter struct{ w *Writer }
+
+func (ccw currentCompressionWriter) Write(p []byte) (int, error) {
+ ccw.w.diffHash.Write(p)
+ if ccw.w.gz == nil {
+ if err := ccw.w.condOpenGz(); err != nil {
+ return 0, err
+ }
+ }
+ return ccw.w.gz.Write(p)
+}
+
+func (w *Writer) chunkSize() int {
+ if w.ChunkSize <= 0 {
+ return 4 << 20
+ }
+ return w.ChunkSize
+}
+
+// Unpack decompresses the given estargz blob and returns a ReadCloser of the tar blob.
+// TOC JSON and footer are removed.
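+//
+// A hedged sketch of turning an eStargz blob back into a plain tar stream
+// (the gzip decompressor is only one possible choice):
+//
+//	rc, err := Unpack(sr, new(GzipDecompressor))
+//	if err == nil {
+//		defer rc.Close()
+//		tr := tar.NewReader(rc) // iterate entries as usual
+//	}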
+func Unpack(sr *io.SectionReader, c Decompressor) (io.ReadCloser, error) {
+ footerSize := c.FooterSize()
+ if sr.Size() < footerSize {
+ return nil, fmt.Errorf("blob is too small; %d < %d", sr.Size(), footerSize)
+ }
+ footerOffset := sr.Size() - footerSize
+ footer := make([]byte, footerSize)
+ if _, err := sr.ReadAt(footer, footerOffset); err != nil {
+ return nil, err
+ }
+ blobPayloadSize, _, _, err := c.ParseFooter(footer)
+ if err != nil {
+ return nil, fmt.Errorf("failed to parse footer: %w", err)
+ }
+ if blobPayloadSize < 0 {
+ blobPayloadSize = sr.Size()
+ }
+ return c.Reader(io.LimitReader(sr, blobPayloadSize))
+}
+
+// NewWriter returns a new stargz writer (gzip-based) writing to w.
+//
+// The writer must be closed to write its trailing table of contents.
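+//
+// A minimal write sketch (the tar source "in" and destination "out" are
+// assumed to exist; error handling is elided):
+//
+//	w := NewWriter(out)
+//	w.ChunkSize = 4 << 20
+//	_ = w.AppendTar(in)
+//	tocDigest, _ := w.Close()
+//	diffID := w.DiffID() // valid only after Close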
+func NewWriter(w io.Writer) *Writer {
+ return NewWriterLevel(w, gzip.BestCompression)
+}
+
+// NewWriterLevel returns a new stargz writer (gzip-based) writing to w.
+// The compression level is configurable.
+//
+// The writer must be closed to write its trailing table of contents.
+func NewWriterLevel(w io.Writer, compressionLevel int) *Writer {
+ return NewWriterWithCompressor(w, NewGzipCompressorWithLevel(compressionLevel))
+}
+
+// NewWriterWithCompressor returns a new stargz writer writing to w.
+// The compression method is configurable.
+//
+// The writer must be closed to write its trailing table of contents.
+func NewWriterWithCompressor(w io.Writer, c Compressor) *Writer {
+ bw := bufio.NewWriter(w)
+ cw := &countWriter{w: bw}
+ return &Writer{
+ bw: bw,
+ cw: cw,
+ toc: &JTOC{Version: 1},
+ diffHash: sha256.New(),
+ compressor: c,
+ uncompressedCounter: &countWriteFlusher{},
+ }
+}
+
+// Close writes the stargz's table of contents and flushes all the
+// buffers, returning any error.
+func (w *Writer) Close() (digest.Digest, error) {
+ if w.closed {
+ return "", nil
+ }
+ defer func() { w.closed = true }()
+
+ if err := w.closeGz(); err != nil {
+ return "", err
+ }
+
+ // Write the TOC index and footer.
+ tocDigest, err := w.compressor.WriteTOCAndFooter(w.cw, w.cw.n, w.toc, w.diffHash)
+ if err != nil {
+ return "", err
+ }
+ if err := w.bw.Flush(); err != nil {
+ return "", err
+ }
+
+ return tocDigest, nil
+}
+
+func (w *Writer) closeGz() error {
+ if w.closed {
+ return errors.New("write on closed Writer")
+ }
+ if w.gz != nil {
+ if err := w.gz.Close(); err != nil {
+ return err
+ }
+ w.gz = nil
+ }
+ return nil
+}
+
+func (w *Writer) flushGz() error {
+ if w.closed {
+ return errors.New("flush on closed Writer")
+ }
+ if w.gz != nil {
+ if f, ok := w.gz.(interface {
+ Flush() error
+ }); ok {
+ return f.Flush()
+ }
+ }
+ return nil
+}
+
+// nameIfChanged returns name, unless it was already the value of (*mp)[id],
+// in which case it returns the empty string.
+func (w *Writer) nameIfChanged(mp *map[int]string, id int, name string) string {
+ if name == "" {
+ return ""
+ }
+ if *mp == nil {
+ *mp = make(map[int]string)
+ }
+ if (*mp)[id] == name {
+ return ""
+ }
+ (*mp)[id] = name
+ return name
+}
+
+func (w *Writer) condOpenGz() (err error) {
+ if w.gz == nil {
+ w.gz, err = w.compressor.Writer(w.cw)
+ if w.gz != nil {
+ w.gz = w.uncompressedCounter.register(w.gz)
+ }
+ }
+ return
+}
+
+// AppendTar reads the tar or tar.gz file from r and appends
+// each of its contents to w.
+//
+// The input r can optionally be gzip compressed but the output will
+// always be compressed by the specified compressor.
+func (w *Writer) AppendTar(r io.Reader) error {
+ return w.appendTar(r, false)
+}
+
+// AppendTarLossLess reads the tar or tar.gz file from r and appends
+// each of its contents to w.
+//
+// The input r can optionally be gzip compressed but the output will
+// always be compressed by the specified compressor.
+//
+// The difference between this function and AppendTar is that this one writes
+// the input tar stream into w without any modification (e.g. to header bytes).
+//
+// Note that if the input tar stream already contains TOC JSON, this returns an
+// error because w cannot overwrite the TOC JSON with the one generated by w without
+// lossy modification. To avoid this error, if the input stream is known to be stargz/estargz,
+// you should decompress it and remove the TOC JSON in advance.
+func (w *Writer) AppendTarLossLess(r io.Reader) error {
+ return w.appendTar(r, true)
+}
+
+func (w *Writer) appendTar(r io.Reader, lossless bool) error {
+ var src io.Reader
+ br := bufio.NewReader(r)
+ if isGzip(br) {
+ zr, _ := gzip.NewReader(br)
+ src = zr
+ } else {
+ src = io.Reader(br)
+ }
+ dst := currentCompressionWriter{w}
+ var tw *tar.Writer
+ if !lossless {
+ tw = tar.NewWriter(dst) // use tar writer only when this isn't lossless mode.
+ }
+ tr := tar.NewReader(src)
+ if lossless {
+ tr.RawAccounting = true
+ }
+ prevOffset := w.cw.n
+ var prevOffsetUncompressed int64
+ for {
+ h, err := tr.Next()
+ if err == io.EOF {
+ if lossless {
+ if remain := tr.RawBytes(); len(remain) > 0 {
+ // Collect the remaining null bytes.
+ // https://github.com/vbatts/tar-split/blob/80a436fd6164c557b131f7c59ed69bd81af69761/concept/main.go#L49-L53
+ if _, err := dst.Write(remain); err != nil {
+ return err
+ }
+ }
+ }
+ break
+ }
+ if err != nil {
+ return fmt.Errorf("error reading from source tar: tar.Reader.Next: %v", err)
+ }
+ if cleanEntryName(h.Name) == TOCTarName {
+ // It is possible for a layer to be "stargzified" twice during the
+ // distribution lifecycle. So we reserve "TOCTarName" here to avoid
+ // duplicated entries in the resulting layer.
+ if lossless {
+ // We cannot handle this in lossless way.
+ return fmt.Errorf("existing TOC JSON is not allowed; decompress layer before append")
+ }
+ continue
+ }
+
+ xattrs := make(map[string][]byte)
+ const xattrPAXRecordsPrefix = "SCHILY.xattr."
+ if h.PAXRecords != nil {
+ for k, v := range h.PAXRecords {
+ if strings.HasPrefix(k, xattrPAXRecordsPrefix) {
+ xattrs[k[len(xattrPAXRecordsPrefix):]] = []byte(v)
+ }
+ }
+ }
+ ent := &TOCEntry{
+ Name: h.Name,
+ Mode: h.Mode,
+ UID: h.Uid,
+ GID: h.Gid,
+ Uname: w.nameIfChanged(&w.lastUsername, h.Uid, h.Uname),
+ Gname: w.nameIfChanged(&w.lastGroupname, h.Gid, h.Gname),
+ ModTime3339: formatModtime(h.ModTime),
+ Xattrs: xattrs,
+ }
+ if err := w.condOpenGz(); err != nil {
+ return err
+ }
+ if tw != nil {
+ if err := tw.WriteHeader(h); err != nil {
+ return err
+ }
+ } else {
+ if _, err := dst.Write(tr.RawBytes()); err != nil {
+ return err
+ }
+ }
+ switch h.Typeflag {
+ case tar.TypeLink:
+ ent.Type = "hardlink"
+ ent.LinkName = h.Linkname
+ case tar.TypeSymlink:
+ ent.Type = "symlink"
+ ent.LinkName = h.Linkname
+ case tar.TypeDir:
+ ent.Type = "dir"
+ case tar.TypeReg:
+ ent.Type = "reg"
+ ent.Size = h.Size
+ case tar.TypeChar:
+ ent.Type = "char"
+ ent.DevMajor = int(h.Devmajor)
+ ent.DevMinor = int(h.Devminor)
+ case tar.TypeBlock:
+ ent.Type = "block"
+ ent.DevMajor = int(h.Devmajor)
+ ent.DevMinor = int(h.Devminor)
+ case tar.TypeFifo:
+ ent.Type = "fifo"
+ default:
+ return fmt.Errorf("unsupported input tar entry %q", h.Typeflag)
+ }
+
+ // We need to keep a reference to the TOC entry for regular files, so that we
+ // can fill the digest later.
+ var regFileEntry *TOCEntry
+ var payloadDigest digest.Digester
+ if h.Typeflag == tar.TypeReg {
+ regFileEntry = ent
+ payloadDigest = digest.Canonical.Digester()
+ }
+
+ if h.Typeflag == tar.TypeReg && ent.Size > 0 {
+ var written int64
+ totalSize := ent.Size // save it before we destroy ent
+ tee := io.TeeReader(tr, payloadDigest.Hash())
+ for written < totalSize {
+ chunkSize := int64(w.chunkSize())
+ remain := totalSize - written
+ if remain < chunkSize {
+ chunkSize = remain
+ } else {
+ ent.ChunkSize = chunkSize
+ }
+
+ // We flush the underlying compression writer here to correctly calculate "w.cw.n".
+ if err := w.flushGz(); err != nil {
+ return err
+ }
+ if w.needsOpenGz(ent) || w.cw.n-prevOffset >= int64(w.MinChunkSize) {
+ if err := w.closeGz(); err != nil {
+ return err
+ }
+ ent.Offset = w.cw.n
+ prevOffset = ent.Offset
+ prevOffsetUncompressed = w.uncompressedCounter.n
+ } else {
+ ent.Offset = prevOffset
+ ent.InnerOffset = w.uncompressedCounter.n - prevOffsetUncompressed
+ }
+
+ ent.ChunkOffset = written
+ chunkDigest := digest.Canonical.Digester()
+
+ if err := w.condOpenGz(); err != nil {
+ return err
+ }
+
+ teeChunk := io.TeeReader(tee, chunkDigest.Hash())
+ var out io.Writer
+ if tw != nil {
+ out = tw
+ } else {
+ out = dst
+ }
+ if _, err := io.CopyN(out, teeChunk, chunkSize); err != nil {
+ return fmt.Errorf("error copying %q: %v", h.Name, err)
+ }
+ ent.ChunkDigest = chunkDigest.Digest().String()
+ w.toc.Entries = append(w.toc.Entries, ent)
+ written += chunkSize
+ ent = &TOCEntry{
+ Name: h.Name,
+ Type: "chunk",
+ }
+ }
+ } else {
+ w.toc.Entries = append(w.toc.Entries, ent)
+ }
+ if payloadDigest != nil {
+ regFileEntry.Digest = payloadDigest.Digest().String()
+ }
+ if tw != nil {
+ if err := tw.Flush(); err != nil {
+ return err
+ }
+ }
+ }
+ remainDest := io.Discard
+ if lossless {
+ remainDest = dst // Preserve the remaining bytes in lossless mode
+ }
+ _, err := io.Copy(remainDest, src)
+ return err
+}
+
+func (w *Writer) needsOpenGz(ent *TOCEntry) bool {
+ if ent.Type != "reg" {
+ return false
+ }
+ if w.needsOpenGzEntries == nil {
+ return false
+ }
+ _, ok := w.needsOpenGzEntries[ent.Name]
+ return ok
+}
+
+// DiffID returns the SHA-256 of the uncompressed tar bytes.
+// It is only valid to call DiffID after Close.
+func (w *Writer) DiffID() string {
+ return fmt.Sprintf("sha256:%x", w.diffHash.Sum(nil))
+}
+
+func maxFooterSize(blobSize int64, decompressors ...Decompressor) (res int64) {
+ for _, d := range decompressors {
+ if s := d.FooterSize(); res < s && s <= blobSize {
+ res = s
+ }
+ }
+ return
+}
+
+func parseTOC(d Decompressor, sr *io.SectionReader, tocOff, tocSize int64, tocBytes []byte, opts openOpts) (*Reader, error) {
+ if tocOff < 0 {
+ // This means that TOC isn't contained in the blob.
+ // We pass nil reader to ParseTOC and expect that ParseTOC acquire TOC from
+ // the external location.
+ start := time.Now()
+ toc, tocDgst, err := d.ParseTOC(nil)
+ if err != nil {
+ return nil, err
+ }
+ if opts.telemetry != nil && opts.telemetry.GetTocLatency != nil {
+ opts.telemetry.GetTocLatency(start)
+ }
+ if opts.telemetry != nil && opts.telemetry.DeserializeTocLatency != nil {
+ opts.telemetry.DeserializeTocLatency(start)
+ }
+ return &Reader{
+ sr: sr,
+ toc: toc,
+ tocDigest: tocDgst,
+ decompressor: d,
+ }, nil
+ }
+ if len(tocBytes) > 0 {
+ start := time.Now()
+ toc, tocDgst, err := d.ParseTOC(bytes.NewReader(tocBytes))
+ if err == nil {
+ if opts.telemetry != nil && opts.telemetry.DeserializeTocLatency != nil {
+ opts.telemetry.DeserializeTocLatency(start)
+ }
+ return &Reader{
+ sr: sr,
+ toc: toc,
+ tocDigest: tocDgst,
+ decompressor: d,
+ }, nil
+ }
+ }
+
+ start := time.Now()
+ tocBytes = make([]byte, tocSize)
+ if _, err := sr.ReadAt(tocBytes, tocOff); err != nil {
+ return nil, fmt.Errorf("error reading %d byte TOC targz: %v", len(tocBytes), err)
+ }
+ if opts.telemetry != nil && opts.telemetry.GetTocLatency != nil {
+ opts.telemetry.GetTocLatency(start)
+ }
+ start = time.Now()
+ toc, tocDgst, err := d.ParseTOC(bytes.NewReader(tocBytes))
+ if err != nil {
+ return nil, err
+ }
+ if opts.telemetry != nil && opts.telemetry.DeserializeTocLatency != nil {
+ opts.telemetry.DeserializeTocLatency(start)
+ }
+ return &Reader{
+ sr: sr,
+ toc: toc,
+ tocDigest: tocDgst,
+ decompressor: d,
+ }, nil
+}
+
+func formatModtime(t time.Time) string {
+ if t.IsZero() || t.Unix() == 0 {
+ return ""
+ }
+ return t.UTC().Round(time.Second).Format(time.RFC3339)
+}
+
+func cleanEntryName(name string) string {
+ // Use path.Clean to consistently deal with path separators across platforms.
+ return strings.TrimPrefix(path.Clean("/"+name), "/")
+}
+
+// countWriter counts how many bytes have been written to its wrapped
+// io.Writer.
+type countWriter struct {
+ w io.Writer
+ n int64
+}
+
+func (cw *countWriter) Write(p []byte) (n int, err error) {
+ n, err = cw.w.Write(p)
+ cw.n += int64(n)
+ return
+}
+
+type countWriteFlusher struct {
+ io.WriteCloser
+ n int64
+}
+
+func (wc *countWriteFlusher) register(w io.WriteCloser) io.WriteCloser {
+ wc.WriteCloser = w
+ return wc
+}
+
+func (wc *countWriteFlusher) Write(p []byte) (n int, err error) {
+ n, err = wc.WriteCloser.Write(p)
+ wc.n += int64(n)
+ return
+}
+
+func (wc *countWriteFlusher) Flush() error {
+ if f, ok := wc.WriteCloser.(interface {
+ Flush() error
+ }); ok {
+ return f.Flush()
+ }
+ return nil
+}
+
+func (wc *countWriteFlusher) Close() error {
+ err := wc.WriteCloser.Close()
+ wc.WriteCloser = nil
+ return err
+}
+
+// isGzip reports whether br is positioned right before an upcoming gzip stream.
+// It does not consume any bytes from br.
+func isGzip(br *bufio.Reader) bool {
+ const (
+ gzipID1 = 0x1f
+ gzipID2 = 0x8b
+ gzipDeflate = 8
+ )
+ peek, _ := br.Peek(3)
+ return len(peek) >= 3 && peek[0] == gzipID1 && peek[1] == gzipID2 && peek[2] == gzipDeflate
+}
+
+func positive(n int64) int64 {
+ if n < 0 {
+ return 0
+ }
+ return n
+}
+
+type countReader struct {
+ r io.Reader
+ n int64
+}
+
+func (cr *countReader) Read(p []byte) (n int, err error) {
+ n, err = cr.r.Read(p)
+ cr.n += int64(n)
+ return
+}
diff --git a/vendor/github.com/containerd/stargz-snapshotter/estargz/gzip.go b/vendor/github.com/containerd/stargz-snapshotter/estargz/gzip.go
new file mode 100644
index 000000000000..f24afe32f450
--- /dev/null
+++ b/vendor/github.com/containerd/stargz-snapshotter/estargz/gzip.go
@@ -0,0 +1,237 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+/*
+ Copyright 2019 The Go Authors. All rights reserved.
+ Use of this source code is governed by a BSD-style
+ license that can be found in the LICENSE file.
+*/
+
+package estargz
+
+import (
+ "archive/tar"
+ "bytes"
+ "compress/gzip"
+ "encoding/binary"
+ "encoding/json"
+ "fmt"
+ "hash"
+ "io"
+ "strconv"
+
+ digest "github.com/opencontainers/go-digest"
+)
+
+type gzipCompression struct {
+ *GzipCompressor
+ *GzipDecompressor
+}
+
+func newGzipCompressionWithLevel(level int) Compression {
+ return &gzipCompression{
+ &GzipCompressor{level},
+ &GzipDecompressor{},
+ }
+}
+
+func NewGzipCompressor() *GzipCompressor {
+ return &GzipCompressor{gzip.BestCompression}
+}
+
+func NewGzipCompressorWithLevel(level int) *GzipCompressor {
+ return &GzipCompressor{level}
+}
+
+type GzipCompressor struct {
+ compressionLevel int
+}
+
+func (gc *GzipCompressor) Writer(w io.Writer) (WriteFlushCloser, error) {
+ return gzip.NewWriterLevel(w, gc.compressionLevel)
+}
+
+func (gc *GzipCompressor) WriteTOCAndFooter(w io.Writer, off int64, toc *JTOC, diffHash hash.Hash) (digest.Digest, error) {
+ tocJSON, err := json.MarshalIndent(toc, "", "\t")
+ if err != nil {
+ return "", err
+ }
+ gz, _ := gzip.NewWriterLevel(w, gc.compressionLevel)
+ gw := io.Writer(gz)
+ if diffHash != nil {
+ gw = io.MultiWriter(gz, diffHash)
+ }
+ tw := tar.NewWriter(gw)
+ if err := tw.WriteHeader(&tar.Header{
+ Typeflag: tar.TypeReg,
+ Name: TOCTarName,
+ Size: int64(len(tocJSON)),
+ }); err != nil {
+ return "", err
+ }
+ if _, err := tw.Write(tocJSON); err != nil {
+ return "", err
+ }
+
+ if err := tw.Close(); err != nil {
+ return "", err
+ }
+ if err := gz.Close(); err != nil {
+ return "", err
+ }
+ if _, err := w.Write(gzipFooterBytes(off)); err != nil {
+ return "", err
+ }
+ return digest.FromBytes(tocJSON), nil
+}
+
+// gzipFooterBytes returns the 51-byte footer.
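+// The footer is an empty gzip stream whose extra header encodes the TOC offset
+// (layout summarized from the code below; the extra field is 26 bytes, the
+// whole footer 51 bytes):
+//
+//	2 bytes:  subfield ID "SG"
+//	2 bytes:  subfield length (little-endian), always 22
+//	16 bytes: TOC offset, zero-padded hexadecimal
+//	6 bytes:  the magic string "STARGZ"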
+func gzipFooterBytes(tocOff int64) []byte {
+ buf := bytes.NewBuffer(make([]byte, 0, FooterSize))
+ gz, _ := gzip.NewWriterLevel(buf, gzip.NoCompression) // MUST be NoCompression to keep 51 bytes
+
+ // Extra header indicating the offset of TOCJSON
+ // https://tools.ietf.org/html/rfc1952#section-2.3.1.1
+ header := make([]byte, 4)
+ header[0], header[1] = 'S', 'G'
+ subfield := fmt.Sprintf("%016xSTARGZ", tocOff)
+ binary.LittleEndian.PutUint16(header[2:4], uint16(len(subfield))) // little-endian per RFC1952
+ gz.Header.Extra = append(header, []byte(subfield)...)
+ gz.Close()
+ if buf.Len() != FooterSize {
+ panic(fmt.Sprintf("footer buffer = %d, not %d", buf.Len(), FooterSize))
+ }
+ return buf.Bytes()
+}
+
+type GzipDecompressor struct{}
+
+func (gz *GzipDecompressor) Reader(r io.Reader) (io.ReadCloser, error) {
+ return gzip.NewReader(r)
+}
+
+func (gz *GzipDecompressor) ParseTOC(r io.Reader) (toc *JTOC, tocDgst digest.Digest, err error) {
+ return parseTOCEStargz(r)
+}
+
+func (gz *GzipDecompressor) ParseFooter(p []byte) (blobPayloadSize, tocOffset, tocSize int64, err error) {
+ if len(p) != FooterSize {
+ return 0, 0, 0, fmt.Errorf("invalid length %d cannot be parsed", len(p))
+ }
+ zr, err := gzip.NewReader(bytes.NewReader(p))
+ if err != nil {
+ return 0, 0, 0, err
+ }
+ defer zr.Close()
+ extra := zr.Header.Extra
+ si1, si2, subfieldlen, subfield := extra[0], extra[1], extra[2:4], extra[4:]
+ if si1 != 'S' || si2 != 'G' {
+ return 0, 0, 0, fmt.Errorf("invalid subfield IDs: %q, %q; want E, S", si1, si2)
+ }
+ if slen := binary.LittleEndian.Uint16(subfieldlen); slen != uint16(16+len("STARGZ")) {
+ return 0, 0, 0, fmt.Errorf("invalid length of subfield %d; want %d", slen, 16+len("STARGZ"))
+ }
+ if string(subfield[16:]) != "STARGZ" {
+ return 0, 0, 0, fmt.Errorf("STARGZ magic string must be included in the footer subfield")
+ }
+ tocOffset, err = strconv.ParseInt(string(subfield[:16]), 16, 64)
+ if err != nil {
+ return 0, 0, 0, fmt.Errorf("legacy: failed to parse toc offset: %w", err)
+ }
+ return tocOffset, tocOffset, 0, nil
+}
+
+func (gz *GzipDecompressor) FooterSize() int64 {
+ return FooterSize
+}
+
+func (gz *GzipDecompressor) DecompressTOC(r io.Reader) (tocJSON io.ReadCloser, err error) {
+ return decompressTOCEStargz(r)
+}
+
+type LegacyGzipDecompressor struct{}
+
+func (gz *LegacyGzipDecompressor) Reader(r io.Reader) (io.ReadCloser, error) {
+ return gzip.NewReader(r)
+}
+
+func (gz *LegacyGzipDecompressor) ParseTOC(r io.Reader) (toc *JTOC, tocDgst digest.Digest, err error) {
+ return parseTOCEStargz(r)
+}
+
+func (gz *LegacyGzipDecompressor) ParseFooter(p []byte) (blobPayloadSize, tocOffset, tocSize int64, err error) {
+ if len(p) != legacyFooterSize {
+ return 0, 0, 0, fmt.Errorf("legacy: invalid length %d cannot be parsed", len(p))
+ }
+ zr, err := gzip.NewReader(bytes.NewReader(p))
+ if err != nil {
+ return 0, 0, 0, fmt.Errorf("legacy: failed to get footer gzip reader: %w", err)
+ }
+ defer zr.Close()
+ extra := zr.Header.Extra
+ if len(extra) != 16+len("STARGZ") {
+ return 0, 0, 0, fmt.Errorf("legacy: invalid stargz's extra field size")
+ }
+ if string(extra[16:]) != "STARGZ" {
+ return 0, 0, 0, fmt.Errorf("legacy: magic string STARGZ not found")
+ }
+ tocOffset, err = strconv.ParseInt(string(extra[:16]), 16, 64)
+ if err != nil {
+ return 0, 0, 0, fmt.Errorf("legacy: failed to parse toc offset: %w", err)
+ }
+ return tocOffset, tocOffset, 0, nil
+}
+
+func (gz *LegacyGzipDecompressor) FooterSize() int64 {
+ return legacyFooterSize
+}
+
+func (gz *LegacyGzipDecompressor) DecompressTOC(r io.Reader) (tocJSON io.ReadCloser, err error) {
+ return decompressTOCEStargz(r)
+}
+
+func parseTOCEStargz(r io.Reader) (toc *JTOC, tocDgst digest.Digest, err error) {
+ tr, err := decompressTOCEStargz(r)
+ if err != nil {
+ return nil, "", err
+ }
+ dgstr := digest.Canonical.Digester()
+ toc = new(JTOC)
+ if err := json.NewDecoder(io.TeeReader(tr, dgstr.Hash())).Decode(&toc); err != nil {
+ return nil, "", fmt.Errorf("error decoding TOC JSON: %v", err)
+ }
+ if err := tr.Close(); err != nil {
+ return nil, "", err
+ }
+ return toc, dgstr.Digest(), nil
+}
+
+func decompressTOCEStargz(r io.Reader) (tocJSON io.ReadCloser, err error) {
+ zr, err := gzip.NewReader(r)
+ if err != nil {
+ return nil, fmt.Errorf("malformed TOC gzip header: %v", err)
+ }
+ zr.Multistream(false)
+ tr := tar.NewReader(zr)
+ h, err := tr.Next()
+ if err != nil {
+ return nil, fmt.Errorf("failed to find tar header in TOC gzip stream: %v", err)
+ }
+ if h.Name != TOCTarName {
+ return nil, fmt.Errorf("TOC tar entry had name %q; expected %q", h.Name, TOCTarName)
+ }
+ return readCloser{tr, zr.Close}, nil
+}
diff --git a/vendor/github.com/containerd/stargz-snapshotter/estargz/testutil.go b/vendor/github.com/containerd/stargz-snapshotter/estargz/testutil.go
new file mode 100644
index 000000000000..0ca6fd75f2ed
--- /dev/null
+++ b/vendor/github.com/containerd/stargz-snapshotter/estargz/testutil.go
@@ -0,0 +1,2366 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+/*
+ Copyright 2019 The Go Authors. All rights reserved.
+ Use of this source code is governed by a BSD-style
+ license that can be found in the LICENSE file.
+*/
+
+package estargz
+
+import (
+ "archive/tar"
+ "bytes"
+ "compress/gzip"
+ "crypto/sha256"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "io"
+ "math/rand"
+ "os"
+ "path/filepath"
+ "reflect"
+ "sort"
+ "strings"
+ "testing"
+ "time"
+
+ "github.com/containerd/stargz-snapshotter/estargz/errorutil"
+ "github.com/klauspost/compress/zstd"
+ digest "github.com/opencontainers/go-digest"
+)
+
+func init() {
+ rand.Seed(time.Now().UnixNano())
+}
+
+// TestingController is Compression with some helper methods necessary for testing.
+type TestingController interface {
+ Compression
+ TestStreams(t *testing.T, b []byte, streams []int64)
+ DiffIDOf(*testing.T, []byte) string
+ String() string
+}
+
+// CompressionTestSuite tests that this package, together with the given controllers, can build valid eStargz blobs and parse them.
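+//
+// A hedged sketch of running the suite against a custom implementation
+// (myController is hypothetical):
+//
+//	func TestMyCompression(t *testing.T) {
+//		CompressionTestSuite(t, func() TestingController { return &myController{} })
+//	}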
+func CompressionTestSuite(t *testing.T, controllers ...TestingControllerFactory) {
+ t.Run("testBuild", func(t *testing.T) { t.Parallel(); testBuild(t, controllers...) })
+ t.Run("testDigestAndVerify", func(t *testing.T) { t.Parallel(); testDigestAndVerify(t, controllers...) })
+ t.Run("testWriteAndOpen", func(t *testing.T) { t.Parallel(); testWriteAndOpen(t, controllers...) })
+}
+
+type TestingControllerFactory func() TestingController
+
+const (
+ uncompressedType int = iota
+ gzipType
+ zstdType
+)
+
+var srcCompressions = []int{
+ uncompressedType,
+ gzipType,
+ zstdType,
+}
+
+var allowedPrefix = [4]string{"", "./", "/", "../"}
+
+// testBuild tests that the stargz blob built by this package has the same
+// contents as a normal stargz blob.
+func testBuild(t *testing.T, controllers ...TestingControllerFactory) {
+ tests := []struct {
+ name string
+ chunkSize int
+ minChunkSize []int
+ in []tarEntry
+ }{
+ {
+ name: "regfiles and directories",
+ chunkSize: 4,
+ in: tarOf(
+ file("foo", "test1"),
+ dir("foo2/"),
+ file("foo2/bar", "test2", xAttr(map[string]string{"test": "sample"})),
+ ),
+ },
+ {
+ name: "empty files",
+ chunkSize: 4,
+ in: tarOf(
+ file("foo", "tttttt"),
+ file("foo_empty", ""),
+ file("foo2", "tttttt"),
+ file("foo_empty2", ""),
+ file("foo3", "tttttt"),
+ file("foo_empty3", ""),
+ file("foo4", "tttttt"),
+ file("foo_empty4", ""),
+ file("foo5", "tttttt"),
+ file("foo_empty5", ""),
+ file("foo6", "tttttt"),
+ ),
+ },
+ {
+ name: "various files",
+ chunkSize: 4,
+ minChunkSize: []int{0, 64000},
+ in: tarOf(
+ file("baz.txt", "bazbazbazbazbazbazbaz"),
+ file("foo1.txt", "a"),
+ file("bar/foo2.txt", "b"),
+ file("foo3.txt", "c"),
+ symlink("barlink", "test/bar.txt"),
+ dir("test/"),
+ dir("dev/"),
+ blockdev("dev/testblock", 3, 4),
+ fifo("dev/testfifo"),
+ chardev("dev/testchar1", 5, 6),
+ file("test/bar.txt", "testbartestbar", xAttr(map[string]string{"test2": "sample2"})),
+ dir("test2/"),
+ link("test2/bazlink", "baz.txt"),
+ chardev("dev/testchar2", 1, 2),
+ ),
+ },
+ {
+ name: "no contents",
+ chunkSize: 4,
+ in: tarOf(
+ file("baz.txt", ""),
+ symlink("barlink", "test/bar.txt"),
+ dir("test/"),
+ dir("dev/"),
+ blockdev("dev/testblock", 3, 4),
+ fifo("dev/testfifo"),
+ chardev("dev/testchar1", 5, 6),
+ file("test/bar.txt", "", xAttr(map[string]string{"test2": "sample2"})),
+ dir("test2/"),
+ link("test2/bazlink", "baz.txt"),
+ chardev("dev/testchar2", 1, 2),
+ ),
+ },
+ }
+ for _, tt := range tests {
+ if len(tt.minChunkSize) == 0 {
+ tt.minChunkSize = []int{0}
+ }
+ for _, srcCompression := range srcCompressions {
+ srcCompression := srcCompression
+ for _, newCL := range controllers {
+ newCL := newCL
+ for _, srcTarFormat := range []tar.Format{tar.FormatUSTAR, tar.FormatPAX, tar.FormatGNU} {
+ srcTarFormat := srcTarFormat
+ for _, prefix := range allowedPrefix {
+ prefix := prefix
+ for _, minChunkSize := range tt.minChunkSize {
+ minChunkSize := minChunkSize
+ t.Run(tt.name+"-"+fmt.Sprintf("compression=%v,prefix=%q,src=%d,format=%s,minChunkSize=%d", newCL(), prefix, srcCompression, srcTarFormat, minChunkSize), func(t *testing.T) {
+ tarBlob := buildTar(t, tt.in, prefix, srcTarFormat)
+ // Test divideEntries()
+ entries, err := sortEntries(tarBlob, nil, nil) // identical order
+ if err != nil {
+ t.Fatalf("failed to parse tar: %v", err)
+ }
+ var merged []*entry
+ for _, part := range divideEntries(entries, 4) {
+ merged = append(merged, part...)
+ }
+ if !reflect.DeepEqual(entries, merged) {
+ for _, e := range entries {
+ t.Logf("Original: %v", e.header)
+ }
+ for _, e := range merged {
+ t.Logf("Merged: %v", e.header)
+ }
+ t.Errorf("divided entries couldn't be merged")
+ return
+ }
+
+ // Prepare sample data
+ cl1 := newCL()
+ wantBuf := new(bytes.Buffer)
+ sw := NewWriterWithCompressor(wantBuf, cl1)
+ sw.MinChunkSize = minChunkSize
+ sw.ChunkSize = tt.chunkSize
+ if err := sw.AppendTar(tarBlob); err != nil {
+ t.Fatalf("failed to append tar to want stargz: %v", err)
+ }
+ if _, err := sw.Close(); err != nil {
+ t.Fatalf("failed to prepare want stargz: %v", err)
+ }
+ wantData := wantBuf.Bytes()
+ want, err := Open(io.NewSectionReader(
+ bytes.NewReader(wantData), 0, int64(len(wantData))),
+ WithDecompressors(cl1),
+ )
+ if err != nil {
+ t.Fatalf("failed to parse the want stargz: %v", err)
+ }
+
+ // Prepare testing data
+ var opts []Option
+ if minChunkSize > 0 {
+ opts = append(opts, WithMinChunkSize(minChunkSize))
+ }
+ cl2 := newCL()
+ rc, err := Build(compressBlob(t, tarBlob, srcCompression),
+ append(opts, WithChunkSize(tt.chunkSize), WithCompression(cl2))...)
+ if err != nil {
+ t.Fatalf("failed to build stargz: %v", err)
+ }
+ defer rc.Close()
+ gotBuf := new(bytes.Buffer)
+ if _, err := io.Copy(gotBuf, rc); err != nil {
+ t.Fatalf("failed to copy built stargz blob: %v", err)
+ }
+ gotData := gotBuf.Bytes()
+ got, err := Open(io.NewSectionReader(
+ bytes.NewReader(gotBuf.Bytes()), 0, int64(len(gotData))),
+ WithDecompressors(cl2),
+ )
+ if err != nil {
+ t.Fatalf("failed to parse the got stargz: %v", err)
+ }
+
+ // Check DiffID is properly calculated
+ rc.Close()
+ diffID := rc.DiffID()
+ wantDiffID := cl2.DiffIDOf(t, gotData)
+ if diffID.String() != wantDiffID {
+ t.Errorf("DiffID = %q; want %q", diffID, wantDiffID)
+ }
+
+ // Compare as stargz
+ if !isSameVersion(t, cl1, wantData, cl2, gotData) {
+ t.Errorf("built stargz hasn't same json")
+ return
+ }
+ if !isSameEntries(t, want, got) {
+ t.Errorf("built stargz isn't same as the original")
+ return
+ }
+
+ // Compare as tar.gz
+ if !isSameTarGz(t, cl1, wantData, cl2, gotData) {
+ t.Errorf("built stargz isn't same tar.gz")
+ return
+ }
+ })
+ }
+ }
+ }
+ }
+ }
+ }
+}
+
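+// isSameTarGz reports whether the tar payloads of the two blobs "a" and "b" are
+// identical. It decompresses each blob with its controller and compares the
+// entries pairwise, ignoring the prefetch landmarks and the TOC entry.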
+func isSameTarGz(t *testing.T, cla TestingController, a []byte, clb TestingController, b []byte) bool {
+ aGz, err := cla.Reader(bytes.NewReader(a))
+ if err != nil {
+ t.Fatalf("failed to read A")
+ }
+ defer aGz.Close()
+ bGz, err := clb.Reader(bytes.NewReader(b))
+ if err != nil {
+ t.Fatalf("failed to read B")
+ }
+ defer bGz.Close()
+
+ // Same as tar's Next() method but ignores landmarks and TOCJSON file
+ next := func(r *tar.Reader) (h *tar.Header, err error) {
+ for {
+ if h, err = r.Next(); err != nil {
+ return
+ }
+ if h.Name != PrefetchLandmark &&
+ h.Name != NoPrefetchLandmark &&
+ h.Name != TOCTarName {
+ return
+ }
+ }
+ }
+
+ aTar := tar.NewReader(aGz)
+ bTar := tar.NewReader(bGz)
+ for {
+ // Fetch and parse next header.
+ aH, aErr := next(aTar)
+ bH, bErr := next(bTar)
+ if aErr != nil || bErr != nil {
+ if aErr == io.EOF && bErr == io.EOF {
+ break
+ }
+ t.Fatalf("Failed to parse tar file: A: %v, B: %v", aErr, bErr)
+ }
+ if !reflect.DeepEqual(aH, bH) {
+ t.Logf("different header (A = %v; B = %v)", aH, bH)
+ return false
+ }
+ aFile, err := io.ReadAll(aTar)
+ if err != nil {
+ t.Fatal("failed to read tar payload of A")
+ }
+ bFile, err := io.ReadAll(bTar)
+ if err != nil {
+ t.Fatal("failed to read tar payload of B")
+ }
+ if !bytes.Equal(aFile, bFile) {
+ t.Logf("different tar payload (A = %q; B = %q)", string(a), string(b))
+ return false
+ }
+ }
+
+ return true
+}
+
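+// isSameVersion reports whether the TOC JSONs of the two eStargz blobs declare
+// the same TOC version.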
+func isSameVersion(t *testing.T, cla TestingController, a []byte, clb TestingController, b []byte) bool {
+ aJTOC, _, err := parseStargz(io.NewSectionReader(bytes.NewReader(a), 0, int64(len(a))), cla)
+ if err != nil {
+ t.Fatalf("failed to parse A: %v", err)
+ }
+ bJTOC, _, err := parseStargz(io.NewSectionReader(bytes.NewReader(b), 0, int64(len(b))), clb)
+ if err != nil {
+ t.Fatalf("failed to parse B: %v", err)
+ }
+ t.Logf("A: TOCJSON: %v", dumpTOCJSON(t, aJTOC))
+ t.Logf("B: TOCJSON: %v", dumpTOCJSON(t, bJTOC))
+ return aJTOC.Version == bJTOC.Version
+}
+
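+// isSameEntries reports whether the two stargz readers expose the same file tree
+// and contents, by checking containment of the root entries in both directions.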
+func isSameEntries(t *testing.T, a, b *Reader) bool {
+ aroot, ok := a.Lookup("")
+ if !ok {
+ t.Fatalf("failed to get root of A")
+ }
+ broot, ok := b.Lookup("")
+ if !ok {
+ t.Fatalf("failed to get root of B")
+ }
+ aEntry := stargzEntry{aroot, a}
+ bEntry := stargzEntry{broot, b}
+ return contains(t, aEntry, bEntry) && contains(t, bEntry, aEntry)
+}
+
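+// compressBlob compresses the source tar with the requested source compression
+// (gzip or zstd) and returns it as a SectionReader. For uncompressedType, the
+// source is returned unchanged.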
+func compressBlob(t *testing.T, src *io.SectionReader, srcCompression int) *io.SectionReader {
+ buf := new(bytes.Buffer)
+ var w io.WriteCloser
+ var err error
+ if srcCompression == gzipType {
+ w = gzip.NewWriter(buf)
+ } else if srcCompression == zstdType {
+ w, err = zstd.NewWriter(buf)
+ if err != nil {
+ t.Fatalf("failed to init zstd writer: %v", err)
+ }
+ } else {
+ return src
+ }
+ src.Seek(0, io.SeekStart)
+ if _, err := io.Copy(w, src); err != nil {
+ t.Fatalf("failed to compress source")
+ }
+ if err := w.Close(); err != nil {
+ t.Fatalf("failed to finalize compress source")
+ }
+ data := buf.Bytes()
+ return io.NewSectionReader(bytes.NewReader(data), 0, int64(len(data)))
+}
+
+type stargzEntry struct {
+ e *TOCEntry
+ r *Reader
+}
+
+// contains checks if all child entries in "b" are also contained in "a".
+// This function also checks if the files/chunks contain the same contents among "a" and "b".
+func contains(t *testing.T, a, b stargzEntry) bool {
+ ae, ar := a.e, a.r
+ be, br := b.e, b.r
+ t.Logf("Comparing: %q vs %q", ae.Name, be.Name)
+ if !equalEntry(ae, be) {
+ t.Logf("%q != %q: entry: a: %v, b: %v", ae.Name, be.Name, ae, be)
+ return false
+ }
+ if ae.Type == "dir" {
+ t.Logf("Directory: %q vs %q: %v vs %v", ae.Name, be.Name,
+ allChildrenName(ae), allChildrenName(be))
+ iscontain := true
+ ae.ForeachChild(func(aBaseName string, aChild *TOCEntry) bool {
+ // Walk through all files on this stargz file.
+
+ if aChild.Name == PrefetchLandmark ||
+ aChild.Name == NoPrefetchLandmark {
+ return true // Ignore landmarks
+ }
+
+ // Ignore a TOCEntry of "./" (formatted as "" by the stargz lib) on the root directory
+ // because this points to the root directory itself.
+ if aChild.Name == "" && ae.Name == "" {
+ return true
+ }
+
+ bChild, ok := be.LookupChild(aBaseName)
+ if !ok {
+ t.Logf("%q (base: %q): not found in b: %v",
+ ae.Name, aBaseName, allChildrenName(be))
+ iscontain = false
+ return false
+ }
+
+ childcontain := contains(t, stargzEntry{aChild, a.r}, stargzEntry{bChild, b.r})
+ if !childcontain {
+ t.Logf("%q != %q: non-equal dir", ae.Name, be.Name)
+ iscontain = false
+ return false
+ }
+ return true
+ })
+ return iscontain
+ } else if ae.Type == "reg" {
+ af, err := ar.OpenFile(ae.Name)
+ if err != nil {
+ t.Fatalf("failed to open file %q on A: %v", ae.Name, err)
+ }
+ bf, err := br.OpenFile(be.Name)
+ if err != nil {
+ t.Fatalf("failed to open file %q on B: %v", be.Name, err)
+ }
+
+ var nr int64
+ for nr < ae.Size {
+ abytes, anext, aok := readOffset(t, af, nr, a)
+ bbytes, bnext, bok := readOffset(t, bf, nr, b)
+ if !aok && !bok {
+ break
+ } else if !(aok && bok) || anext != bnext {
+ t.Logf("%q != %q (offset=%d): chunk existence a=%v vs b=%v, anext=%v vs bnext=%v",
+ ae.Name, be.Name, nr, aok, bok, anext, bnext)
+ return false
+ }
+ nr = anext
+ if !bytes.Equal(abytes, bbytes) {
+ t.Logf("%q != %q: different contents %v vs %v",
+ ae.Name, be.Name, string(abytes), string(bbytes))
+ return false
+ }
+ }
+ return true
+ }
+
+ return true
+}
+
+func allChildrenName(e *TOCEntry) (children []string) {
+ e.ForeachChild(func(baseName string, _ *TOCEntry) bool {
+ children = append(children, baseName)
+ return true
+ })
+ return
+}
+
+func equalEntry(a, b *TOCEntry) bool {
+ // Here, we selectively compare the fields that we are interested in.
+ return a.Name == b.Name &&
+ a.Type == b.Type &&
+ a.Size == b.Size &&
+ a.ModTime3339 == b.ModTime3339 &&
+ a.Stat().ModTime().Equal(b.Stat().ModTime()) && // modTime time.Time
+ a.LinkName == b.LinkName &&
+ a.Mode == b.Mode &&
+ a.UID == b.UID &&
+ a.GID == b.GID &&
+ a.Uname == b.Uname &&
+ a.Gname == b.Gname &&
+ (a.Offset >= 0) == (b.Offset >= 0) &&
+ (a.NextOffset() > 0) == (b.NextOffset() > 0) &&
+ a.DevMajor == b.DevMajor &&
+ a.DevMinor == b.DevMinor &&
+ a.NumLink == b.NumLink &&
+ reflect.DeepEqual(a.Xattrs, b.Xattrs) &&
+ // chunk-related information isn't compared in this function.
+ // ChunkOffset int64 `json:"chunkOffset,omitempty"`
+ // ChunkSize int64 `json:"chunkSize,omitempty"`
+ // children map[string]*TOCEntry
+ a.Digest == b.Digest
+}
+
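+// readOffset reads the chunk of entry "e" that covers the given offset and returns
+// its contents, the offset immediately after the chunk, and whether such a chunk exists.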
+func readOffset(t *testing.T, r *io.SectionReader, offset int64, e stargzEntry) ([]byte, int64, bool) {
+ ce, ok := e.r.ChunkEntryForOffset(e.e.Name, offset)
+ if !ok {
+ return nil, 0, false
+ }
+ data := make([]byte, ce.ChunkSize)
+ t.Logf("Offset: %v, NextOffset: %v", ce.Offset, ce.NextOffset())
+ n, err := r.ReadAt(data, ce.ChunkOffset)
+ if err != nil {
+ t.Fatalf("failed to read file payload of %q (offset:%d,size:%d): %v",
+ e.e.Name, ce.ChunkOffset, ce.ChunkSize, err)
+ }
+ if int64(n) != ce.ChunkSize {
+ t.Fatalf("unexpected copied data size %d; want %d",
+ n, ce.ChunkSize)
+ }
+ return data[:n], offset + ce.ChunkSize, true
+}
+
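+// dumpTOCJSON returns the JSON representation of the given TOC as a string, for logging.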
+func dumpTOCJSON(t *testing.T, tocJSON *JTOC) string {
+ jtocData, err := json.Marshal(*tocJSON)
+ if err != nil {
+ t.Fatalf("failed to marshal TOC JSON: %v", err)
+ }
+ buf := new(bytes.Buffer)
+ if _, err := io.Copy(buf, bytes.NewReader(jtocData)); err != nil {
+ t.Fatalf("failed to read toc json blob: %v", err)
+ }
+ return buf.String()
+}
+
+const chunkSize = 3
+
+// type check func(t *testing.T, sgzData []byte, tocDigest digest.Digest, dgstMap map[string]digest.Digest, compressionLevel int)
+type check func(t *testing.T, sgzData []byte, tocDigest digest.Digest, dgstMap map[string]digest.Digest, controller TestingController, newController TestingControllerFactory)
+
+// testDigestAndVerify runs specified checks against sample stargz blobs.
+func testDigestAndVerify(t *testing.T, controllers ...TestingControllerFactory) {
+ tests := []struct {
+ name string
+ tarInit func(t *testing.T, dgstMap map[string]digest.Digest) (blob []tarEntry)
+ checks []check
+ minChunkSize []int
+ }{
+ {
+ name: "no-regfile",
+ tarInit: func(t *testing.T, dgstMap map[string]digest.Digest) (blob []tarEntry) {
+ return tarOf(
+ dir("test/"),
+ )
+ },
+ checks: []check{
+ checkStargzTOC,
+ checkVerifyTOC,
+ checkVerifyInvalidStargzFail(buildTar(t, tarOf(
+ dir("test2/"), // modified
+ ), allowedPrefix[0])),
+ },
+ },
+ {
+ name: "small-files",
+ tarInit: func(t *testing.T, dgstMap map[string]digest.Digest) (blob []tarEntry) {
+ return tarOf(
+ regDigest(t, "baz.txt", "", dgstMap),
+ regDigest(t, "foo.txt", "a", dgstMap),
+ dir("test/"),
+ regDigest(t, "test/bar.txt", "bbb", dgstMap),
+ )
+ },
+ minChunkSize: []int{0, 64000},
+ checks: []check{
+ checkStargzTOC,
+ checkVerifyTOC,
+ checkVerifyInvalidStargzFail(buildTar(t, tarOf(
+ file("baz.txt", ""),
+ file("foo.txt", "M"), // modified
+ dir("test/"),
+ file("test/bar.txt", "bbb"),
+ ), allowedPrefix[0])),
+ // checkVerifyInvalidTOCEntryFail("foo.txt"), // TODO
+ checkVerifyBrokenContentFail("foo.txt"),
+ },
+ },
+ {
+ name: "big-files",
+ tarInit: func(t *testing.T, dgstMap map[string]digest.Digest) (blob []tarEntry) {
+ return tarOf(
+ regDigest(t, "baz.txt", "bazbazbazbazbazbazbaz", dgstMap),
+ regDigest(t, "foo.txt", "a", dgstMap),
+ dir("test/"),
+ regDigest(t, "test/bar.txt", "testbartestbar", dgstMap),
+ )
+ },
+ checks: []check{
+ checkStargzTOC,
+ checkVerifyTOC,
+ checkVerifyInvalidStargzFail(buildTar(t, tarOf(
+ file("baz.txt", "bazbazbazMMMbazbazbaz"), // modified
+ file("foo.txt", "a"),
+ dir("test/"),
+ file("test/bar.txt", "testbartestbar"),
+ ), allowedPrefix[0])),
+ checkVerifyInvalidTOCEntryFail("test/bar.txt"),
+ checkVerifyBrokenContentFail("test/bar.txt"),
+ },
+ },
+ {
+ name: "with-non-regfiles",
+ minChunkSize: []int{0, 64000},
+ tarInit: func(t *testing.T, dgstMap map[string]digest.Digest) (blob []tarEntry) {
+ return tarOf(
+ regDigest(t, "baz.txt", "bazbazbazbazbazbazbaz", dgstMap),
+ regDigest(t, "foo.txt", "a", dgstMap),
+ regDigest(t, "bar/foo2.txt", "b", dgstMap),
+ regDigest(t, "foo3.txt", "c", dgstMap),
+ symlink("barlink", "test/bar.txt"),
+ dir("test/"),
+ regDigest(t, "test/bar.txt", "testbartestbar", dgstMap),
+ dir("test2/"),
+ link("test2/bazlink", "baz.txt"),
+ )
+ },
+ checks: []check{
+ checkStargzTOC,
+ checkVerifyTOC,
+ checkVerifyInvalidStargzFail(buildTar(t, tarOf(
+ file("baz.txt", "bazbazbazbazbazbazbaz"),
+ file("foo.txt", "a"),
+ file("bar/foo2.txt", "b"),
+ file("foo3.txt", "c"),
+ symlink("barlink", "test/bar.txt"),
+ dir("test/"),
+ file("test/bar.txt", "testbartestbar"),
+ dir("test2/"),
+ link("test2/bazlink", "foo.txt"), // modified
+ ), allowedPrefix[0])),
+ checkVerifyInvalidTOCEntryFail("test/bar.txt"),
+ checkVerifyBrokenContentFail("test/bar.txt"),
+ },
+ },
+ }
+
+ for _, tt := range tests {
+ if len(tt.minChunkSize) == 0 {
+ tt.minChunkSize = []int{0}
+ }
+ for _, srcCompression := range srcCompressions {
+ srcCompression := srcCompression
+ for _, newCL := range controllers {
+ newCL := newCL
+ for _, prefix := range allowedPrefix {
+ prefix := prefix
+ for _, srcTarFormat := range []tar.Format{tar.FormatUSTAR, tar.FormatPAX, tar.FormatGNU} {
+ srcTarFormat := srcTarFormat
+ for _, minChunkSize := range tt.minChunkSize {
+ minChunkSize := minChunkSize
+ t.Run(tt.name+"-"+fmt.Sprintf("compression=%v,prefix=%q,format=%s,minChunkSize=%d", newCL(), prefix, srcTarFormat, minChunkSize), func(t *testing.T) {
+ // Get original tar file and chunk digests
+ dgstMap := make(map[string]digest.Digest)
+ tarBlob := buildTar(t, tt.tarInit(t, dgstMap), prefix, srcTarFormat)
+
+ cl := newCL()
+ rc, err := Build(compressBlob(t, tarBlob, srcCompression),
+ WithChunkSize(chunkSize), WithCompression(cl))
+ if err != nil {
+ t.Fatalf("failed to convert stargz: %v", err)
+ }
+ tocDigest := rc.TOCDigest()
+ defer rc.Close()
+ buf := new(bytes.Buffer)
+ if _, err := io.Copy(buf, rc); err != nil {
+ t.Fatalf("failed to copy built stargz blob: %v", err)
+ }
+ newStargz := buf.Bytes()
+ // NoPrefetchLandmark is added during `Build`, which is expected behaviour.
+ dgstMap[chunkID(NoPrefetchLandmark, 0, int64(len([]byte{landmarkContents})))] = digest.FromBytes([]byte{landmarkContents})
+
+ for _, check := range tt.checks {
+ check(t, newStargz, tocDigest, dgstMap, cl, newCL)
+ }
+ })
+ }
+ }
+ }
+ }
+ }
+ }
+}
+
+// checkStargzTOC checks that the TOC JSON of the passed stargz has the expected
+// digest and contains valid chunks. It walks all entries in the stargz and
+// checks that all chunk digests stored in the TOC JSON match the actual contents.
+func checkStargzTOC(t *testing.T, sgzData []byte, tocDigest digest.Digest, dgstMap map[string]digest.Digest, controller TestingController, newController TestingControllerFactory) {
+ sgz, err := Open(
+ io.NewSectionReader(bytes.NewReader(sgzData), 0, int64(len(sgzData))),
+ WithDecompressors(controller),
+ )
+ if err != nil {
+ t.Errorf("failed to parse converted stargz: %v", err)
+ return
+ }
+ digestMapTOC, err := listDigests(io.NewSectionReader(
+ bytes.NewReader(sgzData), 0, int64(len(sgzData))),
+ controller,
+ )
+ if err != nil {
+ t.Fatalf("failed to list digest: %v", err)
+ }
+ found := make(map[string]bool)
+ for id := range dgstMap {
+ found[id] = false
+ }
+ zr, err := controller.Reader(bytes.NewReader(sgzData))
+ if err != nil {
+ t.Fatalf("failed to decompress converted stargz: %v", err)
+ }
+ defer zr.Close()
+ tr := tar.NewReader(zr)
+ for {
+ h, err := tr.Next()
+ if err != nil {
+ if err != io.EOF {
+ t.Errorf("failed to read tar entry: %v", err)
+ return
+ }
+ break
+ }
+ if h.Name == TOCTarName {
+ // Check the digest of the TOC JSON based on the actual contents.
+ // The TOC JSON is guaranteed to exist in this archive because
+ // Open succeeded.
+ dgstr := digest.Canonical.Digester()
+ if _, err := io.Copy(dgstr.Hash(), tr); err != nil {
+ t.Fatalf("failed to calculate digest of TOC JSON: %v",
+ err)
+ }
+ if dgstr.Digest() != tocDigest {
+ t.Errorf("invalid TOC JSON %q; want %q", tocDigest, dgstr.Digest())
+ }
+ continue
+ }
+ if _, ok := sgz.Lookup(h.Name); !ok {
+ t.Errorf("lost stargz entry %q in the converted TOC", h.Name)
+ return
+ }
+ var n int64
+ for n < h.Size {
+ ce, ok := sgz.ChunkEntryForOffset(h.Name, n)
+ if !ok {
+ t.Errorf("lost chunk %q(offset=%d) in the converted TOC",
+ h.Name, n)
+ return
+ }
+
+ // Get the original digest to make sure the file contents are kept unchanged
+ // from the original tar, during the whole conversion steps.
+ id := chunkID(h.Name, n, ce.ChunkSize)
+ want, ok := dgstMap[id]
+ if !ok {
+ t.Errorf("Unexpected chunk %q(offset=%d,size=%d): %v",
+ h.Name, n, ce.ChunkSize, dgstMap)
+ return
+ }
+ found[id] = true
+
+ // Check the file contents
+ dgstr := digest.Canonical.Digester()
+ if _, err := io.CopyN(dgstr.Hash(), tr, ce.ChunkSize); err != nil {
+ t.Fatalf("failed to calculate digest of %q (offset=%d,size=%d)",
+ h.Name, n, ce.ChunkSize)
+ }
+ if want != dgstr.Digest() {
+ t.Errorf("Invalid contents in converted stargz %q: %q; want %q",
+ h.Name, dgstr.Digest(), want)
+ return
+ }
+
+ // Check the digest stored in TOC JSON
+ dgstTOC, ok := digestMapTOC[ce.Offset]
+ if !ok {
+ t.Errorf("digest of %q(offset=%d,size=%d,chunkOffset=%d) isn't registered",
+ h.Name, ce.Offset, ce.ChunkSize, ce.ChunkOffset)
+ }
+ if want != dgstTOC {
+ t.Errorf("Invalid digest in TOCEntry %q: %q; want %q",
+ h.Name, dgstTOC, want)
+ return
+ }
+
+ n += ce.ChunkSize
+ }
+ }
+
+ for id, ok := range found {
+ if !ok {
+ t.Errorf("required chunk %q not found in the converted stargz: %v", id, found)
+ }
+ }
+}
+
+// checkVerifyTOC checks that verification works for the TOC JSON of the passed
+// stargz. It walks all entries in the stargz and checks that verification
+// succeeds for all chunks.
+func checkVerifyTOC(t *testing.T, sgzData []byte, tocDigest digest.Digest, dgstMap map[string]digest.Digest, controller TestingController, newController TestingControllerFactory) {
+ sgz, err := Open(
+ io.NewSectionReader(bytes.NewReader(sgzData), 0, int64(len(sgzData))),
+ WithDecompressors(controller),
+ )
+ if err != nil {
+ t.Errorf("failed to parse converted stargz: %v", err)
+ return
+ }
+ ev, err := sgz.VerifyTOC(tocDigest)
+ if err != nil {
+ t.Errorf("failed to verify stargz: %v", err)
+ return
+ }
+
+ found := make(map[string]bool)
+ for id := range dgstMap {
+ found[id] = false
+ }
+ zr, err := controller.Reader(bytes.NewReader(sgzData))
+ if err != nil {
+ t.Fatalf("failed to decompress converted stargz: %v", err)
+ }
+ defer zr.Close()
+ tr := tar.NewReader(zr)
+ for {
+ h, err := tr.Next()
+ if err != nil {
+ if err != io.EOF {
+ t.Errorf("failed to read tar entry: %v", err)
+ return
+ }
+ break
+ }
+ if h.Name == TOCTarName {
+ continue
+ }
+ if _, ok := sgz.Lookup(h.Name); !ok {
+ t.Errorf("lost stargz entry %q in the converted TOC", h.Name)
+ return
+ }
+ var n int64
+ for n < h.Size {
+ ce, ok := sgz.ChunkEntryForOffset(h.Name, n)
+ if !ok {
+ t.Errorf("lost chunk %q(offset=%d) in the converted TOC",
+ h.Name, n)
+ return
+ }
+
+ v, err := ev.Verifier(ce)
+ if err != nil {
+ t.Errorf("failed to get verifier for %q(offset=%d)", h.Name, n)
+ }
+
+ found[chunkID(h.Name, n, ce.ChunkSize)] = true
+
+ // Check the file contents
+ if _, err := io.CopyN(v, tr, ce.ChunkSize); err != nil {
+ t.Fatalf("failed to get chunk of %q (offset=%d,size=%d)",
+ h.Name, n, ce.ChunkSize)
+ }
+ if !v.Verified() {
+ t.Errorf("Invalid contents in converted stargz %q (should be succeeded)",
+ h.Name)
+ return
+ }
+ n += ce.ChunkSize
+ }
+ }
+
+ for id, ok := range found {
+ if !ok {
+ t.Errorf("required chunk %q not found in the converted stargz: %v", id, found)
+ }
+ }
+}
+
+// checkVerifyInvalidTOCEntryFail checks that a misconfigured TOC JSON is
+// detected during verification and that verification returns an error.
+func checkVerifyInvalidTOCEntryFail(filename string) check {
+ return func(t *testing.T, sgzData []byte, tocDigest digest.Digest, dgstMap map[string]digest.Digest, controller TestingController, newController TestingControllerFactory) {
+ funcs := map[string]rewriteFunc{
+ "lost digest in a entry": func(t *testing.T, toc *JTOC, sgz *io.SectionReader) {
+ var found bool
+ for _, e := range toc.Entries {
+ if cleanEntryName(e.Name) == filename {
+ if e.Type != "reg" && e.Type != "chunk" {
+ t.Fatalf("entry %q to break must be regfile or chunk", filename)
+ }
+ if e.ChunkDigest == "" {
+ t.Fatalf("entry %q is already invalid", filename)
+ }
+ e.ChunkDigest = ""
+ found = true
+ }
+ }
+ if !found {
+ t.Fatalf("rewrite target not found")
+ }
+ },
+ "duplicated entry offset": func(t *testing.T, toc *JTOC, sgz *io.SectionReader) {
+ var (
+ sampleEntry *TOCEntry
+ targetEntry *TOCEntry
+ )
+ for _, e := range toc.Entries {
+ if e.Type == "reg" || e.Type == "chunk" {
+ if cleanEntryName(e.Name) == filename {
+ targetEntry = e
+ } else {
+ sampleEntry = e
+ }
+ }
+ }
+ if sampleEntry == nil {
+ t.Fatalf("TOC must contain at least one regfile or chunk entry other than the rewrite target")
+ }
+ if targetEntry == nil {
+ t.Fatalf("rewrite target not found")
+ }
+ targetEntry.Offset = sampleEntry.Offset
+ },
+ }
+
+ for name, rFunc := range funcs {
+ t.Run(name, func(t *testing.T) {
+ newSgz, newTocDigest := rewriteTOCJSON(t, io.NewSectionReader(bytes.NewReader(sgzData), 0, int64(len(sgzData))), rFunc, controller)
+ buf := new(bytes.Buffer)
+ if _, err := io.Copy(buf, newSgz); err != nil {
+ t.Fatalf("failed to get converted stargz")
+ }
+ isgz := buf.Bytes()
+
+ sgz, err := Open(
+ io.NewSectionReader(bytes.NewReader(isgz), 0, int64(len(isgz))),
+ WithDecompressors(controller),
+ )
+ if err != nil {
+ t.Fatalf("failed to parse converted stargz: %v", err)
+ return
+ }
+ _, err = sgz.VerifyTOC(newTocDigest)
+ if err == nil {
+ t.Errorf("must fail for invalid TOC")
+ return
+ }
+ })
+ }
+ }
+}
+
+// checkVerifyInvalidStargzFail checks that verification detects that the
+// given stargz file doesn't match the expected digest and returns an error.
+func checkVerifyInvalidStargzFail(invalid *io.SectionReader) check {
+ return func(t *testing.T, sgzData []byte, tocDigest digest.Digest, dgstMap map[string]digest.Digest, controller TestingController, newController TestingControllerFactory) {
+ cl := newController()
+ rc, err := Build(invalid, WithChunkSize(chunkSize), WithCompression(cl))
+ if err != nil {
+ t.Fatalf("failed to convert stargz: %v", err)
+ }
+ defer rc.Close()
+ buf := new(bytes.Buffer)
+ if _, err := io.Copy(buf, rc); err != nil {
+ t.Fatalf("failed to copy built stargz blob: %v", err)
+ }
+ mStargz := buf.Bytes()
+
+ sgz, err := Open(
+ io.NewSectionReader(bytes.NewReader(mStargz), 0, int64(len(mStargz))),
+ WithDecompressors(cl),
+ )
+ if err != nil {
+ t.Fatalf("failed to parse converted stargz: %v", err)
+ return
+ }
+ _, err = sgz.VerifyTOC(tocDigest)
+ if err == nil {
+ t.Errorf("must fail for invalid TOC")
+ return
+ }
+ }
+}
+
+// checkVerifyBrokenContentFail checks that the verifier detects broken contents
+// that don't match the expected digest and returns an error.
+func checkVerifyBrokenContentFail(filename string) check {
+ return func(t *testing.T, sgzData []byte, tocDigest digest.Digest, dgstMap map[string]digest.Digest, controller TestingController, newController TestingControllerFactory) {
+ // Parse stargz file
+ sgz, err := Open(
+ io.NewSectionReader(bytes.NewReader(sgzData), 0, int64(len(sgzData))),
+ WithDecompressors(controller),
+ )
+ if err != nil {
+ t.Fatalf("failed to parse converted stargz: %v", err)
+ return
+ }
+ ev, err := sgz.VerifyTOC(tocDigest)
+ if err != nil {
+ t.Fatalf("failed to verify stargz: %v", err)
+ return
+ }
+
+ // Open the target file
+ sr, err := sgz.OpenFile(filename)
+ if err != nil {
+ t.Fatalf("failed to open file %q", filename)
+ }
+ ce, ok := sgz.ChunkEntryForOffset(filename, 0)
+ if !ok {
+ t.Fatalf("lost chunk %q(offset=%d) in the converted TOC", filename, 0)
+ return
+ }
+ if ce.ChunkSize == 0 {
+ t.Fatalf("file mustn't be empty")
+ return
+ }
+ data := make([]byte, ce.ChunkSize)
+ if _, err := sr.ReadAt(data, ce.ChunkOffset); err != nil {
+ t.Errorf("failed to get data of a chunk of %q(offset=%q)",
+ filename, ce.ChunkOffset)
+ }
+
+ // Check the broken chunk (must fail)
+ v, err := ev.Verifier(ce)
+ if err != nil {
+ t.Fatalf("failed to get verifier for %q", filename)
+ }
+ broken := append([]byte{^data[0]}, data[1:]...)
+ if _, err := io.CopyN(v, bytes.NewReader(broken), ce.ChunkSize); err != nil {
+ t.Fatalf("failed to get chunk of %q (offset=%d,size=%d)",
+ filename, ce.ChunkOffset, ce.ChunkSize)
+ }
+ if v.Verified() {
+ t.Errorf("verification must fail for broken file chunk %q(org:%q,broken:%q)",
+ filename, data, broken)
+ }
+ }
+}
+
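+// chunkID returns the key used in the digest maps for the chunk of "name" that
+// starts at "offset" with the given "size".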
+func chunkID(name string, offset, size int64) string {
+ return fmt.Sprintf("%s-%d-%d", cleanEntryName(name), offset, size)
+}
+
+type rewriteFunc func(t *testing.T, toc *JTOC, sgz *io.SectionReader)
+
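+// rewriteTOCJSON decodes the TOC of the given stargz, applies the rewrite function
+// to it, and returns a reader for the blob reconstructed with the modified TOC and
+// footer, together with the new TOC digest.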
+func rewriteTOCJSON(t *testing.T, sgz *io.SectionReader, rewrite rewriteFunc, controller TestingController) (newSgz io.Reader, tocDigest digest.Digest) {
+ decodedJTOC, jtocOffset, err := parseStargz(sgz, controller)
+ if err != nil {
+ t.Fatalf("failed to extract TOC JSON: %v", err)
+ }
+
+ rewrite(t, decodedJTOC, sgz)
+
+ tocFooter, tocDigest, err := tocAndFooter(controller, decodedJTOC, jtocOffset)
+ if err != nil {
+ t.Fatalf("failed to create toc and footer: %v", err)
+ }
+
+ // Reconstruct stargz file with the modified TOC JSON
+ if _, err := sgz.Seek(0, io.SeekStart); err != nil {
+ t.Fatalf("failed to reset the seek position of stargz: %v", err)
+ }
+ return io.MultiReader(
+ io.LimitReader(sgz, jtocOffset), // Original stargz (before TOC JSON)
+ tocFooter, // Rewritten TOC and footer
+ ), tocDigest
+}
+
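+// listDigests parses the TOC JSON of the given stargz and returns a map from TOC
+// entry offset to the chunk digest recorded for that entry (regular file and chunk
+// entries; empty files are skipped).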
+func listDigests(sgz *io.SectionReader, controller TestingController) (map[int64]digest.Digest, error) {
+ decodedJTOC, _, err := parseStargz(sgz, controller)
+ if err != nil {
+ return nil, err
+ }
+ digestMap := make(map[int64]digest.Digest)
+ for _, e := range decodedJTOC.Entries {
+ if e.Type == "reg" || e.Type == "chunk" {
+ if e.Type == "reg" && e.Size == 0 {
+ continue // ignores empty file
+ }
+ if e.ChunkDigest == "" {
+ return nil, fmt.Errorf("ChunkDigest of %q(off=%d) not found in TOC JSON",
+ e.Name, e.Offset)
+ }
+ d, err := digest.Parse(e.ChunkDigest)
+ if err != nil {
+ return nil, err
+ }
+ digestMap[e.Offset] = d
+ }
+ }
+ return digestMap, nil
+}
+
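+// parseStargz reads the footer of the given stargz blob with the controller,
+// locates and decodes the TOC JSON, and returns the decoded TOC together with
+// its offset within the blob.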
+func parseStargz(sgz *io.SectionReader, controller TestingController) (decodedJTOC *JTOC, jtocOffset int64, err error) {
+ fSize := controller.FooterSize()
+ footer := make([]byte, fSize)
+ if _, err := sgz.ReadAt(footer, sgz.Size()-fSize); err != nil {
+ return nil, 0, fmt.Errorf("error reading footer: %w", err)
+ }
+ _, tocOffset, _, err := controller.ParseFooter(footer[positive(int64(len(footer))-fSize):])
+ if err != nil {
+ return nil, 0, fmt.Errorf("failed to parse footer: %w", err)
+ }
+
+ // Decode the TOC JSON
+ var tocReader io.Reader
+ if tocOffset >= 0 {
+ tocReader = io.NewSectionReader(sgz, tocOffset, sgz.Size()-tocOffset-fSize)
+ }
+ decodedJTOC, _, err = controller.ParseTOC(tocReader)
+ if err != nil {
+ return nil, 0, fmt.Errorf("failed to parse TOC: %w", err)
+ }
+ return decodedJTOC, tocOffset, nil
+}
+
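+// testWriteAndOpen tests that the Writer produces valid eStargz blobs from various
+// tar layouts and that the Reader can open them, checking the TOC entries, the
+// number of compression streams, the DiffID, and the telemetry hooks.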
+func testWriteAndOpen(t *testing.T, controllers ...TestingControllerFactory) {
+ const content = "Some contents"
+ invalidUtf8 := "\xff\xfe\xfd"
+
+ xAttrFile := xAttr{"foo": "bar", "invalid-utf8": invalidUtf8}
+ sampleOwner := owner{uid: 50, gid: 100}
+
+ data64KB := randomContents(64000)
+
+ tests := []struct {
+ name string
+ chunkSize int
+ minChunkSize int
+ in []tarEntry
+ want []stargzCheck
+ wantNumGz int // expected number of streams
+
+ wantNumGzLossLess int // expected number of streams (> 0) in lossless mode if it's different from wantNumGz
+ wantFailOnLossLess bool
+ wantTOCVersion int // default = 1
+ }{
+ {
+ name: "empty",
+ in: tarOf(),
+ wantNumGz: 2, // (empty tar) + TOC + footer
+ want: checks(
+ numTOCEntries(0),
+ ),
+ },
+ {
+ name: "1dir_1empty_file",
+ in: tarOf(
+ dir("foo/"),
+ file("foo/bar.txt", ""),
+ ),
+ wantNumGz: 3, // dir, TOC, footer
+ want: checks(
+ numTOCEntries(2),
+ hasDir("foo/"),
+ hasFileLen("foo/bar.txt", 0),
+ entryHasChildren("foo", "bar.txt"),
+ hasFileDigest("foo/bar.txt", digestFor("")),
+ ),
+ },
+ {
+ name: "1dir_1file",
+ in: tarOf(
+ dir("foo/"),
+ file("foo/bar.txt", content, xAttrFile),
+ ),
+ wantNumGz: 4, // foo dir, bar.txt alone, TOC, footer
+ want: checks(
+ numTOCEntries(2),
+ hasDir("foo/"),
+ hasFileLen("foo/bar.txt", len(content)),
+ hasFileDigest("foo/bar.txt", digestFor(content)),
+ hasFileContentsRange("foo/bar.txt", 0, content),
+ hasFileContentsRange("foo/bar.txt", 1, content[1:]),
+ entryHasChildren("", "foo"),
+ entryHasChildren("foo", "bar.txt"),
+ hasFileXattrs("foo/bar.txt", "foo", "bar"),
+ hasFileXattrs("foo/bar.txt", "invalid-utf8", invalidUtf8),
+ ),
+ },
+ {
+ name: "2meta_2file",
+ in: tarOf(
+ dir("bar/", sampleOwner),
+ dir("foo/", sampleOwner),
+ file("foo/bar.txt", content, sampleOwner),
+ ),
+ wantNumGz: 4, // both dirs, bar.txt alone, TOC, footer
+ want: checks(
+ numTOCEntries(3),
+ hasDir("bar/"),
+ hasDir("foo/"),
+ hasFileLen("foo/bar.txt", len(content)),
+ entryHasChildren("", "bar", "foo"),
+ entryHasChildren("foo", "bar.txt"),
+ hasChunkEntries("foo/bar.txt", 1),
+ hasEntryOwner("bar/", sampleOwner),
+ hasEntryOwner("foo/", sampleOwner),
+ hasEntryOwner("foo/bar.txt", sampleOwner),
+ ),
+ },
+ {
+ name: "3dir",
+ in: tarOf(
+ dir("bar/"),
+ dir("foo/"),
+ dir("foo/bar/"),
+ ),
+ wantNumGz: 3, // 3 dirs, TOC, footer
+ want: checks(
+ hasDirLinkCount("bar/", 2),
+ hasDirLinkCount("foo/", 3),
+ hasDirLinkCount("foo/bar/", 2),
+ ),
+ },
+ {
+ name: "symlink",
+ in: tarOf(
+ dir("foo/"),
+ symlink("foo/bar", "../../x"),
+ ),
+ wantNumGz: 3, // metas + TOC + footer
+ want: checks(
+ numTOCEntries(2),
+ hasSymlink("foo/bar", "../../x"),
+ entryHasChildren("", "foo"),
+ entryHasChildren("foo", "bar"),
+ ),
+ },
+ {
+ name: "chunked_file",
+ chunkSize: 4,
+ in: tarOf(
+ dir("foo/"),
+ file("foo/big.txt", "This "+"is s"+"uch "+"a bi"+"g fi"+"le"),
+ ),
+ wantNumGz: 9, // dir + big.txt(6 chunks) + TOC + footer
+ want: checks(
+ numTOCEntries(7), // 1 for foo dir, 6 for the foo/big.txt file
+ hasDir("foo/"),
+ hasFileLen("foo/big.txt", len("This is such a big file")),
+ hasFileDigest("foo/big.txt", digestFor("This is such a big file")),
+ hasFileContentsRange("foo/big.txt", 0, "This is such a big file"),
+ hasFileContentsRange("foo/big.txt", 1, "his is such a big file"),
+ hasFileContentsRange("foo/big.txt", 2, "is is such a big file"),
+ hasFileContentsRange("foo/big.txt", 3, "s is such a big file"),
+ hasFileContentsRange("foo/big.txt", 4, " is such a big file"),
+ hasFileContentsRange("foo/big.txt", 5, "is such a big file"),
+ hasFileContentsRange("foo/big.txt", 6, "s such a big file"),
+ hasFileContentsRange("foo/big.txt", 7, " such a big file"),
+ hasFileContentsRange("foo/big.txt", 8, "such a big file"),
+ hasFileContentsRange("foo/big.txt", 9, "uch a big file"),
+ hasFileContentsRange("foo/big.txt", 10, "ch a big file"),
+ hasFileContentsRange("foo/big.txt", 11, "h a big file"),
+ hasFileContentsRange("foo/big.txt", 12, " a big file"),
+ hasFileContentsRange("foo/big.txt", len("This is such a big file")-1, ""),
+ hasChunkEntries("foo/big.txt", 6),
+ ),
+ },
+ {
+ name: "recursive",
+ in: tarOf(
+ dir("/", sampleOwner),
+ dir("bar/", sampleOwner),
+ dir("foo/", sampleOwner),
+ file("foo/bar.txt", content, sampleOwner),
+ ),
+ wantNumGz: 4, // dirs, bar.txt alone, TOC, footer
+ want: checks(
+ maxDepth(2), // 0: root directory, 1: "foo/", 2: "bar.txt"
+ ),
+ },
+ {
+ name: "block_char_fifo",
+ in: tarOf(
+ tarEntryFunc(func(w *tar.Writer, prefix string, format tar.Format) error {
+ return w.WriteHeader(&tar.Header{
+ Name: prefix + "b",
+ Typeflag: tar.TypeBlock,
+ Devmajor: 123,
+ Devminor: 456,
+ Format: format,
+ })
+ }),
+ tarEntryFunc(func(w *tar.Writer, prefix string, format tar.Format) error {
+ return w.WriteHeader(&tar.Header{
+ Name: prefix + "c",
+ Typeflag: tar.TypeChar,
+ Devmajor: 111,
+ Devminor: 222,
+ Format: format,
+ })
+ }),
+ tarEntryFunc(func(w *tar.Writer, prefix string, format tar.Format) error {
+ return w.WriteHeader(&tar.Header{
+ Name: prefix + "f",
+ Typeflag: tar.TypeFifo,
+ Format: format,
+ })
+ }),
+ ),
+ wantNumGz: 3,
+ want: checks(
+ lookupMatch("b", &TOCEntry{Name: "b", Type: "block", DevMajor: 123, DevMinor: 456, NumLink: 1}),
+ lookupMatch("c", &TOCEntry{Name: "c", Type: "char", DevMajor: 111, DevMinor: 222, NumLink: 1}),
+ lookupMatch("f", &TOCEntry{Name: "f", Type: "fifo", NumLink: 1}),
+ ),
+ },
+ {
+ name: "modes",
+ in: tarOf(
+ dir("foo1/", 0755|os.ModeDir|os.ModeSetgid),
+ file("foo1/bar1", content, 0700|os.ModeSetuid),
+ file("foo1/bar2", content, 0755|os.ModeSetgid),
+ dir("foo2/", 0755|os.ModeDir|os.ModeSticky),
+ file("foo2/bar3", content, 0755|os.ModeSticky),
+ dir("foo3/", 0755|os.ModeDir),
+ file("foo3/bar4", content, os.FileMode(0700)),
+ file("foo3/bar5", content, os.FileMode(0755)),
+ ),
+ wantNumGz: 8, // dir, bar1 alone, bar2 alone + dir, bar3 alone + dir, bar4 alone, bar5 alone, TOC, footer
+ want: checks(
+ hasMode("foo1/", 0755|os.ModeDir|os.ModeSetgid),
+ hasMode("foo1/bar1", 0700|os.ModeSetuid),
+ hasMode("foo1/bar2", 0755|os.ModeSetgid),
+ hasMode("foo2/", 0755|os.ModeDir|os.ModeSticky),
+ hasMode("foo2/bar3", 0755|os.ModeSticky),
+ hasMode("foo3/", 0755|os.ModeDir),
+ hasMode("foo3/bar4", os.FileMode(0700)),
+ hasMode("foo3/bar5", os.FileMode(0755)),
+ ),
+ },
+ {
+ name: "lossy",
+ in: tarOf(
+ dir("bar/", sampleOwner),
+ dir("foo/", sampleOwner),
+ file("foo/bar.txt", content, sampleOwner),
+ file(TOCTarName, "dummy"), // ignored by the writer. (lossless write returns error)
+ ),
+ wantNumGz: 4, // both dirs, bar.txt alone, TOC, footer
+ want: checks(
+ numTOCEntries(3),
+ hasDir("bar/"),
+ hasDir("foo/"),
+ hasFileLen("foo/bar.txt", len(content)),
+ entryHasChildren("", "bar", "foo"),
+ entryHasChildren("foo", "bar.txt"),
+ hasChunkEntries("foo/bar.txt", 1),
+ hasEntryOwner("bar/", sampleOwner),
+ hasEntryOwner("foo/", sampleOwner),
+ hasEntryOwner("foo/bar.txt", sampleOwner),
+ ),
+ wantFailOnLossLess: true,
+ },
+ {
+ name: "hardlink should be replaced to the destination entry",
+ in: tarOf(
+ dir("foo/"),
+ file("foo/foo1", "test"),
+ link("foolink", "foo/foo1"),
+ ),
+ wantNumGz: 4, // dir, foo1 + link, TOC, footer
+ want: checks(
+ mustSameEntry("foo/foo1", "foolink"),
+ ),
+ },
+ {
+ name: "several_files_in_chunk",
+ minChunkSize: 8000,
+ in: tarOf(
+ dir("foo/"),
+ file("foo/foo1", data64KB),
+ file("foo2", "bb"),
+ file("foo22", "ccc"),
+ dir("bar/"),
+ file("bar/bar.txt", "aaa"),
+ file("foo3", data64KB),
+ ),
+ // NOTE: we assume that the compressed "data64KB" is still larger than 8KB
+ wantNumGz: 4, // dir+foo1, foo2+foo22+dir+bar.txt+foo3, TOC, footer
+ want: checks(
+ numTOCEntries(7), // dir, foo1, foo2, foo22, dir, bar.txt, foo3
+ hasDir("foo/"),
+ hasDir("bar/"),
+ hasFileLen("foo/foo1", len(data64KB)),
+ hasFileLen("foo2", len("bb")),
+ hasFileLen("foo22", len("ccc")),
+ hasFileLen("bar/bar.txt", len("aaa")),
+ hasFileLen("foo3", len(data64KB)),
+ hasFileDigest("foo/foo1", digestFor(data64KB)),
+ hasFileDigest("foo2", digestFor("bb")),
+ hasFileDigest("foo22", digestFor("ccc")),
+ hasFileDigest("bar/bar.txt", digestFor("aaa")),
+ hasFileDigest("foo3", digestFor(data64KB)),
+ hasFileContentsWithPreRead("foo22", 0, "ccc", chunkInfo{"foo2", "bb"}, chunkInfo{"bar/bar.txt", "aaa"}, chunkInfo{"foo3", data64KB}),
+ hasFileContentsRange("foo/foo1", 0, data64KB),
+ hasFileContentsRange("foo2", 0, "bb"),
+ hasFileContentsRange("foo2", 1, "b"),
+ hasFileContentsRange("foo22", 0, "ccc"),
+ hasFileContentsRange("foo22", 1, "cc"),
+ hasFileContentsRange("foo22", 2, "c"),
+ hasFileContentsRange("bar/bar.txt", 0, "aaa"),
+ hasFileContentsRange("bar/bar.txt", 1, "aa"),
+ hasFileContentsRange("bar/bar.txt", 2, "a"),
+ hasFileContentsRange("foo3", 0, data64KB),
+ hasFileContentsRange("foo3", 1, data64KB[1:]),
+ hasFileContentsRange("foo3", 2, data64KB[2:]),
+ hasFileContentsRange("foo3", len(data64KB)/2, data64KB[len(data64KB)/2:]),
+ hasFileContentsRange("foo3", len(data64KB)-1, data64KB[len(data64KB)-1:]),
+ ),
+ },
+ {
+ name: "several_files_in_chunk_chunked",
+ minChunkSize: 8000,
+ chunkSize: 32000,
+ in: tarOf(
+ dir("foo/"),
+ file("foo/foo1", data64KB),
+ file("foo2", "bb"),
+ dir("bar/"),
+ file("foo3", data64KB),
+ ),
+ // NOTE: we assume that the compressed chunk of "data64KB" is still larger than 8KB
+ wantNumGz: 6, // dir+foo1(1), foo1(2), foo2+dir+foo3(1), foo3(2), TOC, footer
+ want: checks(
+ numTOCEntries(7), // dir, foo1(2 chunks), foo2, dir, foo3(2 chunks)
+ hasDir("foo/"),
+ hasDir("bar/"),
+ hasFileLen("foo/foo1", len(data64KB)),
+ hasFileLen("foo2", len("bb")),
+ hasFileLen("foo3", len(data64KB)),
+ hasFileDigest("foo/foo1", digestFor(data64KB)),
+ hasFileDigest("foo2", digestFor("bb")),
+ hasFileDigest("foo3", digestFor(data64KB)),
+ hasFileContentsWithPreRead("foo2", 0, "bb", chunkInfo{"foo3", data64KB[:32000]}),
+ hasFileContentsRange("foo/foo1", 0, data64KB),
+ hasFileContentsRange("foo/foo1", 1, data64KB[1:]),
+ hasFileContentsRange("foo/foo1", 2, data64KB[2:]),
+ hasFileContentsRange("foo/foo1", len(data64KB)/2, data64KB[len(data64KB)/2:]),
+ hasFileContentsRange("foo/foo1", len(data64KB)-1, data64KB[len(data64KB)-1:]),
+ hasFileContentsRange("foo2", 0, "bb"),
+ hasFileContentsRange("foo2", 1, "b"),
+ hasFileContentsRange("foo3", 0, data64KB),
+ hasFileContentsRange("foo3", 1, data64KB[1:]),
+ hasFileContentsRange("foo3", 2, data64KB[2:]),
+ hasFileContentsRange("foo3", len(data64KB)/2, data64KB[len(data64KB)/2:]),
+ hasFileContentsRange("foo3", len(data64KB)-1, data64KB[len(data64KB)-1:]),
+ ),
+ },
+ }
+
+ for _, tt := range tests {
+ for _, newCL := range controllers {
+ newCL := newCL
+ for _, prefix := range allowedPrefix {
+ prefix := prefix
+ for _, srcTarFormat := range []tar.Format{tar.FormatUSTAR, tar.FormatPAX, tar.FormatGNU} {
+ srcTarFormat := srcTarFormat
+ for _, lossless := range []bool{true, false} {
+ t.Run(tt.name+"-"+fmt.Sprintf("compression=%v,prefix=%q,lossless=%v,format=%s", newCL(), prefix, lossless, srcTarFormat), func(t *testing.T) {
+ var tr io.Reader = buildTar(t, tt.in, prefix, srcTarFormat)
+ origTarDgstr := digest.Canonical.Digester()
+ tr = io.TeeReader(tr, origTarDgstr.Hash())
+ var stargzBuf bytes.Buffer
+ cl1 := newCL()
+ w := NewWriterWithCompressor(&stargzBuf, cl1)
+ w.ChunkSize = tt.chunkSize
+ w.MinChunkSize = tt.minChunkSize
+ if lossless {
+ err := w.AppendTarLossLess(tr)
+ if tt.wantFailOnLossLess {
+ if err != nil {
+ return // expected to fail
+ }
+ t.Fatalf("Append wanted to fail on lossless")
+ }
+ if err != nil {
+ t.Fatalf("Append(lossless): %v", err)
+ }
+ } else {
+ if err := w.AppendTar(tr); err != nil {
+ t.Fatalf("Append: %v", err)
+ }
+ }
+ if _, err := w.Close(); err != nil {
+ t.Fatalf("Writer.Close: %v", err)
+ }
+ b := stargzBuf.Bytes()
+
+ if lossless {
+ // Check if the result blob preserves the original tar metadata
+ rc, err := Unpack(io.NewSectionReader(bytes.NewReader(b), 0, int64(len(b))), cl1)
+ if err != nil {
+ t.Errorf("failed to decompress blob: %v", err)
+ return
+ }
+ defer rc.Close()
+ resultDgstr := digest.Canonical.Digester()
+ if _, err := io.Copy(resultDgstr.Hash(), rc); err != nil {
+ t.Errorf("failed to read result decompressed blob: %v", err)
+ return
+ }
+ if resultDgstr.Digest() != origTarDgstr.Digest() {
+ t.Errorf("lossy compression occurred: digest=%v; want %v",
+ resultDgstr.Digest(), origTarDgstr.Digest())
+ return
+ }
+ }
+
+ diffID := w.DiffID()
+ wantDiffID := cl1.DiffIDOf(t, b)
+ if diffID != wantDiffID {
+ t.Errorf("DiffID = %q; want %q", diffID, wantDiffID)
+ }
+
+ telemetry, checkCalled := newCalledTelemetry()
+ sr := io.NewSectionReader(bytes.NewReader(b), 0, int64(len(b)))
+ r, err := Open(
+ sr,
+ WithDecompressors(cl1),
+ WithTelemetry(telemetry),
+ )
+ if err != nil {
+ t.Fatalf("stargz.Open: %v", err)
+ }
+ wantTOCVersion := 1
+ if tt.wantTOCVersion > 0 {
+ wantTOCVersion = tt.wantTOCVersion
+ }
+ if r.toc.Version != wantTOCVersion {
+ t.Fatalf("invalid TOC Version %d; wanted %d", r.toc.Version, wantTOCVersion)
+ }
+
+ footerSize := cl1.FooterSize()
+ footerOffset := sr.Size() - footerSize
+ footer := make([]byte, footerSize)
+ if _, err := sr.ReadAt(footer, footerOffset); err != nil {
+ t.Errorf("failed to read footer: %v", err)
+ }
+ _, tocOffset, _, err := cl1.ParseFooter(footer)
+ if err != nil {
+ t.Errorf("failed to parse footer: %v", err)
+ }
+ if err := checkCalled(tocOffset >= 0); err != nil {
+ t.Errorf("telemetry failure: %v", err)
+ }
+
+ wantNumGz := tt.wantNumGz
+ if lossless && tt.wantNumGzLossLess > 0 {
+ wantNumGz = tt.wantNumGzLossLess
+ }
+ streamOffsets := []int64{0}
+ prevOffset := int64(-1)
+ streams := 0
+ for _, e := range r.toc.Entries {
+ if e.Offset > prevOffset {
+ streamOffsets = append(streamOffsets, e.Offset)
+ prevOffset = e.Offset
+ streams++
+ }
+ }
+ streams++ // TOC
+ if tocOffset >= 0 {
+ // toc is in the blob
+ streamOffsets = append(streamOffsets, tocOffset)
+ }
+ streams++ // footer
+ streamOffsets = append(streamOffsets, footerOffset)
+ if streams != wantNumGz {
+ t.Errorf("number of streams in TOC = %d; want %d", streams, wantNumGz)
+ }
+
+ t.Logf("testing streams: %+v", streamOffsets)
+ cl1.TestStreams(t, b, streamOffsets)
+
+ for _, want := range tt.want {
+ want.check(t, r)
+ }
+ })
+ }
+ }
+ }
+ }
+ }
+}
+
+type chunkInfo struct {
+ name string
+ data string
+}
+
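+// newCalledTelemetry returns a Telemetry whose hooks record that they were invoked,
+// together with a check function that reports which expected hooks were not called
+// (the GetTocLatency hook is only required when needsGetTOC is true).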
+func newCalledTelemetry() (telemetry *Telemetry, check func(needsGetTOC bool) error) {
+ var getFooterLatencyCalled bool
+ var getTocLatencyCalled bool
+ var deserializeTocLatencyCalled bool
+ return &Telemetry{
+ func(time.Time) { getFooterLatencyCalled = true },
+ func(time.Time) { getTocLatencyCalled = true },
+ func(time.Time) { deserializeTocLatencyCalled = true },
+ }, func(needsGetTOC bool) error {
+ var allErr []error
+ if !getFooterLatencyCalled {
+ allErr = append(allErr, fmt.Errorf("metrics GetFooterLatency isn't called"))
+ }
+ if needsGetTOC {
+ if !getTocLatencyCalled {
+ allErr = append(allErr, fmt.Errorf("metrics GetTocLatency isn't called"))
+ }
+ }
+ if !deserializeTocLatencyCalled {
+ allErr = append(allErr, fmt.Errorf("metrics DeserializeTocLatency isn't called"))
+ }
+ return errorutil.Aggregate(allErr)
+ }
+}
+
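+// digestFor returns the sha256 digest string ("sha256:<hex>") of the given content.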
+func digestFor(content string) string {
+ sum := sha256.Sum256([]byte(content))
+ return fmt.Sprintf("sha256:%x", sum)
+}
+
+type numTOCEntries int
+
+func (n numTOCEntries) check(t *testing.T, r *Reader) {
+ if r.toc == nil {
+ t.Fatal("nil TOC")
+ }
+ if got, want := len(r.toc.Entries), int(n); got != want {
+ t.Errorf("got %d TOC entries; want %d", got, want)
+ }
+ t.Logf("got TOC entries:")
+ for i, ent := range r.toc.Entries {
+ entj, _ := json.Marshal(ent)
+ t.Logf(" [%d]: %s\n", i, entj)
+ }
+ if t.Failed() {
+ t.FailNow()
+ }
+}
+
+func checks(s ...stargzCheck) []stargzCheck { return s }
+
+type stargzCheck interface {
+ check(t *testing.T, r *Reader)
+}
+
+type stargzCheckFn func(*testing.T, *Reader)
+
+func (f stargzCheckFn) check(t *testing.T, r *Reader) { f(t, r) }
+
+func maxDepth(max int) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ e, ok := r.Lookup("")
+ if !ok {
+ t.Fatal("root directory not found")
+ }
+ d, err := getMaxDepth(t, e, 0, 10*max)
+ if err != nil {
+ t.Errorf("failed to get max depth (wanted %d): %v", max, err)
+ return
+ }
+ if d != max {
+ t.Errorf("invalid depth %d; want %d", d, max)
+ return
+ }
+ })
+}
+
+func getMaxDepth(t *testing.T, e *TOCEntry, current, limit int) (max int, rErr error) {
+ if current > limit {
+ return -1, fmt.Errorf("walkMaxDepth: exceeds limit: current:%d > limit:%d",
+ current, limit)
+ }
+ max = current
+ e.ForeachChild(func(baseName string, ent *TOCEntry) bool {
+ t.Logf("%q(basename:%q) is child of %q\n", ent.Name, baseName, e.Name)
+ d, err := getMaxDepth(t, ent, current+1, limit)
+ if err != nil {
+ rErr = err
+ return false
+ }
+ if d > max {
+ max = d
+ }
+ return true
+ })
+ return
+}
+
+func hasFileLen(file string, wantLen int) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ for _, ent := range r.toc.Entries {
+ if ent.Name == file {
+ if ent.Type != "reg" {
+ t.Errorf("file type of %q is %q; want \"reg\"", file, ent.Type)
+ } else if ent.Size != int64(wantLen) {
+ t.Errorf("file size of %q = %d; want %d", file, ent.Size, wantLen)
+ }
+ return
+ }
+ }
+ t.Errorf("file %q not found", file)
+ })
+}
+
+func hasFileXattrs(file, name, value string) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ for _, ent := range r.toc.Entries {
+ if ent.Name == file {
+ if ent.Type != "reg" {
+ t.Errorf("file type of %q is %q; want \"reg\"", file, ent.Type)
+ }
+ if ent.Xattrs == nil {
+ t.Errorf("file %q has no xattrs", file)
+ return
+ }
+ valueFound, found := ent.Xattrs[name]
+ if !found {
+ t.Errorf("file %q has no xattr %q", file, name)
+ return
+ }
+ if string(valueFound) != value {
+ t.Errorf("file %q has xattr %q with value %q instead of %q", file, name, valueFound, value)
+ }
+
+ return
+ }
+ }
+ t.Errorf("file %q not found", file)
+ })
+}
+
+func hasFileDigest(file string, digest string) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ ent, ok := r.Lookup(file)
+ if !ok {
+ t.Fatalf("didn't find TOCEntry for file %q", file)
+ }
+ if ent.Digest != digest {
+ t.Fatalf("Digest(%q) = %q, want %q", file, ent.Digest, digest)
+ }
+ })
+}
+
+func hasFileContentsWithPreRead(file string, offset int, want string, extra ...chunkInfo) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ extraMap := make(map[string]chunkInfo)
+ for _, e := range extra {
+ extraMap[e.name] = e
+ }
+ var extraNames []string
+ for n := range extraMap {
+ extraNames = append(extraNames, n)
+ }
+ f, err := r.OpenFileWithPreReader(file, func(e *TOCEntry, cr io.Reader) error {
+ t.Logf("On %q: got preread of %q", file, e.Name)
+ ex, ok := extraMap[e.Name]
+ if !ok {
+ t.Fatalf("fail on %q: unexpected entry %q: %+v, %+v", file, e.Name, e, extraNames)
+ }
+ got, err := io.ReadAll(cr)
+ if err != nil {
+ t.Fatalf("fail on %q: failed to read %q: %v", file, e.Name, err)
+ }
+ if ex.data != string(got) {
+ t.Fatalf("fail on %q: unexpected contents of %q: len=%d; want=%d", file, e.Name, len(got), len(ex.data))
+ }
+ delete(extraMap, e.Name)
+ return nil
+ })
+ if err != nil {
+ t.Fatal(err)
+ }
+ got := make([]byte, len(want))
+ n, err := f.ReadAt(got, int64(offset))
+ if err != nil {
+ t.Fatalf("ReadAt(len %d, offset %d, size %d) = %v, %v", len(got), offset, f.Size(), n, err)
+ }
+ if string(got) != want {
+ t.Fatalf("ReadAt(len %d, offset %d) = %q, want %q", len(got), offset, viewContent(got), viewContent([]byte(want)))
+ }
+ if len(extraMap) != 0 {
+ var exNames []string
+ for _, ex := range extraMap {
+ exNames = append(exNames, ex.name)
+ }
+ t.Fatalf("fail on %q: some entries aren't read: %+v", file, exNames)
+ }
+ })
+}
+
+func hasFileContentsRange(file string, offset int, want string) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ f, err := r.OpenFile(file)
+ if err != nil {
+ t.Fatal(err)
+ }
+ got := make([]byte, len(want))
+ n, err := f.ReadAt(got, int64(offset))
+ if err != nil {
+ t.Fatalf("ReadAt(len %d, offset %d) = %v, %v", len(got), offset, n, err)
+ }
+ if string(got) != want {
+ t.Fatalf("ReadAt(len %d, offset %d) = %q, want %q", len(got), offset, viewContent(got), viewContent([]byte(want)))
+ }
+ })
+}
+
+func hasChunkEntries(file string, wantChunks int) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ ent, ok := r.Lookup(file)
+ if !ok {
+ t.Fatalf("no file for %q", file)
+ }
+ if ent.Type != "reg" {
+ t.Fatalf("file %q has unexpected type %q; want reg", file, ent.Type)
+ }
+ chunks := r.getChunks(ent)
+ if len(chunks) != wantChunks {
+ t.Errorf("len(r.getChunks(%q)) = %d; want %d", file, len(chunks), wantChunks)
+ return
+ }
+ f := chunks[0]
+
+ var gotChunks []*TOCEntry
+ var last *TOCEntry
+ for off := int64(0); off < f.Size; off++ {
+ e, ok := r.ChunkEntryForOffset(file, off)
+ if !ok {
+ t.Errorf("no ChunkEntryForOffset at %d", off)
+ return
+ }
+ if last != e {
+ gotChunks = append(gotChunks, e)
+ last = e
+ }
+ }
+ if !reflect.DeepEqual(chunks, gotChunks) {
+ t.Errorf("gotChunks=%d, want=%d; contents mismatch", len(gotChunks), wantChunks)
+ }
+
+ // And verify the NextOffset
+ for i := 0; i < len(gotChunks)-1; i++ {
+ ci := gotChunks[i]
+ cnext := gotChunks[i+1]
+ if ci.NextOffset() != cnext.Offset {
+ t.Errorf("chunk %d NextOffset %d != next chunk's Offset of %d", i, ci.NextOffset(), cnext.Offset)
+ }
+ }
+ })
+}
+
+func entryHasChildren(dir string, want ...string) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ want := append([]string(nil), want...)
+ var got []string
+ ent, ok := r.Lookup(dir)
+ if !ok {
+ t.Fatalf("didn't find TOCEntry for dir node %q", dir)
+ }
+ for baseName := range ent.children {
+ got = append(got, baseName)
+ }
+ sort.Strings(got)
+ sort.Strings(want)
+ if !reflect.DeepEqual(got, want) {
+ t.Errorf("children of %q = %q; want %q", dir, got, want)
+ }
+ })
+}
+
+func hasDir(file string) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ for _, ent := range r.toc.Entries {
+ if ent.Name == cleanEntryName(file) {
+ if ent.Type != "dir" {
+ t.Errorf("file type of %q is %q; want \"dir\"", file, ent.Type)
+ }
+ return
+ }
+ }
+ t.Errorf("directory %q not found", file)
+ })
+}
+
+func hasDirLinkCount(file string, count int) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ for _, ent := range r.toc.Entries {
+ if ent.Name == cleanEntryName(file) {
+ if ent.Type != "dir" {
+ t.Errorf("file type of %q is %q; want \"dir\"", file, ent.Type)
+ return
+ }
+ if ent.NumLink != count {
+ t.Errorf("link count of %q = %d; want %d", file, ent.NumLink, count)
+ }
+ return
+ }
+ }
+ t.Errorf("directory %q not found", file)
+ })
+}
+
+func hasMode(file string, mode os.FileMode) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ for _, ent := range r.toc.Entries {
+ if ent.Name == cleanEntryName(file) {
+ if ent.Stat().Mode() != mode {
+ t.Errorf("invalid mode: got %v; want %v", ent.Stat().Mode(), mode)
+ return
+ }
+ return
+ }
+ }
+ t.Errorf("file %q not found", file)
+ })
+}
+
+func hasSymlink(file, target string) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ for _, ent := range r.toc.Entries {
+ if ent.Name == file {
+ if ent.Type != "symlink" {
+ t.Errorf("file type of %q is %q; want \"symlink\"", file, ent.Type)
+ } else if ent.LinkName != target {
+ t.Errorf("link target of symlink %q is %q; want %q", file, ent.LinkName, target)
+ }
+ return
+ }
+ }
+ t.Errorf("symlink %q not found", file)
+ })
+}
+
+func lookupMatch(name string, want *TOCEntry) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ e, ok := r.Lookup(name)
+ if !ok {
+ t.Fatalf("failed to Lookup entry %q", name)
+ }
+ if !reflect.DeepEqual(e, want) {
+ t.Errorf("entry %q mismatch.\n got: %+v\nwant: %+v\n", name, e, want)
+ }
+ })
+}
+
+func hasEntryOwner(entry string, owner owner) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ ent, ok := r.Lookup(strings.TrimSuffix(entry, "/"))
+ if !ok {
+ t.Errorf("entry %q not found", entry)
+ return
+ }
+ if ent.UID != owner.uid || ent.GID != owner.gid {
+ t.Errorf("entry %q has invalid owner (uid:%d, gid:%d) instead of (uid:%d, gid:%d)", entry, ent.UID, ent.GID, owner.uid, owner.gid)
+ return
+ }
+ })
+}
+
+func mustSameEntry(files ...string) stargzCheck {
+ return stargzCheckFn(func(t *testing.T, r *Reader) {
+ var first *TOCEntry
+ for _, f := range files {
+ if first == nil {
+ var ok bool
+ first, ok = r.Lookup(f)
+ if !ok {
+ t.Errorf("unknown first file on Lookup: %q", f)
+ return
+ }
+ }
+
+ // Test Lookup
+ e, ok := r.Lookup(f)
+ if !ok {
+ t.Errorf("unknown file on Lookup: %q", f)
+ return
+ }
+ if e != first {
+ t.Errorf("Lookup: %+v(%p) != %+v(%p)", e, e, first, first)
+ return
+ }
+
+ // Test LookupChild
+ pe, ok := r.Lookup(filepath.Dir(filepath.Clean(f)))
+ if !ok {
+ t.Errorf("failed to get parent of %q", f)
+ return
+ }
+ e, ok = pe.LookupChild(filepath.Base(filepath.Clean(f)))
+ if !ok {
+ t.Errorf("failed to get %q as the child of %+v", f, pe)
+ return
+ }
+ if e != first {
+ t.Errorf("LookupChild: %+v(%p) != %+v(%p)", e, e, first, first)
+ return
+ }
+
+ // Test ForeachChild
+ pe.ForeachChild(func(baseName string, e *TOCEntry) bool {
+ if baseName == filepath.Base(filepath.Clean(f)) {
+ if e != first {
+ t.Errorf("ForeachChild: %+v(%p) != %+v(%p)", e, e, first, first)
+ return false
+ }
+ }
+ return true
+ })
+ }
+ })
+}
+
+func viewContent(c []byte) string {
+ if len(c) < 100 {
+ return string(c)
+ }
+ return string(c[:50]) + "...(omit)..." + string(c[50:100])
+}
+
+func tarOf(s ...tarEntry) []tarEntry { return s }
+
+type tarEntry interface {
+ appendTar(tw *tar.Writer, prefix string, format tar.Format) error
+}
+
+type tarEntryFunc func(*tar.Writer, string, tar.Format) error
+
+func (f tarEntryFunc) appendTar(tw *tar.Writer, prefix string, format tar.Format) error {
+ return f(tw, prefix, format)
+}
+
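+// buildTar builds a tar archive from the given entries, prefixing each name with
+// "prefix" and using the optionally supplied tar.Format, and returns it as a
+// SectionReader with extra zero bytes appended at the tail (to check that lossless
+// mode preserves them).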
+func buildTar(t *testing.T, ents []tarEntry, prefix string, opts ...interface{}) *io.SectionReader {
+ format := tar.FormatUnknown
+ for _, opt := range opts {
+ switch v := opt.(type) {
+ case tar.Format:
+ format = v
+ default:
+ panic(fmt.Errorf("unsupported opt for buildTar: %v", opt))
+ }
+ }
+ buf := new(bytes.Buffer)
+ tw := tar.NewWriter(buf)
+ for _, ent := range ents {
+ if err := ent.appendTar(tw, prefix, format); err != nil {
+ t.Fatalf("building input tar: %v", err)
+ }
+ }
+ if err := tw.Close(); err != nil {
+ t.Errorf("closing write of input tar: %v", err)
+ }
+ data := append(buf.Bytes(), make([]byte, 100)...) // append empty bytes at the tail to see lossless works
+ return io.NewSectionReader(bytes.NewReader(data), 0, int64(len(data)))
+}
+
+func dir(name string, opts ...interface{}) tarEntry {
+ return tarEntryFunc(func(tw *tar.Writer, prefix string, format tar.Format) error {
+ var o owner
+ mode := os.FileMode(0755)
+ for _, opt := range opts {
+ switch v := opt.(type) {
+ case owner:
+ o = v
+ case os.FileMode:
+ mode = v
+ default:
+ return errors.New("unsupported opt")
+ }
+ }
+ if !strings.HasSuffix(name, "/") {
+ panic(fmt.Sprintf("missing trailing slash in dir %q ", name))
+ }
+ tm, err := fileModeToTarMode(mode)
+ if err != nil {
+ return err
+ }
+ return tw.WriteHeader(&tar.Header{
+ Typeflag: tar.TypeDir,
+ Name: prefix + name,
+ Mode: tm,
+ Uid: o.uid,
+ Gid: o.gid,
+ Format: format,
+ })
+ })
+}
+
+// xAttr is a set of extended attributes to set on test files created with the file func.
+type xAttr map[string]string
+
+// owner is the owner to set on test files and directories with the file and dir functions.
+type owner struct {
+ uid int
+ gid int
+}
+
+func file(name, contents string, opts ...interface{}) tarEntry {
+ return tarEntryFunc(func(tw *tar.Writer, prefix string, format tar.Format) error {
+ var xattrs xAttr
+ var o owner
+ mode := os.FileMode(0644)
+ for _, opt := range opts {
+ switch v := opt.(type) {
+ case xAttr:
+ xattrs = v
+ case owner:
+ o = v
+ case os.FileMode:
+ mode = v
+ default:
+ return errors.New("unsupported opt")
+ }
+ }
+ if strings.HasSuffix(name, "/") {
+ return fmt.Errorf("bogus trailing slash in file %q", name)
+ }
+ tm, err := fileModeToTarMode(mode)
+ if err != nil {
+ return err
+ }
+ if len(xattrs) > 0 {
+ format = tar.FormatPAX // only PAX supports xattrs
+ }
+ if err := tw.WriteHeader(&tar.Header{
+ Typeflag: tar.TypeReg,
+ Name: prefix + name,
+ Mode: tm,
+ Xattrs: xattrs,
+ Size: int64(len(contents)),
+ Uid: o.uid,
+ Gid: o.gid,
+ Format: format,
+ }); err != nil {
+ return err
+ }
+ _, err = io.WriteString(tw, contents)
+ return err
+ })
+}
+
+func symlink(name, target string) tarEntry {
+ return tarEntryFunc(func(tw *tar.Writer, prefix string, format tar.Format) error {
+ return tw.WriteHeader(&tar.Header{
+ Typeflag: tar.TypeSymlink,
+ Name: prefix + name,
+ Linkname: target,
+ Mode: 0644,
+ Format: format,
+ })
+ })
+}
+
+func link(name string, linkname string) tarEntry {
+ now := time.Now()
+ return tarEntryFunc(func(w *tar.Writer, prefix string, format tar.Format) error {
+ return w.WriteHeader(&tar.Header{
+ Typeflag: tar.TypeLink,
+ Name: prefix + name,
+ Linkname: linkname,
+ ModTime: now,
+ Format: format,
+ })
+ })
+}
+
+func chardev(name string, major, minor int64) tarEntry {
+ now := time.Now()
+ return tarEntryFunc(func(w *tar.Writer, prefix string, format tar.Format) error {
+ return w.WriteHeader(&tar.Header{
+ Typeflag: tar.TypeChar,
+ Name: prefix + name,
+ Devmajor: major,
+ Devminor: minor,
+ ModTime: now,
+ Format: format,
+ })
+ })
+}
+
+func blockdev(name string, major, minor int64) tarEntry {
+ now := time.Now()
+ return tarEntryFunc(func(w *tar.Writer, prefix string, format tar.Format) error {
+ return w.WriteHeader(&tar.Header{
+ Typeflag: tar.TypeBlock,
+ Name: prefix + name,
+ Devmajor: major,
+ Devminor: minor,
+ ModTime: now,
+ Format: format,
+ })
+ })
+}
+
+func fifo(name string) tarEntry {
+ now := time.Now()
+ return tarEntryFunc(func(w *tar.Writer, prefix string, format tar.Format) error {
+ return w.WriteHeader(&tar.Header{
+ Typeflag: tar.TypeFifo,
+ Name: prefix + name,
+ ModTime: now,
+ Format: format,
+ })
+ })
+}
+
+func prefetchLandmark() tarEntry {
+ return tarEntryFunc(func(w *tar.Writer, prefix string, format tar.Format) error {
+ if err := w.WriteHeader(&tar.Header{
+ Name: PrefetchLandmark,
+ Typeflag: tar.TypeReg,
+ Size: int64(len([]byte{landmarkContents})),
+ Format: format,
+ }); err != nil {
+ return err
+ }
+ contents := []byte{landmarkContents}
+ if _, err := io.CopyN(w, bytes.NewReader(contents), int64(len(contents))); err != nil {
+ return err
+ }
+ return nil
+ })
+}
+
+func noPrefetchLandmark() tarEntry {
+ return tarEntryFunc(func(w *tar.Writer, prefix string, format tar.Format) error {
+ if err := w.WriteHeader(&tar.Header{
+ Name: NoPrefetchLandmark,
+ Typeflag: tar.TypeReg,
+ Size: int64(len([]byte{landmarkContents})),
+ Format: format,
+ }); err != nil {
+ return err
+ }
+ contents := []byte{landmarkContents}
+ if _, err := io.CopyN(w, bytes.NewReader(contents), int64(len(contents))); err != nil {
+ return err
+ }
+ return nil
+ })
+}
+
+func regDigest(t *testing.T, name string, contentStr string, digestMap map[string]digest.Digest) tarEntry {
+ if digestMap == nil {
+ t.Fatalf("digest map mustn't be nil")
+ }
+ content := []byte(contentStr)
+
+ var n int64
+ for n < int64(len(content)) {
+ size := int64(chunkSize)
+ remain := int64(len(content)) - n
+ if remain < size {
+ size = remain
+ }
+ dgstr := digest.Canonical.Digester()
+ if _, err := io.CopyN(dgstr.Hash(), bytes.NewReader(content[n:n+size]), size); err != nil {
+ t.Fatalf("failed to calculate digest of %q (name=%q,offset=%d,size=%d)",
+ string(content[n:n+size]), name, n, size)
+ }
+ digestMap[chunkID(name, n, size)] = dgstr.Digest()
+ n += size
+ }
+
+ return tarEntryFunc(func(w *tar.Writer, prefix string, format tar.Format) error {
+ if err := w.WriteHeader(&tar.Header{
+ Typeflag: tar.TypeReg,
+ Name: prefix + name,
+ Size: int64(len(content)),
+ Format: format,
+ }); err != nil {
+ return err
+ }
+ if _, err := io.CopyN(w, bytes.NewReader(content), int64(len(content))); err != nil {
+ return err
+ }
+ return nil
+ })
+}
+
+var runes = []rune("1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
+
+func randomContents(n int) string {
+ b := make([]rune, n)
+ for i := range b {
+ b[i] = runes[rand.Intn(len(runes))]
+ }
+ return string(b)
+}
+
+func fileModeToTarMode(mode os.FileMode) (int64, error) {
+ h, err := tar.FileInfoHeader(fileInfoOnlyMode(mode), "")
+ if err != nil {
+ return 0, err
+ }
+ return h.Mode, nil
+}
+
+// fileInfoOnlyMode is an os.FileInfo implementation backed by os.FileMode that populates only the file mode.
+type fileInfoOnlyMode os.FileMode
+
+func (f fileInfoOnlyMode) Name() string { return "" }
+func (f fileInfoOnlyMode) Size() int64 { return 0 }
+func (f fileInfoOnlyMode) Mode() os.FileMode { return os.FileMode(f) }
+func (f fileInfoOnlyMode) ModTime() time.Time { return time.Now() }
+func (f fileInfoOnlyMode) IsDir() bool { return os.FileMode(f).IsDir() }
+func (f fileInfoOnlyMode) Sys() interface{} { return nil }
+
+func CheckGzipHasStreams(t *testing.T, b []byte, streams []int64) {
+ if len(streams) == 0 {
+ return // nop
+ }
+
+ wants := map[int64]struct{}{}
+ for _, s := range streams {
+ wants[s] = struct{}{}
+ }
+
+ len0 := len(b)
+ br := bytes.NewReader(b)
+ zr := new(gzip.Reader)
+ t.Logf("got gzip streams:")
+ numStreams := 0
+ for {
+ zoff := len0 - br.Len()
+ if err := zr.Reset(br); err != nil {
+ if err == io.EOF {
+ return
+ }
+ t.Fatalf("countStreams(gzip), Reset: %v", err)
+ }
+ zr.Multistream(false)
+ n, err := io.Copy(io.Discard, zr)
+ if err != nil {
+ t.Fatalf("countStreams(gzip), Copy: %v", err)
+ }
+ var extra string
+ if len(zr.Header.Extra) > 0 {
+ extra = fmt.Sprintf("; extra=%q", zr.Header.Extra)
+ }
+ t.Logf(" [%d] at %d in stargz, uncompressed length %d%s", numStreams, zoff, n, extra)
+ delete(wants, int64(zoff))
+ numStreams++
+ }
+}
+
+func GzipDiffIDOf(t *testing.T, b []byte) string {
+ h := sha256.New()
+ zr, err := gzip.NewReader(bytes.NewReader(b))
+ if err != nil {
+ t.Fatalf("diffIDOf(gzip): %v", err)
+ }
+ defer zr.Close()
+ if _, err := io.Copy(h, zr); err != nil {
+ t.Fatalf("diffIDOf(gzip).Copy: %v", err)
+ }
+ return fmt.Sprintf("sha256:%x", h.Sum(nil))
+}
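For illustration only (not part of the vendored source), here is a minimal sketch of how the test helpers above (`tarOf`, `dir`, `file`, `symlink`, `buildTar`) might be combined to assemble an input layer. It assumes it sits in the same package as the helpers, since they are unexported, and the entry names and contents are purely illustrative:

```go
// buildSampleLayer assembles a small tar layer for a test using the helpers above.
func buildSampleLayer(t *testing.T) *io.SectionReader {
	entries := tarOf(
		dir("usr/", os.FileMode(0755)),
		dir("usr/bin/", owner{uid: 0, gid: 0}),
		file("usr/bin/hello", "#!/bin/sh\necho hello\n", os.FileMode(0755)),
		symlink("usr/bin/hi", "hello"),
	)
	// The prefix is prepended to every entry name; PAX format keeps xattrs and long names intact.
	return buildTar(t, entries, "", tar.FormatPAX)
}
```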
diff --git a/vendor/github.com/containerd/stargz-snapshotter/estargz/types.go b/vendor/github.com/containerd/stargz-snapshotter/estargz/types.go
new file mode 100644
index 000000000000..57e0aa614e4a
--- /dev/null
+++ b/vendor/github.com/containerd/stargz-snapshotter/estargz/types.go
@@ -0,0 +1,342 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+/*
+ Copyright 2019 The Go Authors. All rights reserved.
+ Use of this source code is governed by a BSD-style
+ license that can be found in the LICENSE file.
+*/
+
+package estargz
+
+import (
+ "archive/tar"
+ "hash"
+ "io"
+ "os"
+ "path"
+ "time"
+
+ digest "github.com/opencontainers/go-digest"
+)
+
+const (
+ // TOCTarName is the name of the JSON file in the tar archive in the
+ // table of contents gzip stream.
+ TOCTarName = "stargz.index.json"
+
+ // FooterSize is the number of bytes in the footer
+ //
+ // The footer is an empty gzip stream with no compression and an Extra
+ // header of the form "%016xSTARGZ", where the 64 bit hex-encoded
+ // number is the offset to the gzip stream of JSON TOC.
+ //
+ // 51 comes from:
+ //
+ // 10 bytes gzip header
+ // 2 bytes XLEN (length of Extra field) = 26 (4 bytes header + 16 hex digits + len("STARGZ"))
+ // 2 bytes Extra: SI1 = 'S', SI2 = 'G'
+ // 2 bytes Extra: LEN = 22 (16 hex digits + len("STARGZ"))
+ // 22 bytes Extra: subfield = fmt.Sprintf("%016xSTARGZ", offsetOfTOC)
+ // 5 bytes flate header
+ // 8 bytes gzip footer
+ // (End of the eStargz blob)
+ //
+ // NOTE: For Extra fields, subfield IDs SI1='S' SI2='G' is used for eStargz.
+ FooterSize = 51
+
+ // legacyFooterSize is the number of bytes in the legacy stargz footer.
+ //
+ // 47 comes from:
+ //
+ // 10 byte gzip header +
+ // 2 byte (LE16) length of extra, encoding 22 (16 hex digits + len("STARGZ")) == "\x16\x00" +
+ // 22 bytes of extra (fmt.Sprintf("%016xSTARGZ", tocGzipOffset))
+ // 5 byte flate header
+ // 8 byte gzip footer (two little endian uint32s: digest, size)
+ legacyFooterSize = 47
+
+ // TOCJSONDigestAnnotation is an annotation for an image layer. This stores the
+ // digest of the TOC JSON.
+ // This annotation is valid only when it is specified in `.[]layers.annotations`
+ // of an image manifest.
+ TOCJSONDigestAnnotation = "containerd.io/snapshot/stargz/toc.digest"
+
+ // StoreUncompressedSizeAnnotation is an additional annotation key for eStargz to enable lazy
+ // pulling on containers/storage. Stargz Store is required to expose the layer's uncompressed size
+ // to the runtime, but the current OCI image spec doesn't ship this information by default, so we store it
+ // in this special annotation.
+ StoreUncompressedSizeAnnotation = "io.containers.estargz.uncompressed-size"
+
+ // PrefetchLandmark is a file entry which indicates the end position of
+ // prefetch in the stargz file.
+ PrefetchLandmark = ".prefetch.landmark"
+
+ // NoPrefetchLandmark is a file entry which indicates that no prefetch should
+ // occur in the stargz file.
+ NoPrefetchLandmark = ".no.prefetch.landmark"
+
+ landmarkContents = 0xf
+)
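The `FooterSize` comment above fully specifies the footer layout. As an illustration only (the library's actual footer writer and parser live in its gzip compression helpers, not here), the following self-contained sketch builds such a footer and reads the TOC offset back out of the gzip `Extra` field; names and the example offset are assumptions:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"strconv"
)

const footerSize = 51 // mirrors estargz.FooterSize

// makeFooter builds an empty, uncompressed gzip stream whose Extra field
// encodes the TOC offset, following the layout documented for FooterSize.
func makeFooter(tocOffset int64) ([]byte, error) {
	buf := new(bytes.Buffer)
	w, err := gzip.NewWriterLevel(buf, gzip.NoCompression)
	if err != nil {
		return nil, err
	}
	// Extra subfield: SI1='S', SI2='G', LEN=22 (little endian), then "%016xSTARGZ".
	w.Extra = append([]byte{'S', 'G', 22, 0}, []byte(fmt.Sprintf("%016xSTARGZ", tocOffset))...)
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// parseFooter recovers the TOC offset from such a footer.
func parseFooter(footer []byte) (int64, error) {
	zr, err := gzip.NewReader(bytes.NewReader(footer))
	if err != nil {
		return 0, err
	}
	defer zr.Close()
	extra := zr.Header.Extra
	if len(extra) != 4+22 || extra[0] != 'S' || extra[1] != 'G' || string(extra[4+16:]) != "STARGZ" {
		return 0, fmt.Errorf("unexpected footer Extra field %q", extra)
	}
	return strconv.ParseInt(string(extra[4:4+16]), 16, 64)
}

func main() {
	footer, err := makeFooter(0x1234) // illustrative TOC offset
	if err != nil {
		panic(err)
	}
	off, err := parseFooter(footer)
	if err != nil {
		panic(err)
	}
	fmt.Printf("footer is %d bytes (expect %d), TOC offset 0x%x\n", len(footer), footerSize, off)
}
```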
+
+// JTOC is the JSON-serialized table of contents index of the files in the stargz file.
+type JTOC struct {
+ Version int `json:"version"`
+ Entries []*TOCEntry `json:"entries"`
+}
+
+// TOCEntry is an entry in the stargz file's TOC (Table of Contents).
+type TOCEntry struct {
+ // Name is the tar entry's name. It is the complete path
+ // stored in the tar file, not just the base name.
+ Name string `json:"name"`
+
+ // Type is one of "dir", "reg", "symlink", "hardlink", "char",
+ // "block", "fifo", or "chunk".
+ // The "chunk" type is used for regular file data chunks past the first
+ // TOCEntry; the 2nd chunk and on have only Type ("chunk"), Offset,
+ // ChunkOffset, and ChunkSize populated.
+ Type string `json:"type"`
+
+ // Size, for regular files, is the logical size of the file.
+ Size int64 `json:"size,omitempty"`
+
+ // ModTime3339 is the modification time of the tar entry. Empty
+ // means zero or unknown. Otherwise it's in UTC RFC3339
+ // format. Use the ModTime method to access the time.Time value.
+ ModTime3339 string `json:"modtime,omitempty"`
+ modTime time.Time
+
+ // LinkName, for symlinks and hardlinks, is the link target.
+ LinkName string `json:"linkName,omitempty"`
+
+ // Mode is the permission and mode bits.
+ Mode int64 `json:"mode,omitempty"`
+
+ // UID is the user ID of the owner.
+ UID int `json:"uid,omitempty"`
+
+ // GID is the group ID of the owner.
+ GID int `json:"gid,omitempty"`
+
+ // Uname is the username of the owner.
+ //
+ // In the serialized JSON, this field may only be present for
+ // the first entry with the same UID.
+ Uname string `json:"userName,omitempty"`
+
+ // Gname is the group name of the owner.
+ //
+ // In the serialized JSON, this field may only be present for
+ // the first entry with the same GID.
+ Gname string `json:"groupName,omitempty"`
+
+ // Offset, for regular files, provides the offset in the
+ // stargz file to the file's data bytes. See ChunkOffset and
+ // ChunkSize.
+ Offset int64 `json:"offset,omitempty"`
+
+ // InnerOffset is an optional field that indicates the uncompressed offset
+ // of this "reg" or "chunk" payload in a stream that starts at Offset.
+ // This field makes it possible to put multiple "reg" or "chunk" payloads
+ // in one chunk that share the same Offset but have different InnerOffsets.
+ InnerOffset int64 `json:"innerOffset,omitempty"`
+
+ nextOffset int64 // the Offset of the next entry with a non-zero Offset
+
+ // DevMajor is the major device number for "char" and "block" types.
+ DevMajor int `json:"devMajor,omitempty"`
+
+ // DevMinor is the minor device number for "char" and "block" types.
+ DevMinor int `json:"devMinor,omitempty"`
+
+ // NumLink is the number of entry names pointing to this entry.
+ // Zero means one name references this entry.
+ // This field is calculated during runtime and not recorded in TOC JSON.
+ NumLink int `json:"-"`
+
+ // Xattrs are the extended attribute for the entry.
+ Xattrs map[string][]byte `json:"xattrs,omitempty"`
+
+ // Digest stores the OCI checksum for regular files payload.
+ // It has the form "sha256:abcdef01234....".
+ Digest string `json:"digest,omitempty"`
+
+ // ChunkOffset is non-zero if this is a chunk of a large,
+ // regular file. If so, the Offset is where the gzip header of
+ // ChunkSize bytes at ChunkOffset in Name begins.
+ //
+ // In serialized form, a "chunkSize" JSON field of zero means
+ // that the chunk goes to the end of the file. After reading
+ // from the stargz TOC, though, the ChunkSize is initialized
+ // to a non-zero value when Type is either "reg" or
+ // "chunk".
+ ChunkOffset int64 `json:"chunkOffset,omitempty"`
+ ChunkSize int64 `json:"chunkSize,omitempty"`
+
+ // ChunkDigest stores an OCI digest of the chunk. This must be formed
+ // as "sha256:0123abcd...".
+ ChunkDigest string `json:"chunkDigest,omitempty"`
+
+ children map[string]*TOCEntry
+
+ // chunkTopIndex is index of the entry where Offset starts in the blob.
+ chunkTopIndex int
+}
+
+// ModTime returns the entry's modification time.
+func (e *TOCEntry) ModTime() time.Time { return e.modTime }
+
+// NextOffset returns the position (relative to the start of the
+// stargz file) of the next gzip boundary after e.Offset.
+func (e *TOCEntry) NextOffset() int64 { return e.nextOffset }
+
+func (e *TOCEntry) addChild(baseName string, child *TOCEntry) {
+ if e.children == nil {
+ e.children = make(map[string]*TOCEntry)
+ }
+ if child.Type == "dir" {
+ e.NumLink++ // Entry ".." in the subdirectory links to this directory
+ }
+ e.children[baseName] = child
+}
+
+// isDataType reports whether TOCEntry is a regular file or chunk (something that
+// contains regular file data).
+func (e *TOCEntry) isDataType() bool { return e.Type == "reg" || e.Type == "chunk" }
+
+// Stat returns a FileInfo value representing e.
+func (e *TOCEntry) Stat() os.FileInfo { return fileInfo{e} }
+
+// ForeachChild calls f for each child item. If f returns false, iteration ends.
+// If e is not a directory, f is not called.
+func (e *TOCEntry) ForeachChild(f func(baseName string, ent *TOCEntry) bool) {
+ for name, ent := range e.children {
+ if !f(name, ent) {
+ return
+ }
+ }
+}
+
+// LookupChild returns the directory e's child by its base name.
+func (e *TOCEntry) LookupChild(baseName string) (child *TOCEntry, ok bool) {
+ child, ok = e.children[baseName]
+ return
+}
+
+// fileInfo implements os.FileInfo using the wrapped *TOCEntry.
+type fileInfo struct{ e *TOCEntry }
+
+var _ os.FileInfo = fileInfo{}
+
+func (fi fileInfo) Name() string { return path.Base(fi.e.Name) }
+func (fi fileInfo) IsDir() bool { return fi.e.Type == "dir" }
+func (fi fileInfo) Size() int64 { return fi.e.Size }
+func (fi fileInfo) ModTime() time.Time { return fi.e.ModTime() }
+func (fi fileInfo) Sys() interface{} { return fi.e }
+func (fi fileInfo) Mode() (m os.FileMode) {
+ // TOCEntry.Mode is tar.Header.Mode, so we can interpret these bits using the `tar` package.
+ m = (&tar.Header{Mode: fi.e.Mode}).FileInfo().Mode() &
+ (os.ModePerm | os.ModeSetuid | os.ModeSetgid | os.ModeSticky)
+ switch fi.e.Type {
+ case "dir":
+ m |= os.ModeDir
+ case "symlink":
+ m |= os.ModeSymlink
+ case "char":
+ m |= os.ModeDevice | os.ModeCharDevice
+ case "block":
+ m |= os.ModeDevice
+ case "fifo":
+ m |= os.ModeNamedPipe
+ }
+ return m
+}
+
+// TOCEntryVerifier holds verifiers that are usable for verifying chunks contained
+// in an eStargz blob.
+type TOCEntryVerifier interface {
+
+ // Verifier provides a content verifier that can be used for verifying the
+ // contents of the specified TOCEntry.
+ Verifier(ce *TOCEntry) (digest.Verifier, error)
+}
+
+// Compression provides the compression helper to be used for creating and parsing eStargz.
+// This package provides gzip-based Compression by default, but any compression
+// algorithm (e.g. zstd) can be used as long as it implements Compression.
+type Compression interface {
+ Compressor
+ Decompressor
+}
+
+// Compressor represents the helper methods to be used for creating eStargz.
+type Compressor interface {
+ // Writer returns WriteCloser to be used for writing a chunk to eStargz.
+ // Every time a chunk is written, the WriteCloser is closed and Writer is
+ // called again for writing the next chunk.
+ //
+ // The returned writer should implement "Flush() error" function that flushes
+ // any pending compressed data to the underlying writer.
+ Writer(w io.Writer) (WriteFlushCloser, error)
+
+ // WriteTOCAndFooter is called to write the JTOC to the passed Writer.
+ // diffHash calculates the DiffID (uncompressed sha256 hash) of the blob, and
+ // WriteTOCAndFooter can optionally write anything that affects the DiffID calculation
+ // (e.g. uncompressed TOC JSON).
+ //
+ // This function returns tocDgst that represents the digest of TOC that will be used
+ // to verify this blob when it's parsed.
+ WriteTOCAndFooter(w io.Writer, off int64, toc *JTOC, diffHash hash.Hash) (tocDgst digest.Digest, err error)
+}
+
+// Decompressor represents the helper methods to be used for parsing eStargz.
+type Decompressor interface {
+ // Reader returns ReadCloser to be used for decompressing file payload.
+ Reader(r io.Reader) (io.ReadCloser, error)
+
+ // FooterSize returns the size of the footer of this blob.
+ FooterSize() int64
+
+ // ParseFooter parses the footer and returns the offset and (compressed) size of TOC.
+ // payloadBlobSize is the (compressed) size of the blob payload (i.e. the size between
+ // the top until the TOC JSON).
+ //
+ // If tocOffset < 0, the TOC is assumed not to be contained in the blob, and a nil reader is
+ // passed to ParseTOC. ParseTOC is then expected to acquire the TOC from an external location and return it.
+ //
+ // tocSize is optional. If tocSize <= 0, it defaults to the size of the range from tocOffset until the beginning of the
+ // footer (blob size - tocOffset - FooterSize).
+ // If blobPayloadSize < 0, blobPayloadSize becomes the blob size.
+ ParseFooter(p []byte) (blobPayloadSize, tocOffset, tocSize int64, err error)
+
+ // ParseTOC parses TOC from the passed reader. The reader provides the partial contents
+ // of the underlying blob that has the range specified by ParseFooter method.
+ //
+ // This function returns tocDgst that represents the digest of TOC that will be used
+ // to verify this blob. This must match the value returned from
+ // Compressor.WriteTOCAndFooter that was used when creating this blob.
+ //
+ // If the tocOffset returned by ParseFooter is < 0, the TOC is assumed not to be contained in the blob.
+ // A nil reader is then passed to ParseTOC, which is expected to acquire the TOC from an external location
+ // and return it.
+ ParseTOC(r io.Reader) (toc *JTOC, tocDgst digest.Digest, err error)
+}
+
+type WriteFlushCloser interface {
+ io.WriteCloser
+ Flush() error
+}
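To make the serialized shape of `JTOC`/`TOCEntry` concrete, here is a small sketch (not from the upstream library) that marshals a TOC with a directory and a regular-file entry using the JSON tags defined above; the offset and digest are illustrative values, and the import path assumes the upstream module is available:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/containerd/stargz-snapshotter/estargz"
)

func main() {
	toc := &estargz.JTOC{
		Version: 1,
		Entries: []*estargz.TOCEntry{
			{Name: "bin/", Type: "dir", Mode: 0755},
			{
				Name:   "bin/hello",
				Type:   "reg",
				Size:   5,
				Offset: 512, // illustrative offset into the blob
				Digest: "sha256:2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
			},
		},
	}
	out, err := json.MarshalIndent(toc, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```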
diff --git a/vendor/github.com/coreos/go-systemd/v22/dbus/dbus.go b/vendor/github.com/coreos/go-systemd/v22/dbus/dbus.go
new file mode 100644
index 000000000000..cff5af1a64c3
--- /dev/null
+++ b/vendor/github.com/coreos/go-systemd/v22/dbus/dbus.go
@@ -0,0 +1,261 @@
+// Copyright 2015 CoreOS, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Integration with the systemd D-Bus API. See http://www.freedesktop.org/wiki/Software/systemd/dbus/
+package dbus
+
+import (
+ "context"
+ "encoding/hex"
+ "fmt"
+ "os"
+ "strconv"
+ "strings"
+ "sync"
+
+ "github.com/godbus/dbus/v5"
+)
+
+const (
+ alpha = `abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ`
+ num = `0123456789`
+ alphanum = alpha + num
+ signalBuffer = 100
+)
+
+// needsEscape checks whether a byte in a potential dbus ObjectPath needs to be escaped
+func needsEscape(i int, b byte) bool {
+ // Escape everything that is not a-z-A-Z-0-9
+ // Also escape 0-9 if it's the first character
+ return strings.IndexByte(alphanum, b) == -1 ||
+ (i == 0 && strings.IndexByte(num, b) != -1)
+}
+
+// PathBusEscape sanitizes a constituent string of a dbus ObjectPath using the
+// rules that systemd uses for serializing special characters.
+func PathBusEscape(path string) string {
+ // Special case the empty string
+ if len(path) == 0 {
+ return "_"
+ }
+ n := []byte{}
+ for i := 0; i < len(path); i++ {
+ c := path[i]
+ if needsEscape(i, c) {
+ e := fmt.Sprintf("_%x", c)
+ n = append(n, []byte(e)...)
+ } else {
+ n = append(n, c)
+ }
+ }
+ return string(n)
+}
+
+// pathBusUnescape is the inverse of PathBusEscape.
+func pathBusUnescape(path string) string {
+ if path == "_" {
+ return ""
+ }
+ n := []byte{}
+ for i := 0; i < len(path); i++ {
+ c := path[i]
+ if c == '_' && i+2 < len(path) {
+ res, err := hex.DecodeString(path[i+1 : i+3])
+ if err == nil {
+ n = append(n, res...)
+ }
+ i += 2
+ } else {
+ n = append(n, c)
+ }
+ }
+ return string(n)
+}
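For illustration (not part of the vendored source), the escaping rule implemented by `PathBusEscape` replaces any byte outside `[a-zA-Z0-9]`, and a leading digit, with an underscore followed by its two-digit hex value, so unit names round-trip through D-Bus object paths. The unit names below are hypothetical:

```go
package main

import (
	"fmt"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	// '.' (0x2e) and '-' (0x2d) are escaped; a leading digit is escaped as well.
	fmt.Println(dbus.PathBusEscape("kubelet.service")) // kubelet_2eservice
	fmt.Println(dbus.PathBusEscape("2fast-unit"))      // _32fast_2dunit
}
```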
+
+// Conn is a connection to systemd's dbus endpoint.
+type Conn struct {
+ // sysconn/sysobj are only used to call dbus methods
+ sysconn *dbus.Conn
+ sysobj dbus.BusObject
+
+ // sigconn/sigobj are only used to receive dbus signals
+ sigconn *dbus.Conn
+ sigobj dbus.BusObject
+
+ jobListener struct {
+ jobs map[dbus.ObjectPath]chan<- string
+ sync.Mutex
+ }
+ subStateSubscriber struct {
+ updateCh chan<- *SubStateUpdate
+ errCh chan<- error
+ sync.Mutex
+ ignore map[dbus.ObjectPath]int64
+ cleanIgnore int64
+ }
+ propertiesSubscriber struct {
+ updateCh chan<- *PropertiesUpdate
+ errCh chan<- error
+ sync.Mutex
+ }
+}
+
+// Deprecated: use NewWithContext instead.
+func New() (*Conn, error) {
+ return NewWithContext(context.Background())
+}
+
+// NewWithContext establishes a connection to any available bus and authenticates.
+// Callers should call Close() when done with the connection.
+func NewWithContext(ctx context.Context) (*Conn, error) {
+ conn, err := NewSystemConnectionContext(ctx)
+ if err != nil && os.Geteuid() == 0 {
+ return NewSystemdConnectionContext(ctx)
+ }
+ return conn, err
+}
+
+// Deprecated: use NewSystemConnectionContext instead.
+func NewSystemConnection() (*Conn, error) {
+ return NewSystemConnectionContext(context.Background())
+}
+
+// NewSystemConnectionContext establishes a connection to the system bus and authenticates.
+// Callers should call Close() when done with the connection.
+func NewSystemConnectionContext(ctx context.Context) (*Conn, error) {
+ return NewConnection(func() (*dbus.Conn, error) {
+ return dbusAuthHelloConnection(ctx, dbus.SystemBusPrivate)
+ })
+}
+
+// Deprecated: use NewUserConnectionContext instead.
+func NewUserConnection() (*Conn, error) {
+ return NewUserConnectionContext(context.Background())
+}
+
+// NewUserConnectionContext establishes a connection to the session bus and
+// authenticates. This can be used to connect to systemd user instances.
+// Callers should call Close() when done with the connection.
+func NewUserConnectionContext(ctx context.Context) (*Conn, error) {
+ return NewConnection(func() (*dbus.Conn, error) {
+ return dbusAuthHelloConnection(ctx, dbus.SessionBusPrivate)
+ })
+}
+
+// Deprecated: use NewSystemdConnectionContext instead.
+func NewSystemdConnection() (*Conn, error) {
+ return NewSystemdConnectionContext(context.Background())
+}
+
+// NewSystemdConnectionContext establishes a private, direct connection to systemd.
+// This can be used for communicating with systemd without a dbus daemon.
+// Callers should call Close() when done with the connection.
+func NewSystemdConnectionContext(ctx context.Context) (*Conn, error) {
+ return NewConnection(func() (*dbus.Conn, error) {
+ // We skip Hello when talking directly to systemd.
+ return dbusAuthConnection(ctx, func(opts ...dbus.ConnOption) (*dbus.Conn, error) {
+ return dbus.Dial("unix:path=/run/systemd/private", opts...)
+ })
+ })
+}
+
+// Close closes an established connection.
+func (c *Conn) Close() {
+ c.sysconn.Close()
+ c.sigconn.Close()
+}
+
+// NewConnection establishes a connection to a bus using a caller-supplied function.
+// This allows connecting to remote buses through a user-supplied mechanism.
+// The supplied function may be called multiple times, and should return independent connections.
+// The returned connection must be fully initialised: the org.freedesktop.DBus.Hello call must have succeeded,
+// and any authentication should be handled by the function.
+func NewConnection(dialBus func() (*dbus.Conn, error)) (*Conn, error) {
+ sysconn, err := dialBus()
+ if err != nil {
+ return nil, err
+ }
+
+ sigconn, err := dialBus()
+ if err != nil {
+ sysconn.Close()
+ return nil, err
+ }
+
+ c := &Conn{
+ sysconn: sysconn,
+ sysobj: systemdObject(sysconn),
+ sigconn: sigconn,
+ sigobj: systemdObject(sigconn),
+ }
+
+ c.subStateSubscriber.ignore = make(map[dbus.ObjectPath]int64)
+ c.jobListener.jobs = make(map[dbus.ObjectPath]chan<- string)
+
+ // Set up the listeners on jobs so that we can get completions
+ c.sigconn.BusObject().Call("org.freedesktop.DBus.AddMatch", 0,
+ "type='signal', interface='org.freedesktop.systemd1.Manager', member='JobRemoved'")
+
+ c.dispatch()
+ return c, nil
+}
+
+// GetManagerProperty returns the value of a property on the org.freedesktop.systemd1.Manager
+// interface. The value is returned in its string representation, as defined at
+// https://developer.gnome.org/glib/unstable/gvariant-text.html.
+func (c *Conn) GetManagerProperty(prop string) (string, error) {
+ variant, err := c.sysobj.GetProperty("org.freedesktop.systemd1.Manager." + prop)
+ if err != nil {
+ return "", err
+ }
+ return variant.String(), nil
+}
+
+func dbusAuthConnection(ctx context.Context, createBus func(opts ...dbus.ConnOption) (*dbus.Conn, error)) (*dbus.Conn, error) {
+ conn, err := createBus(dbus.WithContext(ctx))
+ if err != nil {
+ return nil, err
+ }
+
+ // Only use EXTERNAL method, and hardcode the uid (not username)
+ // to avoid a username lookup (which requires a dynamically linked
+ // libc)
+ methods := []dbus.Auth{dbus.AuthExternal(strconv.Itoa(os.Getuid()))}
+
+ err = conn.Auth(methods)
+ if err != nil {
+ conn.Close()
+ return nil, err
+ }
+
+ return conn, nil
+}
+
+func dbusAuthHelloConnection(ctx context.Context, createBus func(opts ...dbus.ConnOption) (*dbus.Conn, error)) (*dbus.Conn, error) {
+ conn, err := dbusAuthConnection(ctx, createBus)
+ if err != nil {
+ return nil, err
+ }
+
+ if err = conn.Hello(); err != nil {
+ conn.Close()
+ return nil, err
+ }
+
+ return conn, nil
+}
+
+func systemdObject(conn *dbus.Conn) dbus.BusObject {
+ return conn.Object("org.freedesktop.systemd1", dbus.ObjectPath("/org/freedesktop/systemd1"))
+}
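A minimal usage sketch (not from the upstream library) for the connection helpers above: connect via `NewWithContext`, read a manager property, and close the connection. The property name `Version` is a standard systemd manager property and is used here purely as an example:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx) // system bus; falls back to a direct systemd connection for root
	if err != nil {
		log.Fatalf("connecting to systemd: %v", err)
	}
	defer conn.Close()

	version, err := conn.GetManagerProperty("Version")
	if err != nil {
		log.Fatalf("reading manager property: %v", err)
	}
	fmt.Println("systemd version:", version)
}
```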
diff --git a/vendor/github.com/coreos/go-systemd/v22/dbus/methods.go b/vendor/github.com/coreos/go-systemd/v22/dbus/methods.go
new file mode 100644
index 000000000000..fa04afc708e7
--- /dev/null
+++ b/vendor/github.com/coreos/go-systemd/v22/dbus/methods.go
@@ -0,0 +1,830 @@
+// Copyright 2015, 2018 CoreOS, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package dbus
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "path"
+ "strconv"
+
+ "github.com/godbus/dbus/v5"
+)
+
+// Who can be used to specify which process to kill in the unit via the KillUnitWithTarget API
+type Who string
+
+const (
+ // All sends the signal to all processes in the unit
+ All Who = "all"
+ // Main sends the signal to the main process of the unit
+ Main Who = "main"
+ // Control sends the signal to the control process of the unit
+ Control Who = "control"
+)
+
+func (c *Conn) jobComplete(signal *dbus.Signal) {
+ var id uint32
+ var job dbus.ObjectPath
+ var unit string
+ var result string
+ dbus.Store(signal.Body, &id, &job, &unit, &result)
+ c.jobListener.Lock()
+ out, ok := c.jobListener.jobs[job]
+ if ok {
+ out <- result
+ delete(c.jobListener.jobs, job)
+ }
+ c.jobListener.Unlock()
+}
+
+func (c *Conn) startJob(ctx context.Context, ch chan<- string, job string, args ...interface{}) (int, error) {
+ if ch != nil {
+ c.jobListener.Lock()
+ defer c.jobListener.Unlock()
+ }
+
+ var p dbus.ObjectPath
+ err := c.sysobj.CallWithContext(ctx, job, 0, args...).Store(&p)
+ if err != nil {
+ return 0, err
+ }
+
+ if ch != nil {
+ c.jobListener.jobs[p] = ch
+ }
+
+ // ignore error since 0 is fine if conversion fails
+ jobID, _ := strconv.Atoi(path.Base(string(p)))
+
+ return jobID, nil
+}
+
+// Deprecated: use StartUnitContext instead.
+func (c *Conn) StartUnit(name string, mode string, ch chan<- string) (int, error) {
+ return c.StartUnitContext(context.Background(), name, mode, ch)
+}
+
+// StartUnitContext enqueues a start job and depending jobs, if any (unless otherwise
+// specified by the mode string).
+//
+// Takes the unit to activate, plus a mode string. The mode needs to be one of
+// replace, fail, isolate, ignore-dependencies, ignore-requirements. If
+// "replace" the call will start the unit and its dependencies, possibly
+// replacing already queued jobs that conflict with this. If "fail" the call
+// will start the unit and its dependencies, but will fail if this would change
+// an already queued job. If "isolate" the call will start the unit in question
+// and terminate all units that aren't dependencies of it. If
+// "ignore-dependencies" it will start a unit but ignore all its dependencies.
+// If "ignore-requirements" it will start a unit but only ignore the
+// requirement dependencies. It is not recommended to make use of the latter
+// two options.
+//
+// If the provided channel is non-nil, a result string will be sent to it upon
+// job completion: one of done, canceled, timeout, failed, dependency, skipped.
+// done indicates successful execution of a job. canceled indicates that a job
+// has been canceled before it finished execution. timeout indicates that the
+// job timeout was reached. failed indicates that the job failed. dependency
+// indicates that a job this job has been depending on failed and the job hence
+// has been removed too. skipped indicates that a job was skipped because it
+// didn't apply to the unit's current state.
+//
+// If no error occurs, the ID of the underlying systemd job will be returned. There
+// does exist the possibility for no error to be returned, but for the returned job
+// ID to be 0. In this case, the actual underlying ID is not 0 and this datapoint
+// should not be considered authoritative.
+//
+// If an error does occur, it will be returned to the user alongside a job ID of 0.
+func (c *Conn) StartUnitContext(ctx context.Context, name string, mode string, ch chan<- string) (int, error) {
+ return c.startJob(ctx, ch, "org.freedesktop.systemd1.Manager.StartUnit", name, mode)
+}
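A sketch of the completion-channel contract described above (illustrative, not from the upstream library): start a unit in `replace` mode and wait for the job result on the supplied channel. The unit name is purely an example:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	done := make(chan string, 1)
	if _, err := conn.StartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
		log.Fatalf("start: %v", err)
	}
	// One of: done, canceled, timeout, failed, dependency, skipped.
	if result := <-done; result != "done" {
		log.Fatalf("start job finished with result %q", result)
	}
	fmt.Println("unit started")
}
```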
+
+// Deprecated: use StopUnitContext instead.
+func (c *Conn) StopUnit(name string, mode string, ch chan<- string) (int, error) {
+ return c.StopUnitContext(context.Background(), name, mode, ch)
+}
+
+// StopUnitContext is similar to StartUnitContext, but stops the specified unit
+// rather than starting it.
+func (c *Conn) StopUnitContext(ctx context.Context, name string, mode string, ch chan<- string) (int, error) {
+ return c.startJob(ctx, ch, "org.freedesktop.systemd1.Manager.StopUnit", name, mode)
+}
+
+// Deprecated: use ReloadUnitContext instead.
+func (c *Conn) ReloadUnit(name string, mode string, ch chan<- string) (int, error) {
+ return c.ReloadUnitContext(context.Background(), name, mode, ch)
+}
+
+// ReloadUnitContext reloads a unit. Reloading is done only if the unit
+// is already running, and fails otherwise.
+func (c *Conn) ReloadUnitContext(ctx context.Context, name string, mode string, ch chan<- string) (int, error) {
+ return c.startJob(ctx, ch, "org.freedesktop.systemd1.Manager.ReloadUnit", name, mode)
+}
+
+// Deprecated: use RestartUnitContext instead.
+func (c *Conn) RestartUnit(name string, mode string, ch chan<- string) (int, error) {
+ return c.RestartUnitContext(context.Background(), name, mode, ch)
+}
+
+// RestartUnitContext restarts a service. If a service is restarted that isn't
+// running it will be started.
+func (c *Conn) RestartUnitContext(ctx context.Context, name string, mode string, ch chan<- string) (int, error) {
+ return c.startJob(ctx, ch, "org.freedesktop.systemd1.Manager.RestartUnit", name, mode)
+}
+
+// Deprecated: use TryRestartUnitContext instead.
+func (c *Conn) TryRestartUnit(name string, mode string, ch chan<- string) (int, error) {
+ return c.TryRestartUnitContext(context.Background(), name, mode, ch)
+}
+
+// TryRestartUnitContext is like RestartUnitContext, except that a service that
+// isn't running is not affected by the restart.
+func (c *Conn) TryRestartUnitContext(ctx context.Context, name string, mode string, ch chan<- string) (int, error) {
+ return c.startJob(ctx, ch, "org.freedesktop.systemd1.Manager.TryRestartUnit", name, mode)
+}
+
+// Deprecated: use ReloadOrRestartUnitContext instead.
+func (c *Conn) ReloadOrRestartUnit(name string, mode string, ch chan<- string) (int, error) {
+ return c.ReloadOrRestartUnitContext(context.Background(), name, mode, ch)
+}
+
+// ReloadOrRestartUnitContext attempts a reload if the unit supports it and uses
+// a restart otherwise.
+func (c *Conn) ReloadOrRestartUnitContext(ctx context.Context, name string, mode string, ch chan<- string) (int, error) {
+ return c.startJob(ctx, ch, "org.freedesktop.systemd1.Manager.ReloadOrRestartUnit", name, mode)
+}
+
+// Deprecated: use ReloadOrTryRestartUnitContext instead.
+func (c *Conn) ReloadOrTryRestartUnit(name string, mode string, ch chan<- string) (int, error) {
+ return c.ReloadOrTryRestartUnitContext(context.Background(), name, mode, ch)
+}
+
+// ReloadOrTryRestartUnitContext attempts a reload if the unit supports it,
+// and uses a "Try"-flavored restart otherwise.
+func (c *Conn) ReloadOrTryRestartUnitContext(ctx context.Context, name string, mode string, ch chan<- string) (int, error) {
+ return c.startJob(ctx, ch, "org.freedesktop.systemd1.Manager.ReloadOrTryRestartUnit", name, mode)
+}
+
+// Deprecated: use StartTransientUnitContext instead.
+func (c *Conn) StartTransientUnit(name string, mode string, properties []Property, ch chan<- string) (int, error) {
+ return c.StartTransientUnitContext(context.Background(), name, mode, properties, ch)
+}
+
+// StartTransientUnitContext may be used to create and start a transient unit, which
+// will be released as soon as it is not running or referenced anymore or the
+// system is rebooted. name is the unit name including suffix, and must be
+// unique. mode is the same as in StartUnitContext, properties contains properties
+// of the unit.
+func (c *Conn) StartTransientUnitContext(ctx context.Context, name string, mode string, properties []Property, ch chan<- string) (int, error) {
+ return c.startJob(ctx, ch, "org.freedesktop.systemd1.Manager.StartTransientUnit", name, mode, properties, make([]PropertyCollection, 0))
+}
+
+// Deprecated: use KillUnitContext instead.
+func (c *Conn) KillUnit(name string, signal int32) {
+ c.KillUnitContext(context.Background(), name, signal)
+}
+
+// KillUnitContext takes the unit name and a UNIX signal number to send.
+// All of the unit's processes are killed.
+func (c *Conn) KillUnitContext(ctx context.Context, name string, signal int32) {
+ c.KillUnitWithTarget(ctx, name, All, signal)
+}
+
+// KillUnitWithTarget is like KillUnitContext, but allows you to specify which
+// process in the unit to send the signal to.
+func (c *Conn) KillUnitWithTarget(ctx context.Context, name string, target Who, signal int32) error {
+ return c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.KillUnit", 0, name, string(target), signal).Store()
+}
+
+// Deprecated: use ResetFailedUnitContext instead.
+func (c *Conn) ResetFailedUnit(name string) error {
+ return c.ResetFailedUnitContext(context.Background(), name)
+}
+
+// ResetFailedUnitContext resets the "failed" state of a specific unit.
+func (c *Conn) ResetFailedUnitContext(ctx context.Context, name string) error {
+ return c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.ResetFailedUnit", 0, name).Store()
+}
+
+// Deprecated: use SystemStateContext instead.
+func (c *Conn) SystemState() (*Property, error) {
+ return c.SystemStateContext(context.Background())
+}
+
+// SystemStateContext returns the systemd state. Equivalent to
+// systemctl is-system-running.
+func (c *Conn) SystemStateContext(ctx context.Context) (*Property, error) {
+ var err error
+ var prop dbus.Variant
+
+ obj := c.sysconn.Object("org.freedesktop.systemd1", "/org/freedesktop/systemd1")
+ err = obj.CallWithContext(ctx, "org.freedesktop.DBus.Properties.Get", 0, "org.freedesktop.systemd1.Manager", "SystemState").Store(&prop)
+ if err != nil {
+ return nil, err
+ }
+
+ return &Property{Name: "SystemState", Value: prop}, nil
+}
+
+// getProperties takes the unit path and returns all of its dbus object properties, for the given dbus interface.
+func (c *Conn) getProperties(ctx context.Context, path dbus.ObjectPath, dbusInterface string) (map[string]interface{}, error) {
+ var err error
+ var props map[string]dbus.Variant
+
+ if !path.IsValid() {
+ return nil, fmt.Errorf("invalid unit name: %v", path)
+ }
+
+ obj := c.sysconn.Object("org.freedesktop.systemd1", path)
+ err = obj.CallWithContext(ctx, "org.freedesktop.DBus.Properties.GetAll", 0, dbusInterface).Store(&props)
+ if err != nil {
+ return nil, err
+ }
+
+ out := make(map[string]interface{}, len(props))
+ for k, v := range props {
+ out[k] = v.Value()
+ }
+
+ return out, nil
+}
+
+// Deprecated: use GetUnitPropertiesContext instead.
+func (c *Conn) GetUnitProperties(unit string) (map[string]interface{}, error) {
+ return c.GetUnitPropertiesContext(context.Background(), unit)
+}
+
+// GetUnitPropertiesContext takes the (unescaped) unit name and returns all of
+// its dbus object properties.
+func (c *Conn) GetUnitPropertiesContext(ctx context.Context, unit string) (map[string]interface{}, error) {
+ path := unitPath(unit)
+ return c.getProperties(ctx, path, "org.freedesktop.systemd1.Unit")
+}
+
+// Deprecated: use GetUnitPathPropertiesContext instead.
+func (c *Conn) GetUnitPathProperties(path dbus.ObjectPath) (map[string]interface{}, error) {
+ return c.GetUnitPathPropertiesContext(context.Background(), path)
+}
+
+// GetUnitPathPropertiesContext takes the (escaped) unit path and returns all
+// of its dbus object properties.
+func (c *Conn) GetUnitPathPropertiesContext(ctx context.Context, path dbus.ObjectPath) (map[string]interface{}, error) {
+ return c.getProperties(ctx, path, "org.freedesktop.systemd1.Unit")
+}
+
+// Deprecated: use GetAllPropertiesContext instead.
+func (c *Conn) GetAllProperties(unit string) (map[string]interface{}, error) {
+ return c.GetAllPropertiesContext(context.Background(), unit)
+}
+
+// GetAllPropertiesContext takes the (unescaped) unit name and returns all of
+// its dbus object properties.
+func (c *Conn) GetAllPropertiesContext(ctx context.Context, unit string) (map[string]interface{}, error) {
+ path := unitPath(unit)
+ return c.getProperties(ctx, path, "")
+}
+
+func (c *Conn) getProperty(ctx context.Context, unit string, dbusInterface string, propertyName string) (*Property, error) {
+ var err error
+ var prop dbus.Variant
+
+ path := unitPath(unit)
+ if !path.IsValid() {
+ return nil, errors.New("invalid unit name: " + unit)
+ }
+
+ obj := c.sysconn.Object("org.freedesktop.systemd1", path)
+ err = obj.CallWithContext(ctx, "org.freedesktop.DBus.Properties.Get", 0, dbusInterface, propertyName).Store(&prop)
+ if err != nil {
+ return nil, err
+ }
+
+ return &Property{Name: propertyName, Value: prop}, nil
+}
+
+// Deprecated: use GetUnitPropertyContext instead.
+func (c *Conn) GetUnitProperty(unit string, propertyName string) (*Property, error) {
+ return c.GetUnitPropertyContext(context.Background(), unit, propertyName)
+}
+
+// GetUnitPropertyContext takes an (unescaped) unit name, and a property name,
+// and returns the property value.
+func (c *Conn) GetUnitPropertyContext(ctx context.Context, unit string, propertyName string) (*Property, error) {
+ return c.getProperty(ctx, unit, "org.freedesktop.systemd1.Unit", propertyName)
+}
+
+// Deprecated: use GetServicePropertyContext instead.
+func (c *Conn) GetServiceProperty(service string, propertyName string) (*Property, error) {
+ return c.GetServicePropertyContext(context.Background(), service, propertyName)
+}
+
+// GetServicePropertyContext returns the property for the given service name and property name.
+func (c *Conn) GetServicePropertyContext(ctx context.Context, service string, propertyName string) (*Property, error) {
+ return c.getProperty(ctx, service, "org.freedesktop.systemd1.Service", propertyName)
+}
+
+// Deprecated: use GetUnitTypePropertiesContext instead.
+func (c *Conn) GetUnitTypeProperties(unit string, unitType string) (map[string]interface{}, error) {
+ return c.GetUnitTypePropertiesContext(context.Background(), unit, unitType)
+}
+
+// GetUnitTypePropertiesContext returns the extra properties for a unit, specific to the unit type.
+// Valid values for unitType: Service, Socket, Target, Device, Mount, Automount, Snapshot, Timer, Swap, Path, Slice, Scope.
+// Returns "dbus.Error: Unknown interface" error if the unitType is not the correct type of the unit.
+func (c *Conn) GetUnitTypePropertiesContext(ctx context.Context, unit string, unitType string) (map[string]interface{}, error) {
+ path := unitPath(unit)
+ return c.getProperties(ctx, path, "org.freedesktop.systemd1."+unitType)
+}
+
+// Deprecated: use SetUnitPropertiesContext instead.
+func (c *Conn) SetUnitProperties(name string, runtime bool, properties ...Property) error {
+ return c.SetUnitPropertiesContext(context.Background(), name, runtime, properties...)
+}
+
+// SetUnitPropertiesContext may be used to modify certain unit properties at runtime.
+// Not all properties may be changed at runtime, but many resource management
+// settings (primarily those in systemd.cgroup(5)) may. The changes are applied
+// instantly, and stored on disk for future boots, unless runtime is true, in which
+// case the settings only apply until the next reboot. name is the name of the unit
+// to modify. properties are the settings to set, encoded as an array of property
+// name and value pairs.
+func (c *Conn) SetUnitPropertiesContext(ctx context.Context, name string, runtime bool, properties ...Property) error {
+ return c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.SetUnitProperties", 0, name, runtime, properties).Store()
+}
+
+// Deprecated: use GetUnitTypePropertyContext instead.
+func (c *Conn) GetUnitTypeProperty(unit string, unitType string, propertyName string) (*Property, error) {
+ return c.GetUnitTypePropertyContext(context.Background(), unit, unitType, propertyName)
+}
+
+// GetUnitTypePropertyContext takes a property name, a unit name, and a unit type,
+// and returns a property value. For valid values of unitType, see GetUnitTypePropertiesContext.
+func (c *Conn) GetUnitTypePropertyContext(ctx context.Context, unit string, unitType string, propertyName string) (*Property, error) {
+ return c.getProperty(ctx, unit, "org.freedesktop.systemd1."+unitType, propertyName)
+}
+
+type UnitStatus struct {
+ Name string // The primary unit name as string
+ Description string // The human readable description string
+ LoadState string // The load state (i.e. whether the unit file has been loaded successfully)
+ ActiveState string // The active state (i.e. whether the unit is currently started or not)
+ SubState string // The sub state (a more fine-grained version of the active state that is specific to the unit type, which the active state is not)
+ Followed string // A unit that is being followed in its state by this unit, if there is any, otherwise the empty string.
+ Path dbus.ObjectPath // The unit object path
+ JobId uint32 // If there is a job queued for the unit, the numeric job id; 0 otherwise
+ JobType string // The job type as string
+ JobPath dbus.ObjectPath // The job object path
+}
+
+type storeFunc func(retvalues ...interface{}) error
+
+func (c *Conn) listUnitsInternal(f storeFunc) ([]UnitStatus, error) {
+ result := make([][]interface{}, 0)
+ err := f(&result)
+ if err != nil {
+ return nil, err
+ }
+
+ resultInterface := make([]interface{}, len(result))
+ for i := range result {
+ resultInterface[i] = result[i]
+ }
+
+ status := make([]UnitStatus, len(result))
+ statusInterface := make([]interface{}, len(status))
+ for i := range status {
+ statusInterface[i] = &status[i]
+ }
+
+ err = dbus.Store(resultInterface, statusInterface...)
+ if err != nil {
+ return nil, err
+ }
+
+ return status, nil
+}
+
+// Deprecated: use ListUnitsContext instead.
+func (c *Conn) ListUnits() ([]UnitStatus, error) {
+ return c.ListUnitsContext(context.Background())
+}
+
+// ListUnitsContext returns an array with all currently loaded units. Note that
+// units may be known by multiple names at the same time, and hence there might
+// be more unit names loaded than actual units behind them.
+// Also note that a unit is only loaded if it is active and/or enabled.
+// Units that are both disabled and inactive will thus not be returned.
+func (c *Conn) ListUnitsContext(ctx context.Context) ([]UnitStatus, error) {
+ return c.listUnitsInternal(c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.ListUnits", 0).Store)
+}
+
+// Deprecated: use ListUnitsFilteredContext instead.
+func (c *Conn) ListUnitsFiltered(states []string) ([]UnitStatus, error) {
+ return c.ListUnitsFilteredContext(context.Background(), states)
+}
+
+// ListUnitsFilteredContext returns an array with units filtered by state.
+// It takes a list of units' statuses to filter.
+func (c *Conn) ListUnitsFilteredContext(ctx context.Context, states []string) ([]UnitStatus, error) {
+ return c.listUnitsInternal(c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.ListUnitsFiltered", 0, states).Store)
+}
+
+// Deprecated: use ListUnitsByPatternsContext instead.
+func (c *Conn) ListUnitsByPatterns(states []string, patterns []string) ([]UnitStatus, error) {
+ return c.ListUnitsByPatternsContext(context.Background(), states, patterns)
+}
+
+// ListUnitsByPatternsContext returns an array with units.
+// It takes a list of units' statuses and names to filter.
+// Note that units may be known by multiple names at the same time,
+// and hence there might be more unit names loaded than actual units behind them.
+func (c *Conn) ListUnitsByPatternsContext(ctx context.Context, states []string, patterns []string) ([]UnitStatus, error) {
+ return c.listUnitsInternal(c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.ListUnitsByPatterns", 0, states, patterns).Store)
+}
+
+// Deprecated: use ListUnitsByNamesContext instead.
+func (c *Conn) ListUnitsByNames(units []string) ([]UnitStatus, error) {
+ return c.ListUnitsByNamesContext(context.Background(), units)
+}
+
+// ListUnitsByNamesContext returns an array with units. It takes a list of units'
+// names and returns a UnitStatus array. Compared to the ListUnitsByPatternsContext
+// method, this method returns statuses even for inactive or non-existing
+// units. The input array should contain exact unit names, not patterns.
+//
+// Requires systemd v230 or higher.
+func (c *Conn) ListUnitsByNamesContext(ctx context.Context, units []string) ([]UnitStatus, error) {
+ return c.listUnitsInternal(c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.ListUnitsByNames", 0, units).Store)
+}
+
+type UnitFile struct {
+ Path string
+ Type string
+}
+
+func (c *Conn) listUnitFilesInternal(f storeFunc) ([]UnitFile, error) {
+ result := make([][]interface{}, 0)
+ err := f(&result)
+ if err != nil {
+ return nil, err
+ }
+
+ resultInterface := make([]interface{}, len(result))
+ for i := range result {
+ resultInterface[i] = result[i]
+ }
+
+ files := make([]UnitFile, len(result))
+ fileInterface := make([]interface{}, len(files))
+ for i := range files {
+ fileInterface[i] = &files[i]
+ }
+
+ err = dbus.Store(resultInterface, fileInterface...)
+ if err != nil {
+ return nil, err
+ }
+
+ return files, nil
+}
+
+// Deprecated: use ListUnitFilesContext instead.
+func (c *Conn) ListUnitFiles() ([]UnitFile, error) {
+ return c.ListUnitFilesContext(context.Background())
+}
+
+// ListUnitFilesContext returns an array of all available unit files on disk.
+func (c *Conn) ListUnitFilesContext(ctx context.Context) ([]UnitFile, error) {
+ return c.listUnitFilesInternal(c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.ListUnitFiles", 0).Store)
+}
+
+// Deprecated: use ListUnitFilesByPatternsContext instead.
+func (c *Conn) ListUnitFilesByPatterns(states []string, patterns []string) ([]UnitFile, error) {
+ return c.ListUnitFilesByPatternsContext(context.Background(), states, patterns)
+}
+
+// ListUnitFilesByPatternsContext returns an array of all available unit files on disk that match the given patterns.
+func (c *Conn) ListUnitFilesByPatternsContext(ctx context.Context, states []string, patterns []string) ([]UnitFile, error) {
+ return c.listUnitFilesInternal(c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.ListUnitFilesByPatterns", 0, states, patterns).Store)
+}
+
+type LinkUnitFileChange EnableUnitFileChange
+
+// Deprecated: use LinkUnitFilesContext instead.
+func (c *Conn) LinkUnitFiles(files []string, runtime bool, force bool) ([]LinkUnitFileChange, error) {
+ return c.LinkUnitFilesContext(context.Background(), files, runtime, force)
+}
+
+// LinkUnitFilesContext links unit files (that are located outside of the
+// usual unit search paths) into the unit search path.
+//
+// It takes a list of absolute paths to unit files to link and two
+// booleans.
+//
+// The first boolean controls whether the unit shall be
+// enabled for runtime only (true, /run), or persistently (false,
+// /etc).
+//
+// The second controls whether symlinks pointing to other units shall
+// be replaced if necessary.
+//
+// This call returns a list of the changes made. The list consists of
+// structures with three strings: the type of the change (one of symlink
+// or unlink), the file name of the symlink and the destination of the
+// symlink.
+func (c *Conn) LinkUnitFilesContext(ctx context.Context, files []string, runtime bool, force bool) ([]LinkUnitFileChange, error) {
+ result := make([][]interface{}, 0)
+ err := c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.LinkUnitFiles", 0, files, runtime, force).Store(&result)
+ if err != nil {
+ return nil, err
+ }
+
+ resultInterface := make([]interface{}, len(result))
+ for i := range result {
+ resultInterface[i] = result[i]
+ }
+
+ changes := make([]LinkUnitFileChange, len(result))
+ changesInterface := make([]interface{}, len(changes))
+ for i := range changes {
+ changesInterface[i] = &changes[i]
+ }
+
+ err = dbus.Store(resultInterface, changesInterface...)
+ if err != nil {
+ return nil, err
+ }
+
+ return changes, nil
+}
+
+// Deprecated: use EnableUnitFilesContext instead.
+func (c *Conn) EnableUnitFiles(files []string, runtime bool, force bool) (bool, []EnableUnitFileChange, error) {
+ return c.EnableUnitFilesContext(context.Background(), files, runtime, force)
+}
+
+// EnableUnitFilesContext may be used to enable one or more units in the system
+// (by creating symlinks to them in /etc or /run).
+//
+// It takes a list of unit files to enable (either just file names or full
+// absolute paths if the unit files are residing outside the usual unit
+// search paths), and two booleans: the first controls whether the unit shall
+// be enabled for runtime only (true, /run), or persistently (false, /etc).
+// The second one controls whether symlinks pointing to other units shall
+// be replaced if necessary.
+//
+// This call returns one boolean and an array with the changes made. The
+// boolean signals whether the unit files contained any enablement
+// information (i.e. an [Install] section). The changes list consists of
+// structures with three strings: the type of the change (one of symlink
+// or unlink), the file name of the symlink and the destination of the
+// symlink.
+func (c *Conn) EnableUnitFilesContext(ctx context.Context, files []string, runtime bool, force bool) (bool, []EnableUnitFileChange, error) {
+ var carries_install_info bool
+
+ result := make([][]interface{}, 0)
+ err := c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.EnableUnitFiles", 0, files, runtime, force).Store(&carries_install_info, &result)
+ if err != nil {
+ return false, nil, err
+ }
+
+ resultInterface := make([]interface{}, len(result))
+ for i := range result {
+ resultInterface[i] = result[i]
+ }
+
+ changes := make([]EnableUnitFileChange, len(result))
+ changesInterface := make([]interface{}, len(changes))
+ for i := range changes {
+ changesInterface[i] = &changes[i]
+ }
+
+ err = dbus.Store(resultInterface, changesInterface...)
+ if err != nil {
+ return false, nil, err
+ }
+
+ return carries_install_info, changes, nil
+}
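A sketch (illustrative, not from the upstream library) of enabling a unit file and inspecting the returned change list, following the semantics described above: runtime=false enables persistently under /etc, force=true replaces conflicting symlinks. The unit name is an example:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	hasInstallInfo, changes, err := conn.EnableUnitFilesContext(ctx, []string{"kubelet.service"}, false, true)
	if err != nil {
		log.Fatalf("enable: %v", err)
	}
	fmt.Println("carries [Install] info:", hasInstallInfo)
	for _, ch := range changes {
		fmt.Printf("%s: %s -> %s\n", ch.Type, ch.Filename, ch.Destination)
	}
}
```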
+
+type EnableUnitFileChange struct {
+ Type string // Type of the change (one of symlink or unlink)
+ Filename string // File name of the symlink
+ Destination string // Destination of the symlink
+}
+
+// Deprecated: use DisableUnitFilesContext instead.
+func (c *Conn) DisableUnitFiles(files []string, runtime bool) ([]DisableUnitFileChange, error) {
+ return c.DisableUnitFilesContext(context.Background(), files, runtime)
+}
+
+// DisableUnitFilesContext may be used to disable one or more units in the
+// system (by removing symlinks to them from /etc or /run).
+//
+// It takes a list of unit files to disable (either just file names or full
+// absolute paths if the unit files are residing outside the usual unit
+// search paths), and one boolean: whether the unit was enabled for runtime
+// only (true, /run), or persistently (false, /etc).
+//
+// This call returns an array with the changes made. The changes list
+// consists of structures with three strings: the type of the change (one of
+// symlink or unlink), the file name of the symlink and the destination of the
+// symlink.
+func (c *Conn) DisableUnitFilesContext(ctx context.Context, files []string, runtime bool) ([]DisableUnitFileChange, error) {
+ result := make([][]interface{}, 0)
+ err := c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.DisableUnitFiles", 0, files, runtime).Store(&result)
+ if err != nil {
+ return nil, err
+ }
+
+ resultInterface := make([]interface{}, len(result))
+ for i := range result {
+ resultInterface[i] = result[i]
+ }
+
+ changes := make([]DisableUnitFileChange, len(result))
+ changesInterface := make([]interface{}, len(changes))
+ for i := range changes {
+ changesInterface[i] = &changes[i]
+ }
+
+ err = dbus.Store(resultInterface, changesInterface...)
+ if err != nil {
+ return nil, err
+ }
+
+ return changes, nil
+}
+
+type DisableUnitFileChange struct {
+ Type string // Type of the change (one of symlink or unlink)
+ Filename string // File name of the symlink
+ Destination string // Destination of the symlink
+}
+
+// Deprecated: use MaskUnitFilesContext instead.
+func (c *Conn) MaskUnitFiles(files []string, runtime bool, force bool) ([]MaskUnitFileChange, error) {
+ return c.MaskUnitFilesContext(context.Background(), files, runtime, force)
+}
+
+// MaskUnitFilesContext masks one or more units in the system.
+//
+// The files argument contains a list of units to mask (either just file names
+// or full absolute paths if the unit files are residing outside the usual unit
+// search paths).
+//
+// The runtime argument is used to specify whether the unit was enabled for
+// runtime only (true, /run/systemd/..), or persistently (false,
+// /etc/systemd/..).
+func (c *Conn) MaskUnitFilesContext(ctx context.Context, files []string, runtime bool, force bool) ([]MaskUnitFileChange, error) {
+ result := make([][]interface{}, 0)
+ err := c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.MaskUnitFiles", 0, files, runtime, force).Store(&result)
+ if err != nil {
+ return nil, err
+ }
+
+ resultInterface := make([]interface{}, len(result))
+ for i := range result {
+ resultInterface[i] = result[i]
+ }
+
+ changes := make([]MaskUnitFileChange, len(result))
+ changesInterface := make([]interface{}, len(changes))
+ for i := range changes {
+ changesInterface[i] = &changes[i]
+ }
+
+ err = dbus.Store(resultInterface, changesInterface...)
+ if err != nil {
+ return nil, err
+ }
+
+ return changes, nil
+}
+
+type MaskUnitFileChange struct {
+ Type string // Type of the change (one of symlink or unlink)
+ Filename string // File name of the symlink
+ Destination string // Destination of the symlink
+}
+
+// Deprecated: use UnmaskUnitFilesContext instead.
+func (c *Conn) UnmaskUnitFiles(files []string, runtime bool) ([]UnmaskUnitFileChange, error) {
+ return c.UnmaskUnitFilesContext(context.Background(), files, runtime)
+}
+
+// UnmaskUnitFilesContext unmasks one or more units in the system.
+//
+// It takes the list of unit files to mask (either just file names or full
+// absolute paths if the unit files are residing outside the usual unit search
+// paths), and a boolean runtime flag to specify whether the unit was enabled
+// for runtime only (true, /run/systemd/..), or persistently (false,
+// /etc/systemd/..).
+func (c *Conn) UnmaskUnitFilesContext(ctx context.Context, files []string, runtime bool) ([]UnmaskUnitFileChange, error) {
+ result := make([][]interface{}, 0)
+ err := c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.UnmaskUnitFiles", 0, files, runtime).Store(&result)
+ if err != nil {
+ return nil, err
+ }
+
+ resultInterface := make([]interface{}, len(result))
+ for i := range result {
+ resultInterface[i] = result[i]
+ }
+
+ changes := make([]UnmaskUnitFileChange, len(result))
+ changesInterface := make([]interface{}, len(changes))
+ for i := range changes {
+ changesInterface[i] = &changes[i]
+ }
+
+ err = dbus.Store(resultInterface, changesInterface...)
+ if err != nil {
+ return nil, err
+ }
+
+ return changes, nil
+}
+
+type UnmaskUnitFileChange struct {
+ Type string // Type of the change (one of symlink or unlink)
+ Filename string // File name of the symlink
+ Destination string // Destination of the symlink
+}
+
+// Deprecated: use ReloadContext instead.
+func (c *Conn) Reload() error {
+ return c.ReloadContext(context.Background())
+}
+
+// ReloadContext instructs systemd to scan for and reload unit files. This is
+// an equivalent to systemctl daemon-reload.
+func (c *Conn) ReloadContext(ctx context.Context) error {
+ return c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.Reload", 0).Store()
+}
+
+func unitPath(name string) dbus.ObjectPath {
+ return dbus.ObjectPath("/org/freedesktop/systemd1/unit/" + PathBusEscape(name))
+}
+
+// unitName returns the unescaped base element of the supplied escaped path.
+func unitName(dpath dbus.ObjectPath) string {
+ return pathBusUnescape(path.Base(string(dpath)))
+}
+
+// JobStatus holds a currently queued job definition.
+type JobStatus struct {
+ Id uint32 // The numeric job id
+ Unit string // The primary unit name for this job
+ JobType string // The job type as string
+ Status string // The job state as string
+ JobPath dbus.ObjectPath // The job object path
+ UnitPath dbus.ObjectPath // The unit object path
+}
+
+// Deprecated: use ListJobsContext instead.
+func (c *Conn) ListJobs() ([]JobStatus, error) {
+ return c.ListJobsContext(context.Background())
+}
+
+// ListJobsContext returns an array with all currently queued jobs.
+func (c *Conn) ListJobsContext(ctx context.Context) ([]JobStatus, error) {
+ return c.listJobsInternal(ctx)
+}
+
+func (c *Conn) listJobsInternal(ctx context.Context) ([]JobStatus, error) {
+ result := make([][]interface{}, 0)
+ if err := c.sysobj.CallWithContext(ctx, "org.freedesktop.systemd1.Manager.ListJobs", 0).Store(&result); err != nil {
+ return nil, err
+ }
+
+ resultInterface := make([]interface{}, len(result))
+ for i := range result {
+ resultInterface[i] = result[i]
+ }
+
+ status := make([]JobStatus, len(result))
+ statusInterface := make([]interface{}, len(status))
+ for i := range status {
+ statusInterface[i] = &status[i]
+ }
+
+ if err := dbus.Store(resultInterface, statusInterface...); err != nil {
+ return nil, err
+ }
+
+ return status, nil
+}
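The unit-file calls vendored above (`DisableUnitFilesContext`, `MaskUnitFilesContext`, `UnmaskUnitFilesContext`, `ReloadContext`) are what let a controller such as `gardener-node-agent` manage systemd units over D-Bus instead of shelling out to `systemctl`. A minimal sketch of how they are typically combined — `dbus.NewWithContext` comes from the same vendored package, and the unit name `kubelet.service` is only illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()

	// Connect to systemd on the system bus (requires sufficient privileges).
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatalf("connecting to systemd: %v", err)
	}
	defer conn.Close()

	// Remove the unit's symlinks from /etc (runtime=false means persistently).
	changes, err := conn.DisableUnitFilesContext(ctx, []string{"kubelet.service"}, false)
	if err != nil {
		log.Fatalf("disabling unit: %v", err)
	}
	for _, c := range changes {
		fmt.Printf("%s: %s -> %s\n", c.Type, c.Filename, c.Destination)
	}

	// Equivalent to `systemctl daemon-reload`.
	if err := conn.ReloadContext(ctx); err != nil {
		log.Fatalf("reloading systemd: %v", err)
	}
}
```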
diff --git a/vendor/github.com/coreos/go-systemd/v22/dbus/properties.go b/vendor/github.com/coreos/go-systemd/v22/dbus/properties.go
new file mode 100644
index 000000000000..fb42b6273386
--- /dev/null
+++ b/vendor/github.com/coreos/go-systemd/v22/dbus/properties.go
@@ -0,0 +1,237 @@
+// Copyright 2015 CoreOS, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package dbus
+
+import (
+ "github.com/godbus/dbus/v5"
+)
+
+// From the systemd docs:
+//
+// The properties array of StartTransientUnit() may take many of the settings
+// that may also be configured in unit files. Not all parameters are currently
+// accepted though, but we plan to cover more properties with future release.
+// Currently you may set the Description, Slice and all dependency types of
+// units, as well as RemainAfterExit, ExecStart for service units,
+// TimeoutStopUSec and PIDs for scope units, and CPUAccounting, CPUShares,
+// BlockIOAccounting, BlockIOWeight, BlockIOReadBandwidth,
+// BlockIOWriteBandwidth, BlockIODeviceWeight, MemoryAccounting, MemoryLimit,
+// DevicePolicy, DeviceAllow for services/scopes/slices. These fields map
+// directly to their counterparts in unit files and as normal D-Bus object
+// properties. The exception here is the PIDs field of scope units which is
+// used for construction of the scope only and specifies the initial PIDs to
+// add to the scope object.
+
+type Property struct {
+ Name string
+ Value dbus.Variant
+}
+
+type PropertyCollection struct {
+ Name string
+ Properties []Property
+}
+
+type execStart struct {
+ Path string // the binary path to execute
+ Args []string // an array with all arguments to pass to the executed command, starting with argument 0
+ UncleanIsFailure bool // a boolean whether it should be considered a failure if the process exits uncleanly
+}
+
+// PropExecStart sets the ExecStart service property. The first argument is a
+// slice with the binary path to execute followed by the arguments to pass to
+// the executed command. See
+// http://www.freedesktop.org/software/systemd/man/systemd.service.html#ExecStart=
+func PropExecStart(command []string, uncleanIsFailure bool) Property {
+ execStarts := []execStart{
+ {
+ Path: command[0],
+ Args: command,
+ UncleanIsFailure: uncleanIsFailure,
+ },
+ }
+
+ return Property{
+ Name: "ExecStart",
+ Value: dbus.MakeVariant(execStarts),
+ }
+}
+
+// PropRemainAfterExit sets the RemainAfterExit service property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.service.html#RemainAfterExit=
+func PropRemainAfterExit(b bool) Property {
+ return Property{
+ Name: "RemainAfterExit",
+ Value: dbus.MakeVariant(b),
+ }
+}
+
+// PropType sets the Type service property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.service.html#Type=
+func PropType(t string) Property {
+ return Property{
+ Name: "Type",
+ Value: dbus.MakeVariant(t),
+ }
+}
+
+// PropDescription sets the Description unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit#Description=
+func PropDescription(desc string) Property {
+ return Property{
+ Name: "Description",
+ Value: dbus.MakeVariant(desc),
+ }
+}
+
+func propDependency(name string, units []string) Property {
+ return Property{
+ Name: name,
+ Value: dbus.MakeVariant(units),
+ }
+}
+
+// PropRequires sets the Requires unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Requires=
+func PropRequires(units ...string) Property {
+ return propDependency("Requires", units)
+}
+
+// PropRequiresOverridable sets the RequiresOverridable unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#RequiresOverridable=
+func PropRequiresOverridable(units ...string) Property {
+ return propDependency("RequiresOverridable", units)
+}
+
+// PropRequisite sets the Requisite unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Requisite=
+func PropRequisite(units ...string) Property {
+ return propDependency("Requisite", units)
+}
+
+// PropRequisiteOverridable sets the RequisiteOverridable unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#RequisiteOverridable=
+func PropRequisiteOverridable(units ...string) Property {
+ return propDependency("RequisiteOverridable", units)
+}
+
+// PropWants sets the Wants unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Wants=
+func PropWants(units ...string) Property {
+ return propDependency("Wants", units)
+}
+
+// PropBindsTo sets the BindsTo unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#BindsTo=
+func PropBindsTo(units ...string) Property {
+ return propDependency("BindsTo", units)
+}
+
+// PropRequiredBy sets the RequiredBy unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#RequiredBy=
+func PropRequiredBy(units ...string) Property {
+ return propDependency("RequiredBy", units)
+}
+
+// PropRequiredByOverridable sets the RequiredByOverridable unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#RequiredByOverridable=
+func PropRequiredByOverridable(units ...string) Property {
+ return propDependency("RequiredByOverridable", units)
+}
+
+// PropWantedBy sets the WantedBy unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#WantedBy=
+func PropWantedBy(units ...string) Property {
+ return propDependency("WantedBy", units)
+}
+
+// PropBoundBy sets the BoundBy unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#BoundBy=
+func PropBoundBy(units ...string) Property {
+ return propDependency("BoundBy", units)
+}
+
+// PropConflicts sets the Conflicts unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Conflicts=
+func PropConflicts(units ...string) Property {
+ return propDependency("Conflicts", units)
+}
+
+// PropConflictedBy sets the ConflictedBy unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#ConflictedBy=
+func PropConflictedBy(units ...string) Property {
+ return propDependency("ConflictedBy", units)
+}
+
+// PropBefore sets the Before unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Before=
+func PropBefore(units ...string) Property {
+ return propDependency("Before", units)
+}
+
+// PropAfter sets the After unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#After=
+func PropAfter(units ...string) Property {
+ return propDependency("After", units)
+}
+
+// PropOnFailure sets the OnFailure unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#OnFailure=
+func PropOnFailure(units ...string) Property {
+ return propDependency("OnFailure", units)
+}
+
+// PropTriggers sets the Triggers unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Triggers=
+func PropTriggers(units ...string) Property {
+ return propDependency("Triggers", units)
+}
+
+// PropTriggeredBy sets the TriggeredBy unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#TriggeredBy=
+func PropTriggeredBy(units ...string) Property {
+ return propDependency("TriggeredBy", units)
+}
+
+// PropPropagatesReloadTo sets the PropagatesReloadTo unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#PropagatesReloadTo=
+func PropPropagatesReloadTo(units ...string) Property {
+ return propDependency("PropagatesReloadTo", units)
+}
+
+// PropRequiresMountsFor sets the RequiresMountsFor unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#RequiresMountsFor=
+func PropRequiresMountsFor(units ...string) Property {
+ return propDependency("RequiresMountsFor", units)
+}
+
+// PropSlice sets the Slice unit property. See
+// http://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#Slice=
+func PropSlice(slice string) Property {
+ return Property{
+ Name: "Slice",
+ Value: dbus.MakeVariant(slice),
+ }
+}
+
+// PropPids sets the PIDs field of scope units used in the initial construction
+// of the scope only and specifies the initial PIDs to add to the scope object.
+// See https://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/#properties
+func PropPids(pids ...uint32) Property {
+ return Property{
+ Name: "PIDs",
+ Value: dbus.MakeVariant(pids),
+ }
+}
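The `Prop*` helpers above are normally passed to `StartTransientUnitContext` (defined elsewhere in this vendored package). A minimal sketch, assuming an already established `*dbus.Conn`; the unit name `demo.service` and the `/bin/true` command are illustrative only:

```go
package main

import (
	"context"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func runTransientUnit(ctx context.Context, conn *dbus.Conn) error {
	// Assemble the properties with the helpers defined above; they map directly
	// onto the StartTransientUnit() D-Bus call quoted from the systemd docs.
	props := []dbus.Property{
		dbus.PropDescription("demo transient unit"),
		dbus.PropExecStart([]string{"/bin/true"}, false),
		dbus.PropRemainAfterExit(false),
	}

	// The result channel receives the job result string ("done", "failed", ...).
	done := make(chan string, 1)
	if _, err := conn.StartTransientUnitContext(ctx, "demo.service", "replace", props, done); err != nil {
		return err
	}
	log.Printf("job finished with result %q", <-done)
	return nil
}
```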
diff --git a/vendor/github.com/coreos/go-systemd/v22/dbus/set.go b/vendor/github.com/coreos/go-systemd/v22/dbus/set.go
new file mode 100644
index 000000000000..17c5d485657d
--- /dev/null
+++ b/vendor/github.com/coreos/go-systemd/v22/dbus/set.go
@@ -0,0 +1,47 @@
+// Copyright 2015 CoreOS, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package dbus
+
+type set struct {
+ data map[string]bool
+}
+
+func (s *set) Add(value string) {
+ s.data[value] = true
+}
+
+func (s *set) Remove(value string) {
+ delete(s.data, value)
+}
+
+func (s *set) Contains(value string) (exists bool) {
+ _, exists = s.data[value]
+ return
+}
+
+func (s *set) Length() int {
+ return len(s.data)
+}
+
+func (s *set) Values() (values []string) {
+ for val := range s.data {
+ values = append(values, val)
+ }
+ return
+}
+
+func newSet() *set {
+ return &set{make(map[string]bool)}
+}
diff --git a/vendor/github.com/coreos/go-systemd/v22/dbus/subscription.go b/vendor/github.com/coreos/go-systemd/v22/dbus/subscription.go
new file mode 100644
index 000000000000..7e370fea2121
--- /dev/null
+++ b/vendor/github.com/coreos/go-systemd/v22/dbus/subscription.go
@@ -0,0 +1,333 @@
+// Copyright 2015 CoreOS, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package dbus
+
+import (
+ "errors"
+ "log"
+ "time"
+
+ "github.com/godbus/dbus/v5"
+)
+
+const (
+ cleanIgnoreInterval = int64(10 * time.Second)
+ ignoreInterval = int64(30 * time.Millisecond)
+)
+
+// Subscribe sets up this connection to subscribe to all systemd dbus events.
+// This is required before calling SubscribeUnits. When the connection closes
+// systemd will automatically stop sending signals so there is no need to
+// explicitly call Unsubscribe().
+func (c *Conn) Subscribe() error {
+ c.sigconn.BusObject().Call("org.freedesktop.DBus.AddMatch", 0,
+ "type='signal',interface='org.freedesktop.systemd1.Manager',member='UnitNew'")
+ c.sigconn.BusObject().Call("org.freedesktop.DBus.AddMatch", 0,
+ "type='signal',interface='org.freedesktop.DBus.Properties',member='PropertiesChanged'")
+
+ return c.sigobj.Call("org.freedesktop.systemd1.Manager.Subscribe", 0).Store()
+}
+
+// Unsubscribe this connection from systemd dbus events.
+func (c *Conn) Unsubscribe() error {
+ return c.sigobj.Call("org.freedesktop.systemd1.Manager.Unsubscribe", 0).Store()
+}
+
+func (c *Conn) dispatch() {
+ ch := make(chan *dbus.Signal, signalBuffer)
+
+ c.sigconn.Signal(ch)
+
+ go func() {
+ for {
+ signal, ok := <-ch
+ if !ok {
+ return
+ }
+
+ if signal.Name == "org.freedesktop.systemd1.Manager.JobRemoved" {
+ c.jobComplete(signal)
+ }
+
+ if c.subStateSubscriber.updateCh == nil &&
+ c.propertiesSubscriber.updateCh == nil {
+ continue
+ }
+
+ var unitPath dbus.ObjectPath
+ switch signal.Name {
+ case "org.freedesktop.systemd1.Manager.JobRemoved":
+ unitName := signal.Body[2].(string)
+ c.sysobj.Call("org.freedesktop.systemd1.Manager.GetUnit", 0, unitName).Store(&unitPath)
+ case "org.freedesktop.systemd1.Manager.UnitNew":
+ unitPath = signal.Body[1].(dbus.ObjectPath)
+ case "org.freedesktop.DBus.Properties.PropertiesChanged":
+ if signal.Body[0].(string) == "org.freedesktop.systemd1.Unit" {
+ unitPath = signal.Path
+
+ if len(signal.Body) >= 2 {
+ if changed, ok := signal.Body[1].(map[string]dbus.Variant); ok {
+ c.sendPropertiesUpdate(unitPath, changed)
+ }
+ }
+ }
+ }
+
+ if unitPath == dbus.ObjectPath("") {
+ continue
+ }
+
+ c.sendSubStateUpdate(unitPath)
+ }
+ }()
+}
+
+// SubscribeUnits returns two unbuffered channels which will receive all changed units every
+// interval. Deleted units are sent as nil.
+func (c *Conn) SubscribeUnits(interval time.Duration) (<-chan map[string]*UnitStatus, <-chan error) {
+ return c.SubscribeUnitsCustom(interval, 0, func(u1, u2 *UnitStatus) bool { return *u1 != *u2 }, nil)
+}
+
+// SubscribeUnitsCustom is like SubscribeUnits but lets you specify the buffer
+// size of the channels, the comparison function for detecting changes and a filter
+// function for cutting down on the noise that your channel receives.
+func (c *Conn) SubscribeUnitsCustom(interval time.Duration, buffer int, isChanged func(*UnitStatus, *UnitStatus) bool, filterUnit func(string) bool) (<-chan map[string]*UnitStatus, <-chan error) {
+ old := make(map[string]*UnitStatus)
+ statusChan := make(chan map[string]*UnitStatus, buffer)
+ errChan := make(chan error, buffer)
+
+ go func() {
+ for {
+ timerChan := time.After(interval)
+
+ units, err := c.ListUnits()
+ if err == nil {
+ cur := make(map[string]*UnitStatus)
+ for i := range units {
+ if filterUnit != nil && filterUnit(units[i].Name) {
+ continue
+ }
+ cur[units[i].Name] = &units[i]
+ }
+
+ // add all new or changed units
+ changed := make(map[string]*UnitStatus)
+ for n, u := range cur {
+ if oldU, ok := old[n]; !ok || isChanged(oldU, u) {
+ changed[n] = u
+ }
+ delete(old, n)
+ }
+
+ // add all deleted units
+ for oldN := range old {
+ changed[oldN] = nil
+ }
+
+ old = cur
+
+ if len(changed) != 0 {
+ statusChan <- changed
+ }
+ } else {
+ errChan <- err
+ }
+
+ <-timerChan
+ }
+ }()
+
+ return statusChan, errChan
+}
+
+type SubStateUpdate struct {
+ UnitName string
+ SubState string
+}
+
+// SetSubStateSubscriber writes to updateCh when any unit's substate changes.
+// Although this writes to updateCh on every state change, the reported state
+// may be more recent than the change that generated it (due to an unavoidable
+// race in the systemd dbus interface). That is, this method provides a good
+// way to keep a current view of all units' states, but is not guaranteed to
+// show every state transition they go through. Furthermore, state changes
+// will only be written to the channel with non-blocking writes. If updateCh
+// is full, it attempts to write an error to errCh; if errCh is full, the error
+// passes silently.
+func (c *Conn) SetSubStateSubscriber(updateCh chan<- *SubStateUpdate, errCh chan<- error) {
+ if c == nil {
+ msg := "nil receiver"
+ select {
+ case errCh <- errors.New(msg):
+ default:
+ log.Printf("full error channel while reporting: %s\n", msg)
+ }
+ return
+ }
+
+ c.subStateSubscriber.Lock()
+ defer c.subStateSubscriber.Unlock()
+ c.subStateSubscriber.updateCh = updateCh
+ c.subStateSubscriber.errCh = errCh
+}
+
+func (c *Conn) sendSubStateUpdate(unitPath dbus.ObjectPath) {
+ c.subStateSubscriber.Lock()
+ defer c.subStateSubscriber.Unlock()
+
+ if c.subStateSubscriber.updateCh == nil {
+ return
+ }
+
+ isIgnored := c.shouldIgnore(unitPath)
+ defer c.cleanIgnore()
+ if isIgnored {
+ return
+ }
+
+ info, err := c.GetUnitPathProperties(unitPath)
+ if err != nil {
+ select {
+ case c.subStateSubscriber.errCh <- err:
+ default:
+ log.Printf("full error channel while reporting: %s\n", err)
+ }
+ return
+ }
+ defer c.updateIgnore(unitPath, info)
+
+ name, ok := info["Id"].(string)
+ if !ok {
+ msg := "failed to cast info.Id"
+ select {
+ case c.subStateSubscriber.errCh <- errors.New(msg):
+ default:
+ log.Printf("full error channel while reporting: %s\n", err)
+ }
+ return
+ }
+ substate, ok := info["SubState"].(string)
+ if !ok {
+ msg := "failed to cast info.SubState"
+ select {
+ case c.subStateSubscriber.errCh <- errors.New(msg):
+ default:
+ log.Printf("full error channel while reporting: %s\n", msg)
+ }
+ return
+ }
+
+ update := &SubStateUpdate{name, substate}
+ select {
+ case c.subStateSubscriber.updateCh <- update:
+ default:
+ msg := "update channel is full"
+ select {
+ case c.subStateSubscriber.errCh <- errors.New(msg):
+ default:
+ log.Printf("full error channel while reporting: %s\n", msg)
+ }
+ return
+ }
+}
+
+// The ignore functions work around a wart in the systemd dbus interface.
+// Requesting the properties of an unloaded unit will cause systemd to send a
+// pair of UnitNew/UnitRemoved signals. Because we need to get a unit's
+// properties on UnitNew (as that's the only indication of a new unit coming up
+// for the first time), we would enter an infinite loop if we did not attempt
+// to detect and ignore these spurious signals. The signals themselves are
+// indistinguishable from relevant ones, so we (somewhat hackishly) ignore an
+// unloaded unit's signals for a short time after requesting its properties.
+// This means that we will miss e.g. a transient unit being restarted
+// *immediately* upon failure and also a transient unit being started
+// immediately after requesting its status (with systemctl status, for example,
+// because this causes a UnitNew signal to be sent which then causes us to fetch
+// the properties).
+
+func (c *Conn) shouldIgnore(path dbus.ObjectPath) bool {
+ t, ok := c.subStateSubscriber.ignore[path]
+ return ok && t >= time.Now().UnixNano()
+}
+
+func (c *Conn) updateIgnore(path dbus.ObjectPath, info map[string]interface{}) {
+ loadState, ok := info["LoadState"].(string)
+ if !ok {
+ return
+ }
+
+ // unit is unloaded - it will trigger bad systemd dbus behavior
+ if loadState == "not-found" {
+ c.subStateSubscriber.ignore[path] = time.Now().UnixNano() + ignoreInterval
+ }
+}
+
+// without this, ignore would grow unboundedly over time
+func (c *Conn) cleanIgnore() {
+ now := time.Now().UnixNano()
+ if c.subStateSubscriber.cleanIgnore < now {
+ c.subStateSubscriber.cleanIgnore = now + cleanIgnoreInterval
+
+ for p, t := range c.subStateSubscriber.ignore {
+ if t < now {
+ delete(c.subStateSubscriber.ignore, p)
+ }
+ }
+ }
+}
+
+// PropertiesUpdate holds a map of a unit's changed properties
+type PropertiesUpdate struct {
+ UnitName string
+ Changed map[string]dbus.Variant
+}
+
+// SetPropertiesSubscriber writes to updateCh when any unit's properties
+// change. Every property change reported by systemd will be sent; that is, no
+// transitions will be "missed" (as they might be with SetSubStateSubscriber).
+// However, state changes will only be written to the channel with non-blocking
+// writes. If updateCh is full, it attempts to write an error to errCh; if
+// errCh is full, the error passes silently.
+func (c *Conn) SetPropertiesSubscriber(updateCh chan<- *PropertiesUpdate, errCh chan<- error) {
+ c.propertiesSubscriber.Lock()
+ defer c.propertiesSubscriber.Unlock()
+ c.propertiesSubscriber.updateCh = updateCh
+ c.propertiesSubscriber.errCh = errCh
+}
+
+// we don't need to worry about shouldIgnore() here because
+// sendPropertiesUpdate doesn't call GetProperties()
+func (c *Conn) sendPropertiesUpdate(unitPath dbus.ObjectPath, changedProps map[string]dbus.Variant) {
+ c.propertiesSubscriber.Lock()
+ defer c.propertiesSubscriber.Unlock()
+
+ if c.propertiesSubscriber.updateCh == nil {
+ return
+ }
+
+ update := &PropertiesUpdate{unitName(unitPath), changedProps}
+
+ select {
+ case c.propertiesSubscriber.updateCh <- update:
+ default:
+ msg := "update channel is full"
+ select {
+ case c.propertiesSubscriber.errCh <- errors.New(msg):
+ default:
+ log.Printf("full error channel while reporting: %s\n", msg)
+ }
+ return
+ }
+}
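A short sketch of how the subscription API above is typically consumed; the 10-second poll interval is arbitrary, and `dbus.NewWithContext` is assumed from the same vendored package:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	conn, err := dbus.NewWithContext(context.Background())
	if err != nil {
		log.Fatalf("connecting to systemd: %v", err)
	}
	defer conn.Close()

	// Register for the Manager and Properties signals used by dispatch().
	if err := conn.Subscribe(); err != nil {
		log.Fatalf("subscribing: %v", err)
	}

	// Poll every 10s; the channel only carries units that changed, deleted units are nil.
	statusCh, errCh := conn.SubscribeUnits(10 * time.Second)
	for {
		select {
		case changed := <-statusCh:
			for name, status := range changed {
				if status == nil {
					log.Printf("unit %s was removed", name)
					continue
				}
				log.Printf("unit %s: %s/%s", name, status.ActiveState, status.SubState)
			}
		case err := <-errCh:
			log.Printf("subscription error: %v", err)
		}
	}
}
```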
diff --git a/vendor/github.com/coreos/go-systemd/v22/dbus/subscription_set.go b/vendor/github.com/coreos/go-systemd/v22/dbus/subscription_set.go
new file mode 100644
index 000000000000..5b408d5847ad
--- /dev/null
+++ b/vendor/github.com/coreos/go-systemd/v22/dbus/subscription_set.go
@@ -0,0 +1,57 @@
+// Copyright 2015 CoreOS, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package dbus
+
+import (
+ "time"
+)
+
+// A SubscriptionSet provides a subscription that behaves like conn.Subscribe but
+// can filter to only return events for a chosen set of units.
+type SubscriptionSet struct {
+ *set
+ conn *Conn
+}
+
+func (s *SubscriptionSet) filter(unit string) bool {
+ return !s.Contains(unit)
+}
+
+// Subscribe starts listening for dbus events for all of the units in the set.
+// Returns channels identical to conn.SubscribeUnits.
+func (s *SubscriptionSet) Subscribe() (<-chan map[string]*UnitStatus, <-chan error) {
+ // TODO: Make fully evented by using systemd 209 with properties changed values
+ return s.conn.SubscribeUnitsCustom(time.Second, 0,
+ mismatchUnitStatus,
+ func(unit string) bool { return s.filter(unit) },
+ )
+}
+
+// NewSubscriptionSet returns a new subscription set.
+func (conn *Conn) NewSubscriptionSet() *SubscriptionSet {
+ return &SubscriptionSet{newSet(), conn}
+}
+
+// mismatchUnitStatus returns true if the provided UnitStatus objects
+// are not equivalent. false is returned if the objects are equivalent.
+// Only the Name, Description and state-related fields are used in
+// the comparison.
+func mismatchUnitStatus(u1, u2 *UnitStatus) bool {
+ return u1.Name != u2.Name ||
+ u1.Description != u2.Description ||
+ u1.LoadState != u2.LoadState ||
+ u1.ActiveState != u2.ActiveState ||
+ u1.SubState != u2.SubState
+}
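For completeness, a sketch of narrowing events to a single unit with `NewSubscriptionSet`; the unit name is illustrative, and `conn.Subscribe()` is assumed to have been called beforehand, as the `SubscribeUnits` documentation requires:

```go
package main

import (
	"github.com/coreos/go-systemd/v22/dbus"
)

// watchSingleUnit narrows subscription events to one unit. The Add method is
// promoted from the embedded set type; Subscribe polls once per second.
func watchSingleUnit(conn *dbus.Conn, unit string) (<-chan map[string]*dbus.UnitStatus, <-chan error) {
	s := conn.NewSubscriptionSet()
	s.Add(unit) // e.g. "kubelet.service"
	return s.Subscribe()
}
```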
diff --git a/vendor/github.com/docker/cli/AUTHORS b/vendor/github.com/docker/cli/AUTHORS
new file mode 100644
index 000000000000..483743c99214
--- /dev/null
+++ b/vendor/github.com/docker/cli/AUTHORS
@@ -0,0 +1,852 @@
+# File @generated by scripts/docs/generate-authors.sh. DO NOT EDIT.
+# This file lists all contributors to the repository.
+# See scripts/docs/generate-authors.sh to make modifications.
+
+Aanand Prasad
+Aaron L. Xu
+Aaron Lehmann
+Aaron.L.Xu
+Abdur Rehman
+Abhinandan Prativadi
+Abin Shahab
+Abreto FU
+Ace Tang
+Addam Hardy
+Adolfo Ochagavía
+Adrian Plata
+Adrien Duermael
+Adrien Folie
+Ahmet Alp Balkan
+Aidan Feldman
+Aidan Hobson Sayers
+AJ Bowen
+Akhil Mohan
+Akihiro Suda
+Akim Demaille
+Alan Thompson
+Albert Callarisa
+Alberto Roura
+Albin Kerouanton
+Aleksa Sarai
+Aleksander Piotrowski
+Alessandro Boch
+Alex Couture-Beil
+Alex Mavrogiannis
+Alex Mayer
+Alexander Boyd
+Alexander Larsson
+Alexander Morozov
+Alexander Ryabov
+Alexandre González
+Alexey Igrychev
+Alexis Couvreur
+Alfred Landrum
+Alicia Lauerman
+Allen Sun
+Alvin Deng
+Amen Belayneh
+Amey Shrivastava <72866602+AmeyShrivastava@users.noreply.github.com>
+Amir Goldstein
+Amit Krishnan
+Amit Shukla
+Amy Lindburg
+Anca Iordache
+Anda Xu
+Andrea Luzzardi
+Andreas Köhler
+Andres G. Aragoneses
+Andres Leon Rangel
+Andrew France
+Andrew Hsu
+Andrew Macpherson
+Andrew McDonnell
+Andrew Po
+Andrey Petrov
+Andrii Berehuliak
+André Martins
+Andy Goldstein
+Andy Rothfusz
+Anil Madhavapeddy
+Ankush Agarwal
+Anne Henmi
+Anton Polonskiy
+Antonio Murdaca
+Antonis Kalipetis
+Anusha Ragunathan
+Ao Li
+Arash Deshmeh
+Arko Dasgupta
+Arnaud Porterie
+Arnaud Rebillout
+Arthur Peka
+Ashwini Oruganti
+Azat Khuyiyakhmetov
+Bardia Keyoumarsi
+Barnaby Gray
+Bastiaan Bakker
+BastianHofmann
+Ben Bodenmiller
+Ben Bonnefoy
+Ben Creasy
+Ben Firshman
+Benjamin Boudreau
+Benjamin Böhmke
+Benjamin Nater
+Benoit Sigoure
+Bhumika Bayani
+Bill Wang
+Bin Liu
+Bingshen Wang
+Bishal Das
+Boaz Shuster
+Bogdan Anton
+Boris Pruessmann
+Brad Baker
+Bradley Cicenas
+Brandon Mitchell
+Brandon Philips
+Brent Salisbury
+Bret Fisher
+Brian (bex) Exelbierd
+Brian Goff
+Brian Wieder
+Bruno Sousa
+Bryan Bess
+Bryan Boreham
+Bryan Murphy
+bryfry
+Cameron Spear
+Cao Weiwei
+Carlo Mion
+Carlos Alexandro Becker
+Carlos de Paula
+Ce Gao
+Cedric Davies
+Cezar Sa Espinola
+Chad Faragher
+Chao Wang
+Charles Chan
+Charles Law
+Charles Smith
+Charlie Drage
+Charlotte Mach
+ChaYoung You
+Chee Hau Lim
+Chen Chuanliang
+Chen Hanxiao
+Chen Mingjie
+Chen Qiu
+Chris Couzens
+Chris Gavin
+Chris Gibson
+Chris McKinnel
+Chris Snow
+Chris Vermilion
+Chris Weyl
+Christian Persson
+Christian Stefanescu
+Christophe Robin
+Christophe Vidal
+Christopher Biscardi
+Christopher Crone
+Christopher Jones
+Christopher Svensson
+Christy Norman
+Chun Chen
+Clinton Kitson
+Coenraad Loubser
+Colin Hebert
+Collin Guarino
+Colm Hally
+Comical Derskeal <27731088+derskeal@users.noreply.github.com>
+Conner Crosby
+Corey Farrell
+Corey Quon
+Cory Bennet
+Craig Wilhite
+Cristian Staretu
+Daehyeok Mun
+Dafydd Crosby
+Daisuke Ito
+dalanlan
+Damien Nadé
+Dan Cotora
+Daniel Artine
+Daniel Cassidy
+Daniel Dao
+Daniel Farrell
+Daniel Gasienica
+Daniel Goosen
+Daniel Helfand
+Daniel Hiltgen
+Daniel J Walsh
+Daniel Nephin
+Daniel Norberg
+Daniel Watkins
+Daniel Zhang
+Daniil Nikolenko
+Danny Berger
+Darren Shepherd
+Darren Stahl
+Dattatraya Kumbhar
+Dave Goodchild
+Dave Henderson
+Dave Tucker