Configuring zot - zotregistry.dev
The registry administrator configures zot primarily through settings in the configuration file.
Using the information in this guide, you can compose a configuration file with the settings and features you require for your zot registry server.
Before launching zot with a new configuration, we recommend that you verify the syntax of your configuration as described in Verifying the configuration file.
The configuration file is a JSON or YAML file that contains all configuration settings for zot functions such as:
network
storage
authentication
authorization
logging
metrics
synchronization with other registries
clustering
The zot service is initiated with the zot serve command followed by the name of a configuration file, as in this example:
zot serve config.json
The instructions and examples in this guide use zot as the name of the zot executable file. The examples do not include the path to the executable file.
When you first build zot or deploy an image or container from the distribution, a basic configuration file config.json is created. This initial file is similar to the following example:
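A minimal configuration of this kind can be sketched as follows (the version string, storage path, address, and port are illustrative and should be adjusted for your deployment):

```json
{
  "distSpecVersion": "1.1.0",
  "storage": {
    "rootDirectory": "/tmp/zot"
  },
  "http": {
    "address": "127.0.0.1",
    "port": "8080"
  },
  "log": {
    "level": "debug"
  }
}
```

Saving this content as config.json and running zot serve config.json starts a registry listening on the configured address and port.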
zot is officially supported on Linux and Apple MacOS platforms, using Intel or ARM processors. However, development should be possible on any platform that supports the golang toolchain.
OS      | ARCH  | Platform
linux   | amd64 | Intel-based Linux servers
linux   | arm64 | ARM-based servers and Raspberry Pi4
darwin  | amd64 | Intel-based MacOS
darwin  | arm64 | ARM-based MacOS
freebsd | amd64 | Intel-based FreeBSD*
freebsd | arm64 | ARM-based FreeBSD*
* NOTE: While binary images are available for FreeBSD, building container images is not supported at this time.
Executable binary zot images are available for multiple platforms and architectures and with full or minimal implementations.
Refer to Released Images for zot for information about available zot images along with information about image variations, image locations, and image naming formats.
Executable binary images for supported server platforms and architectures are available from the zot package repository in GitHub.
You can download the appropriate binary image and run it directly on your server, or you can use a container management tool such as Podman, runc, Helm, or Docker to fetch and deploy the image in a container on your server.
For convenience, you can rename the binary image file to simply zot.
Authentication and Authorization - zotregistry.dev
A robust set of authentication/authorization options is supported:
Authentication
TLS, including mTLS
Username/password or token-based user authentication
LDAP
htpasswd
OAuth2 with bearer token
Authorization
Powerful identity-based access controls for repositories or specific repository paths
OpenID/OAuth2 social login with Google, GitHub, GitLab, and dex
The zot configuration model supports both authentication and authorization. Authentication credentials allow access to zot HTTP APIs. Authorization policies provide fine-grained control of the actions each authenticated user can perform in the registry.
Because authentication credentials are passed over HTTP, it is imperative that TLS be enabled. You can enable and configure TLS authentication in the zot configuration file, as shown in the following example.
"http": {
  "address": "127.0.0.1",
  "port": "8080",
  "tls": {
    "cert": "/etc/zot/certs/server.cert",
    "key": "/etc/zot/certs/server.key"
  }
}
The zb tool is useful for benchmarking OCI registry workloads in scenarios such as the following:
comparing configuration changes
comparing software versions
comparing hardware/deployment environments
comparing with other registries
With the zb tool, you can benchmark a zot registry or any other container image registry that conforms to the OCI Distribution Specification published by the Open Container Initiative (OCI).
We recommend installing and benchmarking with zb when you install zot.
The zb project is hosted with zot on GitHub at project-zot. From GitHub, you can download the zb binary or you can build zb from the source. You can also directly run the released docker image.
Download the executable binary for your server platform and architecture under "Assets" on the GitHub zot releases page.
The binary image is named using the target platform and architecture from the Supported platforms and architectures table. For example, the binary for an Intel-based MacOS server is zb-darwin-amd64.
To build the zb binary, copy or clone the zot project from GitHub and execute the make bench command in the zot directory. Use the same command options that you used to build zot, as shown:
make OS=os ARCH=architecture bench
For example, the following command builds zb for an Intel-based MacOS server:
make OS=darwin ARCH=amd64 bench
In this example, the resulting executable file is zb-darwin-amd64 in the zot/bin directory.
A sample Dockerfile for zb is available at Dockerfile-zb.
The original filename of the executable file will reflect the build options, such as zb-linux-amd64. For convenience, you can rename the executable to simply zb.
The instructions and examples in this guide use zb as the name of the executable file.
cosigned for validating container images before deployment
The Open Container Initiative (OCI) is an open governance structure for the express purpose of creating open industry standards around container formats and runtimes.
This document describes a step-by-step procedure towards achieving an OCI-native secure software supply chain using zot in collaboration with other open source tools. The following diagram shows a portion of the CI/CD pipeline.
stacker is a standalone tool for building OCI images via a declarative yaml format. The output of the build process is a container image in an OCI layout.
zot is a production-ready vendor-neutral OCI image registry server purely based on the OCI Distribution Specification. If stacker is used to build the OCI image, it can also be used to publish the built image to an OCI registry.
High availability of the zot registry is supported by the following features:
Stateless zot instances to simplify scale out
Shared remote storage
Bare-metal and Kubernetes deployments
To ensure high availability of the registry, zot supports a clustering scheme with stateless zot instances/replicas fronted by a load balancer and a shared remote backend storage. This scheme allows the registry service to remain available even if a few replicas fail or become unavailable. Load balancing across many zot replicas can also increase aggregate network throughput.
Beginning with zot release v2.1.0, you can design a highly scalable cluster that does not require configuring the load balancer to direct repository queries to specific zot instances within the cluster. See Scale-out clustering. Scale-out clustering is the preferred method if you are running v2.1.0 or later.
Clustering is supported in both bare-metal and Kubernetes environments.
In a stateless clustering scheme, the image data is stored in the remote storage backend and the registry cache is disabled by turning off deduplication.
The OCI Distribution Specification imposes certain rules about the HTTP URI paths to which various ecosystem tools must conform. Consider these rules when setting the HTTP prefixes during load balancing and ingress gateway configuration.
Clustering is supported by using multiple stateless zot replicas with shared S3 storage and an HAProxy (with sticky session) load balancing traffic to the replicas. Each replica is responsible for one or more repositories.
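A minimal HAProxy sketch of this scheme, assuming three local replicas (addresses, ports, and cookie names are illustrative; a production configuration also needs timeouts, health checks, and TLS):

```
frontend zot
    mode http
    bind *:8080
    default_backend zot-cluster

backend zot-cluster
    mode http
    balance roundrobin
    # Sticky sessions: pin each client to the same replica via a cookie
    cookie SERVER insert indirect nocache
    server zot1 127.0.0.1:9000 check cookie zot1
    server zot2 127.0.0.1:9001 check cookie zot2
    server zot3 127.0.0.1:9002 check cookie zot3
```

The cookie directive keeps each client on one replica, so repository state cached per replica stays consistent across a client's requests.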
Using GraphQL for Enhanced Searches - zotregistry.dev
A GraphQL backend server within zot's registry search engine provides efficient and enhanced search capabilities. You can submit a GraphQL structured query as an API call or you can use a browser to access the GraphQL Playground, an interactive graphical environment for GraphQL queries.
GraphQL is a query language for APIs. A GraphQL server, as implemented in zot's registry search engine, executes GraphQL queries that match schema recognized by the server. In response, the server returns a structure containing the requested information. The schema currently recognized by zot are those that correspond to the queries listed in What GraphQL queries are supported.
To perform a search, compose a GraphQL structured query for a specific search and deliver it to zot using one of the methods described in the following sections.
You can submit a GraphQL structured query as the HTTP data payload in a direct API call using a shell tool such as cURL or Postman. GraphQL queries are sent to the zot search extension API:
/v2/_zot/ext/search
The following example submits a zot GraphQL query using cURL:
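A minimal sketch of such a request, composed here with Python's standard library rather than cURL (the localhost address and the RepoListWithNewestImage query are illustrative; any supported query can be substituted):

```python
import json
import urllib.request

# Illustrative endpoint; adjust to where your zot instance listens.
ENDPOINT = "http://localhost:8080/v2/_zot/ext/search"

# GraphQL queries are sent as a JSON body with a "query" field.
payload = json.dumps(
    {"query": "{ RepoListWithNewestImage { Results { Name } } }"}
).encode()

request = urllib.request.Request(
    ENDPOINT,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending the request requires a running registry, e.g.:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response))
print(request.full_url)
```

The server's response is a JSON structure whose shape mirrors the fields requested in the query.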
A highly available zot registry can be easily implemented using zot's registry synchronization feature.
In the zot configuration, the sync extension allows a zot instance to mirror another zot instance with various container image download policies, including on-demand and periodic downloads. You can use the zot sync function combined with a load balancer such as HAProxy to implement a highly available registry.
Two failover configurations are possible:
Active/standby
Registry requests are sent by the load balancer to the active zot instance, while a standby instance mirrors the active. If the load balancer detects a failure of the active instance, it then sends requests to the standby instance.
Active/active
Registry requests are load-balanced between two zot instances, each of which mirrors the other.
The highly available zot registry described in this article differs from zot clustering. Although zot clustering provides a level of high availability, the instances share common storage, whose failure would affect all instances. In the method described in this article, each instance has its own storage, providing an additional level of safety.
An active/standby zot registry can be implemented between two zot instances by configuring the sync extension in the standby instance to mirror the other instance. In this scheme:
a load balancer such as HAProxy is deployed for active/passive load balancing of the zot instances
each zot instance is configured as a standalone registry with its own storage
the standby zot instance has its sync extension enabled to periodically synchronize with (mirror) the active instance
With periodic synchronization, a window of failure exists between synchronization actions. For example, if an image is posted to the active instance soon after the standby has synchronized with the active, and then the active fails, the standby will not have the new image. To minimize this exposure, we recommend keeping the synchronization period as small as practical.
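In this scheme, the standby instance's sync settings can be sketched as follows (the active instance's URL and the polling interval are illustrative, and the exact attribute set may differ across zot releases):

```json
{
  "extensions": {
    "sync": {
      "enable": true,
      "registries": [
        {
          "urls": ["http://zot-active:8080"],
          "onDemand": false,
          "pollInterval": "60s",
          "content": [{ "prefix": "**" }]
        }
      ]
    }
  }
}
```

A shorter pollInterval narrows the window during which a newly pushed image exists only on the active instance.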
An active/active zot registry can be implemented between two zot instances by configuring the sync extension in each instance to point to the other instance. In this scheme:
a load balancer such as HAProxy or a DNS-based routing scheme is deployed for load balancing between zot instances
each zot instance is configured as a standalone registry with its own storage
each zot instance has its sync extension enabled to periodically synchronize with the other instance
With periodic synchronization, a window of failure exists between synchronization actions. For example, if an image is posted to instance A soon after instance B has synchronized with instance A, and then instance A fails, instance B will not have the new image. To minimize this exposure, we recommend keeping the synchronization period as small as practical.
Immutable image tag support is achieved by leveraging authorization policies.
It is considered best practice to avoid changing the content once a software version has been released. While zot does not have an explicit configuration flag to make image tags immutable, the same effect can be achieved with authorization as follows.
By setting the defaultPolicy to "read" and "create" for a particular repository, images can be pushed (once) and pulled but further updates are rejected.
{..."repositories": {
  "**": {
    "defaultPolicy": ["read", "create"]
  }
}}
Using kind for Deployment Testing - zotregistry.dev
The procedure described installs a kind cluster with a zot registry at localhost:5001 and then loads and runs a "hello" app to test the installation. Although the procedure is given as a series of steps, you can find a complete shell script to perform these steps at the end of this article.
A zot registry can mirror one or more upstream OCI registries, including popular cloud registries such as Docker Hub and Google Container Registry (gcr.io).
A key use case for zot is to act as a mirror for upstream registries. If an upstream registry is OCI distribution-spec conformant for pulling images, you can use zot's sync feature to implement a downstream mirror, synchronizing OCI images and corresponding artifacts. Because synchronized images are stored in zot's local storage, registry mirroring allows for a fully distributed disconnected container image build pipeline. Container image operations terminate in local zot storage, which may reduce network latency and costs.
Because zot is an OCI-only registry, any upstream image stored in the Docker image format is converted to OCI format when downloading to zot. In the conversion, some non-OCI attributes may be lost. Signatures, for example, are removed due to the incompatibility between formats.
For mirroring an upstream registry, two common use cases are a fully mirrored or a pull through (on-demand) cache registry.
As with git, wherein every clone is a full repository, you can configure your local zot instance to be a fully mirrored OCI registry. For this mode, configure zot for synchronization by periodic polling, not on-demand. Zot copies and caches a full copy of every image on the upstream registry, updating the cache whenever polling discovers a change in content or image version at the upstream registry.
For a pull through cache mirrored registry, configure zot for on-demand synchronization. When an image is first requested from the local zot registry, the image is downloaded from the upstream registry and cached in local storage. Subsequent requests for the same image are served from zot's cache. Images that have not been requested are not downloaded. If a polling interval is also configured, zot periodically polls the upstream registry for changes, updating any cached images if changes are detected.
Because Docker Hub rate-limits pulls and does not support catalog listing, do not use polled mirroring with Docker Hub. Use only on-demand mirroring with Docker Hub.
Mirroring zot using the sync feature allows you to easily migrate a registry. In situations such as the following, zot mirroring provides an easy solution.
Migrating an existing zot or non-zot registry to a new location.
Provided that the source registry is OCI-compliant for image pulls, you can mirror the registry to a new zot registry, delete the old registry, and reroute network traffic to the new registry.
Updating (or downgrading) a zot registry.
To minimize downtime during an update, or to avoid any incompatibilities between zot releases that would preclude an in-place update, you can bring up a new zot registry with the desired release and then migrate from the existing registry.
To ensure a complete migration of the registry contents, set a polling interval in the configuration of the new zot registry and set prefix to **, as shown in this example:
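A sketch of such a sync configuration for the new registry follows (the source registry URL and interval are illustrative):

```json
{
  "extensions": {
    "sync": {
      "enable": true,
      "registries": [
        {
          "urls": ["http://old-registry:8080"],
          "onDemand": false,
          "pollInterval": "6h",
          "content": [{ "prefix": "**" }]
        }
      ]
    }
  }
}
```

The ** prefix matches every repository path, so periodic polling copies the entire contents of the source registry.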
zot supports a range of monitoring tools including logging, metrics, and benchmarking.
The following sections describe how to configure logging and monitoring with zot. You can use zot's benchmarking tool to test your configuration and deployment, as described in Benchmarking zot with zb.
Use zot's built-in profiling tools to collect and analyze runtime performance.
The profiling capabilities within zot allow a zot administrator to collect and export a range of diagnostic performance data such as CPU intensive function calls, memory allocations, and execution traces. The collected data can then be analyzed using Go tools and a variety of available visualization tools.
If authentication is enabled, only a zot admin user can access the APIs for profiling.
All examples in this article assume that the zot registry is running at localhost:8080.
The zot source code incorporates golang's pprof package of runtime analysis tools to collect data for the following performance-related profiles:
Profile      | Description
allocs       | A sampling of all past memory allocations.
block        | Stack traces that led to blocking on synchronization primitives.
cmdline      | The command line invocation of the current program.
goroutine    | Stack traces of all current goroutines. Use debug=2 as a URL query parameter to export in the same format as an unrecovered panic.
heap         | A sampling of memory allocations of live objects. You can specify the gc GET parameter to run GC before taking the heap sample.
mutex        | Stack traces of holders of contended mutexes.
profile      | CPU usage profile. You can specify the duration in the seconds URL query parameter. After receiving the profile file, use the go tool pprof command to investigate the profile.
threadcreate | Stack traces that led to the creation of new OS threads.
trace        | A trace of execution of the current program. You can specify the duration in the seconds URL query parameter. After you get the trace file, use the go tool trace command to investigate the trace.
To return a current HTML-format profile list along with a count of currently available records for each profile, use the following API command:
/v2/_zot/pprof/
If authentication is enabled, only an admin user can access this API.
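Assuming a registry reachable at localhost:8080 (as stated earlier in this article), the profile index can be fetched with a simple cURL command; this is a sketch, not the only valid invocation:

```shell
# List the available profiles and a count of currently available records for each.
curl http://localhost:8080/v2/_zot/pprof/
```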
This command example creates an output data file named "cpu.prof".
The query parameter ?seconds=<number> specifies the number of seconds to gather the profile data. If this parameter is not specified, the default is 30 seconds.
In this example, the raw output data is redirected to a file named "cpu.prof". Alternatively, you can use curl -O to create a file with the default profile name (in this case, "profile"). If no output file is specified by either a cURL flag or an output redirection, the cURL command fails with "Failure writing output to destination".
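A capture along these lines might look as follows; the profile endpoint path follows the /v2/_zot/pprof/ prefix shown above, and the 60-second duration is illustrative:

```shell
# Collect CPU usage data for 60 seconds and redirect the raw output to "cpu.prof".
curl -s "http://localhost:8080/v2/_zot/pprof/profile?seconds=60" > cpu.prof
```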
The command output file is in a machine-readable format that can be interpreted by performance analyzers.
Analyzing the CPU usage profile using go tool pprof
Go's pprof package provides a variety of presentation formats for analyzing runtime performance.
When an HTTP port is specified as a command flag, the go tool pprof command installs and opens a local web server that provides a web interface for viewing and analyzing the profile data. This example opens a localhost page at port 9090 for viewing the CPU usage data captured in the profile file named "cpu.prof".
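The workflow described above might be invoked as follows; the port 9090 and the file name "cpu.prof" are assumptions carried over from the earlier capture example:

```shell
# Serve a web UI at localhost:9090 for exploring the captured CPU profile.
go tool pprof -http=localhost:9090 cpu.prof
```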
Retention policies are configured in the storage section of the zot configuration file under the retention attribute. One or more policies can be grouped under the policies attribute.
By default, if no retention policies are defined, all tags are retained.
If at least one keepTags policy is defined for a repository, all tags not matching those policies are removed. To avoid unintended removals, we recommend defining a default policy, as described in Configuration notes.
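As a sketch of the layout described above, the following fragment defines a repository-specific policy plus a catch-all default policy that retains all tags; the repository patterns, tag patterns, and counts are illustrative values, not recommendations:

```json
{
  "storage": {
    "rootDirectory": "/var/lib/zot",
    "retention": {
      "policies": [
        {
          "repositories": ["prod/**"],
          "keepTags": [
            { "patterns": ["v2.*", "latest"] },
            { "mostRecentlyPushedCount": 10 }
          ]
        },
        {
          "repositories": ["**"],
          "keepTags": [
            { "patterns": [".*"] }
          ]
        }
      ]
    }
  }
}
```

The final policy matches every repository and keeps every tag, serving as the default policy recommended above.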
A cluster of zot instances can be easily scaled with no repo-specific intelligence in the load balancing scheme, using:
Stateless zot instances to simplify scale out
Shared remote storage
zot release v2.1.0 or later
Beginning with zot release v2.1.0, a new "scale-out" architecture greatly reduces the configuration required when deploying large numbers of zot instances. As before, multiple identical zot instances run simultaneously using the same shared reliable storage, but with improved scale and performance in large deployments. A highly scalable cluster can be architected by automatically sharding based on repository name so that each zot instance is responsible for a subset of repositories.
In a cloud deployment, the shared backend storage (such as AWS S3) and metadata storage (such as DynamoDB) can also be easily scaled along with the zot instances.
For high availability clustering with earlier zot releases, see zot Clustering.
Each repo is served by one zot replica, and that replica is solely responsible for serving all images of that repo. A repo in storage can be written to only by the zot replica responsible for that repo.
When a zot replica in the cluster receives an image push or pull request for a repo, the receiving replica hashes the repo path and consults a hash table to determine which replica is responsible for the repo.
If the hash indicates that another replica is responsible, the receiving replica forwards the request to the responsible replica and then acts as a proxy, returning the response to the requestor.
If the hash indicates that the current (receiving) replica is responsible, the request is handled locally.
For better resistance to collisions and preimage attacks, zot uses SipHash as the hashing algorithm.
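The owner-selection step described above can be sketched as hashing the repo path and reducing modulo the member count. zot itself uses SipHash keyed with the configured hash key; this illustration substitutes FNV-1a from Go's standard library, and the member addresses are hypothetical:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// memberIndex picks which cluster member owns a repo by hashing its
// path and reducing the hash modulo the member count. The mapping is
// deterministic, so every replica computes the same owner for a repo.
func memberIndex(repo string, memberCount int) int {
	h := fnv.New64a()
	h.Write([]byte(repo))
	return int(h.Sum64() % uint64(memberCount))
}

func main() {
	members := []string{"127.0.0.1:9000", "127.0.0.1:9001", "127.0.0.1:9002"}
	for _, repo := range []string{"alpine", "myorg/app", "library/busybox"} {
		fmt.Printf("%s is served by %s\n", repo, members[memberIndex(repo, len(members))])
	}
}
```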
Either of the following two schemes can be used to reach the cluster.
When a single entry point load balancer such as HAProxy is deployed, the number of zot replicas can easily be expanded by simply adding the IP addresses of the new replicas in the load balancer configuration.
When the load balancer receives an image push or pull request for a repo, it forwards the request to any replica in the cluster. No repo-specific programming of the load balancer is needed because the load balancer does not need to know which replica owns which repo. The replicas themselves can determine this.
Because the scale-out architecture greatly simplifies the role of the load balancer, it may be possible to eliminate the load balancer entirely. A scheme such as DNS-based routing can be implemented, exposing the zot replicas directly to the clients.
In these examples, clustering is supported by using multiple stateless zot replicas with shared S3 storage and an HAProxy (with sticky session) load balancer forwarding traffic to the replicas.
In the replica configuration, each replica must have a list of its peers configured in the "members" section of the JSON structure. This is a list of reachable addresses or hostnames. Each replica owns one of these addresses.
The replica must also have a hash key for hashing the repo path of the image request and a TLS certificate for authenticating with its peers.
A sample cluster configuration for each replica is available in the zot project documentation; see the "cluster" section in the JSON structure.
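A minimal sketch of the "cluster" section described above follows; the addresses, key, and certificate path are illustrative placeholders, and the hash key shown is an example value only:

```json
{
  "distSpecVersion": "1.1.0",
  "http": { "address": "0.0.0.0", "port": "9000" },
  "storage": { "rootDirectory": "/tmp/zot" },
  "cluster": {
    "members": [
      "127.0.0.1:9000",
      "127.0.0.1:9001",
      "127.0.0.1:9002"
    ],
    "hashKey": "loremipsumdolors",
    "tls": {
      "cacert": "/etc/zot/ca.crt"
    }
  }
}
```

Each replica in the cluster carries the same "members" list and hash key, and each owns exactly one of the listed addresses.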
An overview of zot build-time and runtime security hardening features, including:
Build-time hardening such as PIE-mode builds
Minimal-build option for smaller attack surface
Open Source Security Foundation best practices for CI/CD
Non-root deployment
Robust authentication/authorization options
The zot project takes a defense-in-depth approach to security, applying industry-standard best practices at various stages. Recognizing that security hardening and product features are sometimes in conflict with each other, we also provide flexibility both at build and deployment time.
The zot binary is built with the PIE build mode enabled to take advantage of address space layout randomization (ASLR) support in modern operating systems such as Linux. While zot is intended to be a long-running service (without frequent restarts), PIE mode prevents attackers from developing a generic attack that depends on predictable memory addresses across multiple zot deployments.
Functionality in zot is broadly organized as a core Distribution Specification implementation, with additional features provided as extensions. The rationale behind this approach is to minimize or control the library dependencies that get included in the binary, and consequently the attack surface.
We currently build and release two image flavors:
minimal, which is a minimal Distribution Specification conformant registry, and
full, which incorporates the minimal build and all extensions
The minimal flavor is for the security-minded and minimizes the number of dependencies and libraries. The full flavor is for the functionality-minded, with the caveat that the attack surface of the binary is potentially broader. However, these are by no means the only options. Our build (via the Makefile) provides the flexibility to pick and choose extensions in order to build a binary between minimal and full. For example,
make EXTENSIONS=search binary
produces a zot binary with only the search feature enabled.
zot is an open source project and all code submissions are open and transparent. Every pull request (PR) submitted to the project repository must be reviewed by the code owners. We have additional CI/CD workflows monitoring for unreviewed commits.
All PRs must pass the full CI/CD pipeline checks including unit, functional, and integration tests, code quality and style checks, and performance regressions. In addition, all binaries produced are subjected to further security scans to detect any known vulnerabilities.
All interactions with zot are over HTTP APIs, and htpasswd-based local authentication, LDAP, mutual TLS, and token-based authentication mechanisms are supported. We strongly recommend enabling a suitable mechanism for your deployment use case in order to prevent unauthorized access. See the provided authentication examples.
Following authentication, it is further possible to allow or deny actions by a user on a particular repository stored on the zot registry. See the provided access control examples.
We understand that no software is perfect and in spite of our best efforts, security bugs may be found. Refer to our security policy for taking a responsible course of action when reporting security bugs.
Data handling in zot revolves around two main principles: that data and APIs on the wire conform to the OCI Distribution Specification and that data on the disk conforms to the OCI Image Layout Specification. As a result, any client that is compliant with the Distribution Specification can read from or write to a zot registry. Furthermore, the actual storage is simply an OCI Image Layout. With only these two specification documents in hand, the entire data flow inside can be easily understood.
zot does not implement, support, or require any vendor-specific protocols, including that of Docker.
Because zot supports the OCI image layout, it can readily host and serve any directories holding a valid OCI image layout even when those directories have been created elsewhere. This property of zot is suitable for use cases in which container images are independently built, stored, and transferred, but later need to be served over the network.
Exposing flexibility in storage capabilities is a key tenet for catering to the requirements of varied environments ranging from cloud to on-premises to IoT.
Most modern filesystems buffer data in RAM and flush it to disk after a delay. This improves performance at the cost of keeping more data in RAM. In embedded devices such as a Raspberry Pi, where RAM may be very limited and at a premium, it is desirable to flush data to disk more frequently. The zot storage configuration exposes an option called commit which, when enabled, causes data writes to be committed to disk immediately. This option is disabled by default.
Deduplication is a storage space saving feature wherein only a single copy of specific content is maintained on disk while many different image manifests may hold references to that same content. The deduplication option (dedupe) is also available for supported cloud storage backends.
Upon startup, zot enforces the dedupe status on the existing storage. If the dedupe status upon startup is true, zot deduplicates all blobs found in storage, both local and remote. If the status upon startup is false, zot restores cloud storage blobs to their original state. There is no need for zot to restore local filesystem storage if hard links are used.
After an image is deleted by deleting an image manifest, the corresponding blobs can be purged to free up space. However, since Distribution Specification APIs are not transactional between blob and manifest lifecycle, care must be taken so as not to put the storage in an inconsistent state. Garbage collection in zot is an inline feature meaning that it is not necessary to take the registry offline. See Configuring garbage collection for details.
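The storage options discussed above can be sketched together in one configuration fragment; the attribute names follow the zot storage configuration, while the directory path and durations are illustrative:

```json
{
  "storage": {
    "rootDirectory": "/var/lib/zot",
    "commit": true,
    "dedupe": true,
    "gc": true,
    "gcDelay": "1h",
    "gcInterval": "24h"
  }
}
```

Here commit forces immediate disk writes, dedupe enables single-copy storage of shared blobs, and the gc settings control inline garbage collection timing.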
The scrub function, available as an extension, makes it possible to ascertain data validity by computing hashes on blobs periodically and continuously so that any bit rot is caught and reported early.
zot can store and serve files from one or more local directories. A minimum of one root directory is required for local hosting, but additional hosted directories can be added. When accessed by HTTP APIs, all directories can appear as a single data store.
Remote filesystems that are mounted and accessible locally, such as NFS or FUSE mounts, are treated as local filesystems.
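The multiple-directory layout described above can be sketched with the root directory plus additional sub-paths; the mount paths and route names here are hypothetical examples:

```json
{
  "storage": {
    "rootDirectory": "/var/lib/zot",
    "subPaths": {
      "/a": { "rootDirectory": "/mnt/nfs/zot-a" },
      "/b": { "rootDirectory": "/mnt/ssd/zot-b" }
    }
  }
}
```

Repositories pushed under the /a and /b route prefixes are stored in their respective directories, while everything else lands in the root directory; over the HTTP APIs they appear as a single data store.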