Releases: EnterpriseDB/kubectl-cnp

v1.15.1

Release date: 27 May 2022 (patch release)

Minor changes:

  • Enable configuration of the archive_timeout setting for PostgreSQL, which was previously a fixed parameter (by default set to 5 minutes); see the sketch after this list
  • Introduce a new field called backupOwnerReference in the scheduledBackup resource to set the ownership reference on the created backup resources, with possible values being none (default), self (objects owned by the scheduled backup object), and cluster (owned by the Postgres cluster object)
  • Introduce automated collection of pg_stat_wal metrics for PostgreSQL 14 or higher in the native Prometheus exporter
  • Set the default operand image to PostgreSQL 14.3
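
A minimal sketch of the first two items above. The resource names and the archive_timeout value are hypothetical; only the archive_timeout parameter and the backupOwnerReference field (with its none/self/cluster values) come from this release:

    # Hypothetical Cluster raising archive_timeout (previously fixed at 5 minutes)
    apiVersion: postgresql.k8s.enterprisedb.io/v1
    kind: Cluster
    metadata:
      name: cluster-example
    spec:
      instances: 3
      postgresql:
        parameters:
          archive_timeout: "10min"
      storage:
        size: 1Gi
    ---
    # Hypothetical ScheduledBackup owning the Backup objects it creates
    apiVersion: postgresql.k8s.enterprisedb.io/v1
    kind: ScheduledBackup
    metadata:
      name: backup-example
    spec:
      schedule: "0 0 0 * * *"    # every day at midnight (six-field cron)
      cluster:
        name: cluster-example
      backupOwnerReference: self # none (default) | self | cluster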

Fixes:

  • Fix fencing by killing orphaned processes related to postgres
  • Enable the CSV log pipe inside the WithActiveInstance function to collect logs from recovery bootstrap jobs and help in the troubleshooting phase
  • Prevent bootstrapping a new cluster with a non-empty backup object store, removing the risk of overwriting existing backups
  • With the recovery bootstrap method, make sure that the recovery object store and the backup object store are different to avoid overwriting existing backups
  • Re-queue the reconciliation loop if the RBAC for backups is not yet created
  • Fix an issue with backups and the wrong specification of the cluster name property
  • Ensure that operator pods always have the latest certificates in the case of a deployment of the operator in high availability, with more than one replica
  • Fix the cnp report operator command to correctly handle the case of a deployment of the operator in high availability, with more than one replica
  • Properly propagate changes in the cluster’s inheritedMetadata set of labels and annotations to the related resources of the cluster without requiring a restart
  • Fix the cnp plugin to correctly parse any custom configmap and secret name defined in the operator deployment, instead of relying just on the default values
  • Fix the local building of the documentation by using the minidocks/mkdocs image for mkdocs

v1.15.0

Release date: 21 April 2022

Features:

  • Fencing: Introduction of the fencing capability for a cluster or a given set of PostgreSQL instances through the k8s.enterprisedb.io/fencedInstances annotation, which, if not empty, disables switchover/failovers in the cluster; fenced instances are shut down and the pod is kept running (while considered not ready) for inspection and emergencies (see the sketch after this list)
  • LDAP authentication: Allow LDAP Simple Bind and Search+Bind configuration options in the pg_hba.conf to be defined in the Postgres cluster spec declaratively, enabling the optional use of Kubernetes secrets for sensitive options such as ldapbindpasswd
  • Introduction of the primaryUpdateMethod option, accepting the values of switchover (default) and restart, to be used in case of unsupervised primaryUpdateStrategy; this method controls what happens to the primary instance during the rolling update procedure
  • New report command in the kubectl cnp plugin for better diagnosis and more effective troubleshooting of both the operator and a specific Postgres cluster
  • Prune those Backup objects that are no longer in the backup object store
  • Specification of target timeline and LSN in Point-In-Time Recovery bootstrap method
  • Support for the AWS_SESSION_TOKEN authentication token in AWS S3 through the sessionToken option
  • Default image name for PgBouncer in Pooler pods set to quay.io/enterprisedb/pgbouncer:1.17.0
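
A minimal sketch of the fencing annotation and the primaryUpdateMethod option above. Cluster and instance names are hypothetical; the JSON-array value of the annotation follows the description in this release:

    # Fencing instance cluster-example-1 (a value of '["*"]' would fence the whole cluster)
    apiVersion: postgresql.k8s.enterprisedb.io/v1
    kind: Cluster
    metadata:
      name: cluster-example
      annotations:
        k8s.enterprisedb.io/fencedInstances: '["cluster-example-1"]'
    spec:
      instances: 3
      primaryUpdateStrategy: unsupervised
      primaryUpdateMethod: restart   # default is switchover
      storage:
        size: 1Gi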

Fixes:

  • Base backup detection for Point-In-Time Recovery via targetTime now works correctly: previously, specifying a recovery target prior to the latest available backup was not possible, because the detection algorithm always selected the last backup as the starting point
  • Improved resilience of hot standby sensitive parameters by relying on the values the operator collects from pg_controldata
  • Control of hot standby sensitive parameters correctly works with EPAS instances now
  • Intermediate certificates handling has been improved by properly discarding invalid entries, instead of throwing an invalid certificate error
  • Prometheus exporter metric collection queries in the databases are now committed instead of rolled back (this might result in a change in the number of rolled back transactions that are visible from downstream dashboards, where applicable)

v1.14.0

Release date: 25 March 2022

Features:

  • Natively support Google Cloud Storage for backup and recovery, by taking advantage of the features introduced in Barman Cloud 2.19
  • Improved observability of backups through the introduction of the LastBackupSucceeded condition for the Cluster object
  • Support update of Hot Standby sensitive parameters: max_connections, max_prepared_transactions, max_locks_per_transaction, max_wal_senders, max_worker_processes (see the sketch after this list)
  • Add the Online upgrade in progress phase in the Cluster object to show when an online upgrade of the operator is in progress
  • Ability to inherit an AWS IAM Role as an alternative way to provide credentials for the S3 object storage
  • Support for Opaque secrets for Pooler’s authQuerySecret and certificates
  • Updated default PostgreSQL version to 14.2
  • Add a new command to kubectl cnp plugin named maintenance to set maintenance window to cluster(s) in one or all namespaces across the Kubernetes cluster
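
A minimal sketch of updating hot standby sensitive parameters on a running cluster, assuming the usual spec.postgresql.parameters layout; the values shown are examples only:

    apiVersion: postgresql.k8s.enterprisedb.io/v1
    kind: Cluster
    metadata:
      name: cluster-example
    spec:
      instances: 3
      postgresql:
        parameters:
          max_connections: "200"       # can now be changed after cluster creation
          max_worker_processes: "16"   # example value
      storage:
        size: 1Gi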

Container Images:

  • Latest PostgreSQL and EPAS containers include Barman Cloud 2.19

Security Enhancements:

  • Stronger RBAC enforcement for namespaced operator installations with Operator Lifecycle Manager, including OpenShift. OpenShift users are advised to update to this version.

Fixes:

  • Allow the instance manager to retry an interrupted pg_rewind by preserving a copy of the original pg_control file
  • Clean up stale PID files before running pg_rewind
  • Force sorting by key in primary_conninfo to avoid random restarts with PostgreSQL versions prior to 13
  • Preserve ServiceAccount changes (e.g., labels, annotations) upon reconciliation
  • Disable enforcement of the imagePullPolicy default value
  • Improve initdb validation for WAL segment size
  • Properly handle the targetLSN option when recovering a cluster with the LSN specified
  • Fix custom TLS certificates validation by allowing a certificates chain both in the server and CA certificates

v1.13.0

Release date: 17 February 2022

Features:

  • Support for Snappy compression. Snappy is a fast compression option for backups that increases the speed of uploads to the object store at the cost of a lower compression ratio
  • Support for tagging files uploaded to the Barman object store. This feature requires Barman 2.18 in the operand image
  • Extension of the status of a Cluster with status.conditions. The condition ContinuousArchiving indicates that the Cluster has started to archive WAL files
  • Improve the status command of the cnp plugin for kubectl with additional information: add a Cluster Summary section showing the status of the Cluster and a Certificates Status section including the status of the certificates used in the Cluster along with the time left to expire
  • Support the new barman-cloud-check-wal-archive command to detect a non-empty backup destination when creating a new cluster
  • Add support for using a Secret to add default monitoring queries through the MONITORING_QUERIES_SECRET configuration variable (see the sketch after this list)
  • Allow the user to restrict container’s permissions using AppArmor (on Kubernetes clusters deployed with AppArmor support)
  • Add Windows platform support to the cnp plugin for kubectl; the plugin is now available on Windows x86 and ARM
  • Drop support for Kubernetes 1.18 and deprecated API versions
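
A sketch of the MONITORING_QUERIES_SECRET setting above. The operator configuration ConfigMap name and namespace depend on your installation, and the Secret name is hypothetical:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgresql-operator-controller-manager-config    # installation-dependent
      namespace: postgresql-operator-system                  # installation-dependent
    data:
      MONITORING_QUERIES_SECRET: my-monitoring-queries       # hypothetical Secret name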

Container Images:

  • PostgreSQL containers include Barman 2.18

Security Fix:

  • Add a coherence check of the username field inside the owner and superuser secrets; previously, a malicious user could have used the secrets to change the password of any PostgreSQL user

Fixes:

  • Fix a memory leak in code fetching status from Postgres pods
  • Disable PostgreSQL self-restart after a crash. The instance controller handles the lifecycle of the PostgreSQL instance
  • Prevent modification of spec.postgresUID and spec.postgresGID fields in validation webhook. Changing these fields after Cluster creation makes PostgreSQL unable to start
  • Reduce the log verbosity from the backup and WAL archiving handling code
  • Correct a bug resulting in a Cluster being marked as Healthy when not initialized yet
  • Allow standby servers in clusters with a very high WAL production rate to switch to streaming once they are aligned
  • Fix a race condition during the startup of a PostgreSQL pod that could seldom lead to a crash
  • Fix a race condition that could lead to a failure initializing the first PVC in a Cluster
  • Remove an extra restart of a just demoted primary Pod before joining the Cluster as a replica
  • Correctly handle replication-sensitive PostgreSQL configuration parameters when recovering from a backup
  • Fix missing validation of PostgreSQL configurations during Cluster creation

v1.12.0

Release date: 11 January 2022

Features:

  • Add Kubernetes 1.23 to the list of supported Kubernetes distributions and remove end-to-end tests for 1.17, whose support by the Kubernetes project ended in December 2020
  • Improve the responsiveness of pod status checks in case of network issues by adding a connection timeout of 2 seconds and a communication timeout of 30 seconds. This change sets a limit on the time the operator waits for a pod to report its status before declaring it as failed, enhancing the robustness and predictability of a failover operation
  • Introduce the .spec.inheritedMetadata field to the Cluster, allowing the user to specify labels and annotations that will apply to all objects generated by the Cluster (see the sketch after this list)
  • Reduce the number of queries executed when calculating the status of an instance
  • Add a readiness probe for PgBouncer
  • Add support for custom Certification Authority of the endpoint of Barman’s backup object store when using Azure protocol
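
A minimal sketch of the .spec.inheritedMetadata field above; the labels and annotations shown are examples:

    apiVersion: postgresql.k8s.enterprisedb.io/v1
    kind: Cluster
    metadata:
      name: cluster-example
    spec:
      instances: 3
      inheritedMetadata:
        labels:
          environment: production   # applied to every object the Cluster generates
        annotations:
          owner: dba-team
      storage:
        size: 1Gi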

Fixes:

  • During a failover, wait to select a new primary until all the WAL streaming connections are closed. The operator now sets wal_sender_timeout and wal_receiver_timeout to 5 seconds by default, to make sure standby nodes will quickly notice if the primary has network issues
  • Change the WAL archiving strategy in replica clusters to fix rolling updates, by setting "archive_mode" to "always" for any PostgreSQL instance in a replica cluster; WAL uploads are then restricted to the current and target designated primary. A WAL file may be uploaded twice during switchovers, which is not an issue
  • Fix support for custom Certification Authority of the endpoint of Barman’s backup object store in replica clusters source
  • Use a fixed name for default monitoring config map in the cluster namespace
  • If the defaulting webhook is not working for any reason, the operator now updates the Cluster with the defaults also during the reconciliation cycle
  • Fix the comparison of resource requests and limits to fix a rare issue leading to an update of all the pods on every reconciliation cycle
  • Improve log messages from webhooks to also include the object namespace
  • Stop logging a “default” message at the start of every reconciliation loop
  • Stop logging a PodMonitor deletion on every reconciliation cycle if enablePodMonitor is false
  • Do not complain about possible architecture mismatch if a pod is not reachable

v1.11.0

Release date: 15 December 2021

Features:

  • Parallel WAL archiving and restore: allow the database to keep up with WAL generation on high write systems by introducing the backupObjectStore.maxParallel option to set the maximum number of parallel jobs to be executed during both WAL archiving (by PostgreSQL’s archive_command) and WAL restore (by restore_command). Using the parallel restore option can allow newly promoted standbys to reach a ready state faster by fetching needed WAL files to replay in parallel rather than sequentially
  • Default set of metrics for monitoring: a new ConfigMap called default-monitoring is automatically deployed in the same namespace as the operator and, by default, added to any existing Postgres cluster. This behavior can be changed globally by setting the MONITORING_QUERIES_CONFIGMAP parameter in the operator’s configuration, or at cluster level through the .spec.monitoring.disableDefaultQueries option (by default set to false)
  • Introduce the enablePodMonitor option in the monitoring section of a cluster to automatically manage a PodMonitor resource and seamlessly integrate with Prometheus
  • Improve the PostgreSQL shutdown procedure by trying to execute a smart shutdown for the first half of the desired stopDelay time, and a fast shutdown for the remaining half, before the pod is killed by Kubernetes
  • Add the switchoverDelay option to control the time given to the former primary to shut down gracefully and archive all the WAL files before promoting the new primary (by default, Cloud Native PostgreSQL waits indefinitely, favoring data durability)
  • Handle changes to resource requests and limits for a PostgreSQL Cluster by issuing a rolling update
  • Improve the status command of the cnp plugin for kubectl with additional information: streaming replication status, total size of the database, role of an instance in the cluster
  • Enhance support of workloads with many parallel workers by enabling configuration of the dynamic_shared_memory_type and shared_memory_type parameters for PostgreSQL’s management of shared memory
  • Propagate labels and annotations defined at cluster level to the associated resources, including pods (deletions are not supported)
  • Automatically remove pods that have been evicted by the Kubelet
  • Manage automated resizing of persistent volumes in Azure through the ENABLE_AZURE_PVC_UPDATES operator configuration option, by issuing a rolling update of the cluster if needed (disabled by default)
  • Introduce the k8s.enterprisedb.io/reconciliationLoop annotation that, when set to disabled on a given Postgres cluster, prevents the reconciliation loop from running
  • Introduce the postInitApplicationSQL option as part of the initdb bootstrap method to specify a list of SQL queries to be executed on the main application database as a superuser immediately after the cluster has been created (see the sketch after this list)
  • Support for EDB Postgres Advanced 14.1
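
A minimal sketch combining the enablePodMonitor and postInitApplicationSQL options above; the database, owner, and SQL statement are examples:

    apiVersion: postgresql.k8s.enterprisedb.io/v1
    kind: Cluster
    metadata:
      name: cluster-example
    spec:
      instances: 3
      monitoring:
        enablePodMonitor: true   # operator manages a PodMonitor for Prometheus
      bootstrap:
        initdb:
          database: app
          owner: app
          postInitApplicationSQL:
            - CREATE TABLE audit_log (id bigserial PRIMARY KEY)   # example query
      storage:
        size: 1Gi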

Fixes:

  • Liveness probe now correctly handles the startup process of a PostgreSQL server. This fixes an issue, reported by a few customers, affecting a restarted standby server that needed to recover WAL files to reach a consistent state but could not do so before the liveness probe timeout kicked in, leaving the pods in CrashLoopBackOff status
  • Liveness probe now correctly handles the case of a former primary that needs to use pg_rewind to re-align with the current primary after a timeline divergence. This prevents the pod of the new standby from being repeatedly killed by Kubernetes
  • Reduce client-side throttling from Postgres pods (e.g. Waited for 1.182388649s due to client-side throttling, not priority and fairness, request: GET)
  • Disable Public Key Infrastructure (PKI) initialization on OpenShift and OLM installations, by using the provided one
  • When changing configuration parameters that require a restart, always leave the primary as last
  • Mark a PVC to be ready only after a job has been completed successfully, preventing a race condition in PVC initialization
  • Use the correct public key when renewing the expired webhook TLS secret.
  • Fix an overflow when parsing an LSN
  • Remove stale PID files at startup
  • Let the Pooler resource inherit the imagePullSecret defined in the operator, if it exists

v1.10.0

Release date: 11 November 2021

Features:

  • Connection Pooling with PgBouncer: introduce the Pooler resource and controller to automatically manage a PgBouncer deployment to be used as a connection pooler for a local PostgreSQL Cluster. The feature includes TLS client/server connections, password authentication, High Availability, pod templates support, configuration of key PgBouncer parameters, PAUSE/RESUME, logging in JSON format, Prometheus exporter for stats, pools, and lists
  • Backup Retention Policies: support definition of recovery window retention policies for backups (e.g. ‘30d’ to ensure a recovery window of 30 days; see the sketch after this list)
  • In-Place updates of the operator: introduce an in-place online update of the instance manager, which removes the need to perform a rolling update of the entire cluster following an update of the operator. By default this option is disabled (please refer to the documentation for more detailed information)
  • Limit the list of options that can be customized in the initdb bootstrap method to dataChecksums, encoding, localeCollate, localeCType, walSegmentSize. This makes the options array obsolete; it is planned for removal in the v2 API
  • Introduce the postInitTemplateSQL option as part of the initdb bootstrap method to specify a list of SQL queries to be executed on the template1 database as a superuser immediately after the cluster has been created. This feature allows you to include default objects in all application databases created in the cluster
  • New default metrics added to the instance Prometheus exporter: Postgres version, cluster name, and first point of recoverability according to the backup catalog
  • Retry taking a backup after a failure
  • Build awareness of Barman Cloud capabilities in order to prevent the operator from invoking recently introduced features (such as retention policies, or Azure Blob Container storage) on operand images that do not yet include them, as operand images are not frequently updated
  • Integrate the output of the status command of the cnp plugin with information about the backup
  • Introduce a new annotation that reports the status of a PVC (being initialized or ready)
  • Set the cluster name in the k8s.enterprisedb.io/cluster label for every object generated in a Cluster, including Backup objects
  • Drop support for deprecated API version postgresql.k8s.enterprisedb.io/v1alpha1 on the Cluster, Backup, and ScheduledBackup kinds
  • Set default operand image to PostgreSQL 14.1
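
A minimal sketch of a 30-day recovery window retention policy as described above; the destination path and credential Secret names are placeholders:

    apiVersion: postgresql.k8s.enterprisedb.io/v1
    kind: Cluster
    metadata:
      name: cluster-example
    spec:
      instances: 3
      backup:
        retentionPolicy: "30d"   # keep backups needed for a 30-day recovery window
        barmanObjectStore:
          destinationPath: s3://my-bucket/backups   # placeholder
          s3Credentials:
            accessKeyId:
              name: aws-creds    # placeholder Secret name
              key: ACCESS_KEY_ID
            secretAccessKey:
              name: aws-creds
              key: ACCESS_SECRET_KEY
      storage:
        size: 1Gi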

Security:

  • Set allowPrivilegeEscalation to false for the operator containers securityContext

Fixes:

  • Disable primary PodDisruptionBudget during maintenance in single-instance clusters
  • Use the correct certification authority (CA) during recovery operations
  • Prevent Postgres connection leaking when checking WAL archiving status before taking a backup
  • Let WAL archive/restore sleep for 100ms following transient errors that would flood logs otherwise

v1.9.2

Release date: 15 October 2021

Features:

  • Enhance JSON log with two new loggers: wal-archive for PostgreSQL's archive_command, and wal-restore for restore_command in a standby

Fixes:

  • Enable WAL archiving during standby promotion (previously, .history files were not archived)
  • Pass the --cloud-provider option to Barman Cloud tools only when using Barman 2.13 or higher to avoid errors with older operands
  • Wait for the pod of the primary to be ready before triggering a backup

v1.9.1

Release date: 30 September 2021

This release celebrates the launch of PostgreSQL 14 by making it the default major version when a new Cluster is created without defining a specific image name.

Fixes:

  • Fix an issue causing the Error while getting barman endpoint CA secret message to appear in the logs of the primary pod, which prevented backups from working correctly
  • Properly retry requesting a new backup in case of temporary communication issues with the instance manager

v1.9.0

Release date: 28 September 2021

Features:

  • Add Kubernetes 1.22 to the list of supported Kubernetes distributions, and remove 1.16
  • Introduce support for the --restore-target-wal option in pg_rewind, in order to fetch WAL files from the backup archive, if necessary (available only with PostgreSQL/EPAS 13+)
  • Expose a default metric for the Prometheus exporter that estimates the number of pages in the pg_catalog.pg_largeobject table in each database
  • Enhance the performance of WAL archiving and fetching, through local in-memory cache

Fixes:

  • Explicitly set the postgres user when invoking pg_isready, as required by the restricted SCC in OpenShift
  • Properly update the FirstRecoverabilityPoint in the status
  • Set archive_mode = always on the designated primary if a backup is requested
  • Minor bug fixes