diff --git a/docs/how-to/h-configure-s3-radosgw.md b/docs/how-to/h-configure-s3-radosgw.md index d13a40d37b..d07d4b34ba 100644 --- a/docs/how-to/h-configure-s3-radosgw.md +++ b/docs/how-to/h-configure-s3-radosgw.md @@ -6,9 +6,14 @@ If you are using an earlier version, check the [Juju 3.0 Release Notes](https:// # Configure S3 for RadosGW -A Charmed PostgreSQL backup can be stored on any S3-compatible storage. S3 access and configurations are managed with the [s3-integrator charm](https://charmhub.io/s3-integrator). +A PostgreSQL backup can be stored on any S3-compatible storage. S3 access and configurations are managed with the [s3-integrator charm](https://charmhub.io/s3-integrator). -This guide will teach you how to deploy and configure the s3-integrator charm on Ceph via [RadosGW](https://docs.ceph.com/en/quincy/man/8/radosgw/), send the configuration to a Charmed PostgreSQL application, and update it. (To configure S3 for AWS, see [this guide](/t/9681)) +This guide will teach you how to deploy and configure the s3-integrator charm on Ceph via [RadosGW](https://docs.ceph.com/en/quincy/man/8/radosgw/), send the configuration to a Charmed PostgreSQL application, and update it. +> For AWS, see the guide [How to configure S3 for AWS](/t/9681) + +[note] +The Charmed PostgreSQL backup tool ([pgBackRest](https://pgbackrest.org/)) can currently only interact with S3-compatible storages if they work with [SSL/TLS](https://github.com/pgbackrest/pgbackrest/issues/2340) (backup via the plain HTTP is currently not supported). 
+[/note] ## Configure s3-integrator First, install the MinIO client and create a bucket: diff --git a/docs/how-to/h-create-backup.md b/docs/how-to/h-create-backup.md index 8d9325c1cf..de5a558953 100644 --- a/docs/how-to/h-create-backup.md +++ b/docs/how-to/h-create-backup.md @@ -9,9 +9,9 @@ If you are using an earlier version, check the [Juju 3.0 Release Notes](https:// This guide contains recommended steps and useful commands for creating and managing backups to ensure smooth restores. ## Prerequisites -* A cluster with at [least three nodes](/t/charmed-postgresql-how-to-manage-units/9689?channel=14/stable) deployed +* A cluster with at [least three nodes](/t/9689?channel=14/stable) deployed * Access to S3 storage -* [Configured settings for S3 storage](/t/charmed-postgresql-how-to-configure-s3/9681?channel=14/stable) +* [Configured settings for S3 storage](/t/9681?channel=14/stable) ## Summary - [Save your current cluster credentials](#heading--save-credentials), as you'll need them for restoring @@ -38,7 +38,7 @@ Once Charmed PostgreSQL is `active` and `idle`, you can create your first backup ```shell juju run postgresql/leader create-backup ``` -By default, backups created with command above will be **full** backups: a copy of *all* your data will be stored in S3. There are 2 other supported types of backups (available in revision 416+, currently in channel `14/edge` only): +By default, backups created with the command above will be **full** backups: a copy of *all* your data will be stored in S3. There are 2 other supported types of backups (available in revision 416+, currently in channel `14/edge` only): * Differential: Only modified files since the last full backup will be stored. * Incremental: Only modified files since the last successful backup (of any type) will be stored. 
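As a quick illustration of how the backup types can combine in practice, here is a sketch of a weekly cadence. The three `type` values are the ones documented above; the schedule itself and the echoed command are assumptions for the example, not charm defaults:

```shell
# Illustrative schedule only: full on Sundays, differential mid-week,
# incremental on all other days. The `type` values are the documented ones.
weekday=$(date +%u)          # 1 = Monday ... 7 = Sunday
case "$weekday" in
  7) type=full ;;            # copy of *all* data
  4) type=differential ;;    # changes since the last full backup
  *) type=incremental ;;     # changes since the last successful backup
esac
echo "juju run postgresql/leader create-backup type=$type"
```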
@@ -48,8 +48,8 @@ juju run postgresql/leader create-backup type={full|differential|incremental}
 ```
 **Tip**: To avoid unnecessary service downtime, always use non-primary units for the action `create-backup`. Keep in mind that:
-* TLS enabled: disables the command from running on *primary units*.
-* TLS **not** enabled: disables the command from running on *non-primary units*.
+* When TLS is enabled, `create-backup` can only run on replicas (non-primary units).
+* When TLS is **not** enabled, `create-backup` can only run on the primary unit.
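For example, with TLS enabled the action must run on a replica. A small sketch of picking a non-primary unit — the unit names are illustrative, and on a live model you would obtain the primary from the `get-primary` action rather than hard-coding it:

```shell
# Illustrative unit names; on a real deployment, obtain the primary with:
#   juju run postgresql/leader get-primary
PRIMARY="postgresql/0"
UNITS="postgresql/0 postgresql/1 postgresql/2"

# Pick the first unit that is not the primary and back up from it.
for u in $UNITS; do
  if [ "$u" != "$PRIMARY" ]; then REPLICA="$u"; break; fi
done
echo "$REPLICA"
# juju run "$REPLICA" create-backup
```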

## List backups

You can list your available, failed, and in-progress backups by running the `list-backups` command:
diff --git a/docs/how-to/h-deploy-azure.md b/docs/how-to/h-deploy-azure.md
new file mode 100644
index 0000000000..dd266a6916
--- /dev/null
+++ b/docs/how-to/h-deploy-azure.md
@@ -0,0 +1,347 @@
+# How to deploy on Azure
+
+[Azure](https://azure.com/) is a cloud computing platform developed by Microsoft. It provides management, access, and development of applications and services to individuals, companies, and governments through its global infrastructure. Access the Azure web console at [portal.azure.com](https://portal.azure.com/).
+
+## Summary
+* [Set up Juju and Azure tooling](#set-up-juju-and-azure-tooling)
+  * [Install Juju and Azure CLI](#install-juju-and-azure-cli)
+  * [Authenticate](#authenticate)
+  * [Bootstrap Juju controller on Azure](#bootstrap-juju-controller)
+* [Deploy charms](#deploy-charms)
+* [Expose database (optional)](#expose-database-optional)
+* [Clean up](#clean-up)
+
+---
+
+## Set up Juju and Azure tooling
+[note type="caution"]
+**Warning**: The `Azure interactive` method (with web browser authentication `service-principal-secret-via-browser`) described here is only supported starting with Juju 3.6-rc1!
+[/note]
+### Install Juju and Azure CLI
+Install Juju via snap:
+```shell
+sudo snap install juju --channel 3.6/edge
+```
+
+Follow the installation guide for:
+* [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt) - the Azure CLI for Linux
+
+To check that they are correctly installed, run the commands below and compare with the sample outputs:
+
+```console
+> juju version
+3.6-rc1-genericlinux-amd64
+
+> az --version
+azure-cli 2.65.0
+core 2.65.0
+telemetry 1.1.0
+
+Dependencies:
+msal 1.31.0
+azure-mgmt-resource 23.1.1
+...
+
+Your CLI is up-to-date.
+```
+
+### Authenticate
+
+Follow [the official Juju Azure documentation](https://juju.is/docs/juju/microsoft-azure) and check [the extra explanation about possible options](/t/15219). Choose the authentication method that fits you best.
+
+This guide describes the currently recommended `interactive` method with web browser authentication (`service-principal-secret-via-browser`). This method does not require logging in with the Azure CLI locally, but it **requires an Azure subscription**.
+
+The first mandatory step is to [create an Azure subscription](https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/create-subscription) - you will need the Azure subscription ID for Juju.
+
+Once you have it, add Azure credentials to Juju:
+```none
+juju add-credential azure
+```
+This will start a script that will help you set up the credentials, where you will be asked to fill in a set of parameters:
+* `credential-name`: Fill this with a sensible name that will help you identify the credential set, say ``
+* `region`: Select any default region that is most convenient for you to deploy your controller and applications. Note that credentials are not region-specific.
+* `auth type`: Select `interactive`, which is the recommended way to authenticate to Azure using Juju.
+* `subscription_id`: Use the value `` from the Azure subscription created in the previous step.
+* `application_name`: Generate a random string to avoid collisions with other users or applications.
+* `role-definition-name`: Generate a random string to avoid collisions with other users or applications, and store it as ``.
+
+After providing this information, you will be asked to authenticate the requests via web browser, as shown in the following example output:
+
+```shell
+To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code to authenticate.
+``` + +In the browser, open the [authentication page](https://microsoft.com/devicelogin) and enter the code `` provided in the output. + +You will be asked to authenticate twice, for allowing the creation of two different resources in Azure. + +If successful, you will see a confirmation that the credentials have been correctly added locally: + +```shell +Credential added locally for cloud "azure". +``` + +[details=Full sample output of `juju add-credential azure`] +```shell +> juju add-credential azure + +This operation can be applied to both a copy on this client and to the one on a controller. +No current controller was detected and there are no registered controllers on this client: either bootstrap one or register one. +Enter credential name: azure-test-credentials1 + +Regions + centralus + eastus + ... + +Select region [any region, credential is not region specific]: eastus + +Auth Types + interactive + service-principal-secret + managed-identity + +Select auth type [interactive]: interactive + +Enter subscription-id: [USE-YOUR-REAL-AZURE-SUBSCRIPTION-ID] + +Enter application-name (optional): azure-test-name1 + +Enter role-definition-name (optional): azure-test-role1 + +Note: your user account needs to have a role assignment to the +Azure Key Vault application (....). +You can do this from the Azure portal or using the az cli: + az ad sp create --id ... + +Initiating interactive authentication. + +To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code HIDDEN to authenticate. +To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code HIDDEN to authenticate. +Credential "azure-test-credentials1" added locally for cloud "azure". 
+```
+[/details]
+
+### Bootstrap Juju controller
+
+Once successfully completed, bootstrap the new Juju controller on Azure:
+```shell
+> juju bootstrap azure azure
+
+Creating Juju controller "azure" on azure/centralus
+Looking for packaged Juju agent version 3.6-rc1 for amd64
+No packaged binary found, preparing local Juju agent binary
+Launching controller instance(s) on azure/centralus...
+ - juju-aeb5ea-0 (arch=amd64 mem=3.5G cores=1)
+Installing Juju agent on bootstrap instance
+Waiting for address
+Attempting to connect to 192.168.16.4:22
+Attempting to connect to 172.170.35.99:22
+Connected to 172.170.35.99
+Running machine configuration script...
+Bootstrap agent now started
+Contacting Juju controller at 192.168.16.4 to verify accessibility...
+
+Bootstrap complete, controller "azure" is now available
+Controller machines are in the "controller" model
+
+Now you can run
+    juju add-model 
+to create a new model to deploy workloads.
+```
+
+You can check the [Azure instances availability](https://portal.azure.com/#browse/Microsoft.Compute%2FVirtualMachines):
+
+![image|689x313](upload://bB5lCMIHtL1KToftKQVv7z86aoi.png)
+
+## Deploy charms
+
+Create a new Juju model if you don't have one already:
+```shell
+juju add-model welcome
+```
+> (Optional) Increase the debug level if you are troubleshooting charms:
+> ```shell
+> juju model-config logging-config='=INFO;unit=DEBUG'
+> ```
+
+The following command deploys PostgreSQL and [Data Integrator](https://charmhub.io/data-integrator), a charm that can be used to request a test database:
+
+```shell
+juju deploy postgresql
+juju deploy data-integrator --config database-name=test123
+juju integrate postgresql data-integrator
+```
+Check the status:
+```shell
+> juju status --relations
+
+Model    Controller  Cloud/Region     Version    SLA          Timestamp
+welcome  azure       azure/centralus  3.6-rc1.1  unsupported  12:56:16+02:00
+
+App              Version  Status  Scale  Charm            Channel  Rev  Exposed  Message
+data-integrator           active      1  data-integrator
latest/stable 41 no +postgresql 14.12 active 1 postgresql 14/stable 468 no + +Unit Workload Agent Machine Public address Ports Message +data-integrator/0* active idle 1 172.170.35.131 +postgresql/0* active idle 0 172.170.35.199 5432/tcp Primary + +Machine State Address Inst id Base AZ Message +0 started 172.170.35.199 juju-491ebe-0 ubuntu@22.04 +1 started 172.170.35.131 juju-491ebe-1 ubuntu@22.04 + +Integration provider Requirer Interface Type Message +data-integrator:data-integrator-peers data-integrator:data-integrator-peers data-integrator-peers peer +postgresql:database data-integrator:postgresql postgresql_client regular +postgresql:database-peers postgresql:database-peers postgresql_peers peer +postgresql:restart postgresql:restart rolling_op peer +postgresql:upgrade postgresql:upgrade upgrade peer +``` + +Once deployed, request the credentials for your newly bootstrapped PostgreSQL database: +```shell +juju run data-integrator/leader get-credentials +``` + +Example output: +```shell +postgresql: + data: '{"database": "test123", "external-node-connectivity": "true", "requested-secrets": + "[\"username\", \"password\", \"tls\", \"tls-ca\", \"uris\"]"}' + database: test123 + endpoints: 192.168.0.5:5432 + password: Jqi0QckCAADOFagl + uris: postgresql://relation-4:Jqi0QckCAADOFagl@192.168.0.5:5432/test123 + username: relation-4 + version: "14.12" +``` + +At this point, you can access your DB inside Azure VM using the internal IP address. All further Juju applications will use the database through the internal network: +```shell +> psql postgresql://relation-4:Jqi0QckCAADOFagl@192.168.0.5:5432/test123 + +psql (14.12 (Ubuntu 14.12-0ubuntu0.22.04.1)) +Type "help" for help. + +test123=> +``` + +From here you can begin to use your newly deployed PostgreSQL. Learn more about operations like scaling, enabling TLS, managing users and passwords, and more in the [Charmed PostgreSQL tutorial](/t/9707). 
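The `uris` field of the `get-credentials` output can be handed to `psql` directly. A sketch of extracting it with `awk` — the YAML sample below is copied from the example output above and saved locally so the step can be shown without a live controller:

```shell
# Sample of the get-credentials output (values from the example above),
# written to a file so the extraction can be demonstrated offline.
cat > /tmp/creds.yaml <<'EOF'
postgresql:
  database: test123
  endpoints: 192.168.0.5:5432
  password: Jqi0QckCAADOFagl
  uris: postgresql://relation-4:Jqi0QckCAADOFagl@192.168.0.5:5432/test123
  username: relation-4
EOF

# Pull out the ready-made connection URI...
URI=$(awk '/^[[:space:]]*uris:/ {print $2}' /tmp/creds.yaml)
echo "$URI"
# ...and hand it straight to psql:
# psql "$URI"
```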
+ +## Expose database (optional) + +If it is necessary to access the database from outside of Azure, open the Azure firewall using the simple [juju expose](https://juju.is/docs/juju/juju-expose) functionality: +```shell +juju expose postgresql +``` +> Be wary that [opening ports to the public is risky](https://www.beyondtrust.com/blog/entry/what-is-an-open-port-what-are-the-security-implications). + +Once exposed, you can connect your database using the same credentials as above. This time use the Azure VM public IP assigned to the PostgreSQL instance. You can see this with `juju status`: +```shell +> juju status postgresql + +... +Model Controller Cloud/Region Version SLA Timestamp +welcome azure azure/centralus 3.6-rc1.1 unsupported 13:11:26+02:00 + +App Version Status Scale Charm Channel Rev Exposed Message +data-integrator active 1 data-integrator latest/stable 41 no +postgresql 14.12 active 1 postgresql 14/stable 468 yes + +Unit Workload Agent Machine Public address Ports Message +data-integrator/0* active idle 1 172.170.35.131 +postgresql/0* active idle 0 172.170.35.199 5432/tcp Primary + +Machine State Address Inst id Base AZ Message +0 started 172.170.35.199 juju-491ebe-0 ubuntu@22.04 +1 started 172.170.35.131 juju-491ebe-1 ubuntu@22.04 + +Integration provider Requirer Interface Type Message +data-integrator:data-integrator-peers data-integrator:data-integrator-peers data-integrator-peers peer +postgresql:database data-integrator:postgresql postgresql_client regular +postgresql:database-peers postgresql:database-peers postgresql_peers peer +postgresql:restart postgresql:restart rolling_op peer +postgresql:upgrade postgresql:upgrade upgrade peer +... +``` +Note the IP and port (`172.170.35.199:5432`) and connect via `psql`: +``` +> psql postgresql://relation-4:Jqi0QckCAADOFagl@172.170.35.199:5432/test123 + +psql (14.12 (Ubuntu 14.12-0ubuntu0.22.04.1)) +Type "help" for help. 
+
+test123=>
+```
+To close public access, run:
+```shell
+juju unexpose postgresql
+```
+
+## Clean up
+
+[note type="caution"]
+Always clean up Azure resources that are no longer necessary - they could be costly!
+[/note]
+
+See all controllers on your machine with the following command:
+```
+> juju controllers
+...
+Controller  Model    User   Access     Cloud/Region     Models  Nodes  HA    Version
+azure*      welcome  admin  superuser  azure/centralus  2       1      none  3.6-rc1.1
+```
+
+To destroy the `azure` Juju controller and remove the Azure instance, run the command below. **All your data will be permanently removed.**
+```shell
+juju destroy-controller azure --destroy-all-models --destroy-storage --force
+```
+
+Next, check and manually delete all unnecessary Azure VM instances and resources. To list all your Azure VMs and resources, run the following commands (make sure no running resources are left):
+```shell
+az vm list
+az resource list
+```
+
+List your Juju credentials:
+```shell
+> juju credentials
+
+...
+Client Credentials:
+Cloud  Credentials
+azure  azure-test-name1
+...
+```
+Remove Azure CLI credentials from Juju:
+```shell
+juju remove-credential azure azure-test-name1
+```
+
+After deleting the credentials, the `interactive` process may still leave the role resource and its assignment behind. We recommend checking whether these are still present with:
+
+```shell
+az role definition list --name azure-test-role1
+```
+> Use it without the `--name` argument to get the full list.
+
+You can also check whether you still have a role assignment bound to `azure-test-role1` registered using:
+
+```shell
+az role assignment list --role azure-test-role1
+```
+
+If this is the case, you can remove the role assignment first and then the role itself with the following commands:
+
+```shell
+az role assignment delete --role azure-test-role1
+az role definition delete --name azure-test-role1
+```
+
+Finally, log out of the Azure CLI user credentials to prevent any credential leakage:
+```shell
+az logout
+```
\ No newline at end of file
diff --git a/docs/how-to/h-deploy-multi-az.md b/docs/how-to/h-deploy-multi-az.md
new file mode 100644
index 0000000000..5cb4ffeda2
--- /dev/null
+++ b/docs/how-to/h-deploy-multi-az.md
@@ -0,0 +1,188 @@
+# Deploy on multiple availability zones (AZ)
+
+When deploying to hardware/VMs, it is important to spread all the database copies (Juju units) across different hardware servers, or better yet, across different [availability zones](https://en.wikipedia.org/wiki/Availability_zone) (AZ). This guarantees that no service-critical components are shared across the DB cluster - no single basket holding all the eggs.
+
+This guide will take you through deploying a PostgreSQL cluster on GCE using 3 availability zones. All Juju units will be set up to sit in their dedicated zones only, which effectively guarantees database copy survival across all available AZs.
+
+[note]
+This documentation assumes that your cloud supports and provides availability zone concepts. This is enabled by default on EC2/GCE and supported by LXD/MicroCloud.
+
+See the [Additional resources](#additional-resources) section for more details about AZ on specific clouds.
+[/note]
+
+## Summary
+* [Set up GCE on Google Cloud](#set-up-gce-on-google-cloud)
+* [Deploy PostgreSQL with Juju zones constraints](#deploy-postgresql-with-juju-zones-constraints)
+  * [Simulation: A node gets lost](#simulation-a-node-gets-lost)
+* [Additional resources](#additional-resources)
+---
+
+## Set up GCE on Google Cloud
+
+Let's deploy the [PostgreSQL cluster](/t/11237) on GCE using all 3 zones of the `us-east1` region (`us-east1-b`, `us-east1-c`, `us-east1-d`) and make sure all units always sit in their dedicated zones only.
+
+[note type="caution"]
+**Warning**: Creating the following GCE resources may cost you money - be sure to monitor your GCloud costs.
+[/note]
+
+Log into Google Cloud and [bootstrap GCE on Google Cloud](/t/15722):
+```shell
+gcloud auth login
+gcloud iam service-accounts keys create sa-private-key.json --iam-account=juju-gce-account@[your-gcloud-project-12345].iam.gserviceaccount.com
+sudo mv sa-private-key.json /var/snap/juju/common/sa-private-key.json
+sudo chmod a+r /var/snap/juju/common/sa-private-key.json
+
+juju add-credential google
+juju bootstrap google gce
+juju add-model mymodel
+```
+
+## Deploy PostgreSQL with Juju zones constraints
+
+Juju supports availability zones via **constraints**. Read more about zones in the [Juju documentation](https://juju.is/docs/juju/constraint#heading--zones).
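Once the units are up, you can sanity-check that no two machines share a zone. A sketch that counts distinct AZ values in the machine lines of `juju status` — sample lines are inlined so the parsing runs without a live model, and the column position of the AZ field is an assumption based on the status outputs shown below:

```shell
# Machine lines copied from a sample `juju status` (AZ is the 6th column).
cat > /tmp/machines.txt <<'EOF'
0 started 34.148.44.51 juju-e7c0db-0 ubuntu@22.04 us-east1-d RUNNING
1 started 34.23.202.220 juju-e7c0db-1 ubuntu@22.04 us-east1-c RUNNING
2 started 34.138.167.85 juju-e7c0db-2 ubuntu@22.04 us-east1-b RUNNING
EOF

# Count distinct zones; for a proper spread it equals the number of machines.
distinct=$(awk '{print $6}' /tmp/machines.txt | sort -u | wc -l)
echo "machines in distinct zones: $distinct"
```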
+ +The command below demonstrates how Juju automatically deploys Charmed PostgreSQL VM using [Juju constraints](https://juju.is/docs/juju/constraint#heading--zones): + +```shell +juju deploy postgresql -n 3 \ + --constraints zones=us-east1-b,us-east1-c,us-east1-d +``` + +After a successful deployment, `juju status` will show an active application: +```shell +Model Controller Cloud/Region Version SLA Timestamp +mymodel gce google/us-east1 3.5.4 unsupported 00:16:52+02:00 + +App Version Status Scale Charm Channel Rev Exposed Message +postgresql 14.12 active 3 postgresql 14/stable 468 no + +Unit Workload Agent Machine Public address Ports Message +postgresql/0 active idle 0 34.148.44.51 5432/tcp +postgresql/1 active idle 1 34.23.202.220 5432/tcp +postgresql/2* active idle 2 34.138.167.85 5432/tcp Primary + +Machine State Address Inst id Base AZ Message +0 started 34.148.44.51 juju-e7c0db-0 ubuntu@22.04 us-east1-d RUNNING +1 started 34.23.202.220 juju-e7c0db-1 ubuntu@22.04 us-east1-c RUNNING +2 started 34.138.167.85 juju-e7c0db-2 ubuntu@22.04 us-east1-b RUNNING +``` + +and each unit/vm will sit in the separate AZ out of the box: +```shell +> gcloud compute instances list +NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS +juju-a82dd9-0 us-east1-b n1-highcpu-4 10.142.0.30 34.23.252.144 RUNNING # Juju Controller +juju-e7c0db-2 us-east1-b n2-highcpu-2 10.142.0.32 34.138.167.85 RUNNING # postgresql/2 +juju-e7c0db-1 us-east1-c n2-highcpu-2 10.142.0.33 34.23.202.220 RUNNING # postgresql/1 +juju-e7c0db-0 us-east1-d n2-highcpu-2 10.142.0.31 34.148.44.51 RUNNING # postgresql/0 +``` + +### Simulation: A node gets lost +Let's destroy a GCE node and recreate it using the same AZ: +```shell +> gcloud compute instances delete juju-e7c0db-1 +No zone specified. Using zone [us-east1-c] for instance: [juju-e7c0db-1]. +The following instances will be deleted. 
Any attached disks configured to be auto-deleted will be deleted unless they are attached to any other instances or the `--keep-disks` flag is given and specifies them for keeping. Deleting a disk is
+irreversible and any data on the disk will be lost.
+ - [juju-e7c0db-1] in [us-east1-c]
+
+Do you want to continue (Y/n)? Y
+
+Deleted [https://www.googleapis.com/compute/v1/projects/data-platform-testing-354909/zones/us-east1-c/instances/juju-e7c0db-1].
+```
+
+```shell
+Model    Controller  Cloud/Region     Version  SLA          Timestamp
+mymodel  gce         google/us-east1  3.5.4    unsupported  00:25:14+02:00
+
+App         Version  Status  Scale  Charm       Channel    Rev  Exposed  Message
+postgresql  14.12    active    2/3  postgresql  14/stable  468  no
+
+Unit           Workload  Agent  Machine  Public address  Ports     Message
+postgresql/0   active    idle   0        34.148.44.51    5432/tcp
+postgresql/1   unknown   lost   1        34.23.202.220   5432/tcp  agent lost, see 'juju show-status-log postgresql/1'
+postgresql/2*  active    idle   2        34.138.167.85   5432/tcp  Primary
+
+Machine  State    Address        Inst id        Base          AZ          Message
+0        started  34.148.44.51   juju-e7c0db-0  ubuntu@22.04  us-east1-d  RUNNING
+1        down     34.23.202.220  juju-e7c0db-1  ubuntu@22.04  us-east1-c  RUNNING
+2        started  34.138.167.85  juju-e7c0db-2  ubuntu@22.04  us-east1-b  RUNNING
+```
+
+Here we should remove the no-longer-available GCE node and add a new one. Juju will create the replacement in the same AZ, `us-east1-c`:
+```shell
+> juju remove-unit postgresql/1 --force --no-wait
+WARNING This command will perform the following actions:
+will remove unit postgresql/1
+
+Continue [y/N]?
y +``` + +The command `juju status` shows the machines in a healthy state, but PostgreSQL HA recovery is necessary: +```shell +Model Controller Cloud/Region Version SLA Timestamp +mymodel gce google/us-east1 3.5.4 unsupported 00:30:09+02:00 + +App Version Status Scale Charm Channel Rev Exposed Message +postgresql 14.12 active 2 postgresql 14/stable 468 no + +Unit Workload Agent Machine Public address Ports Message +postgresql/0 active idle 0 34.148.44.51 5432/tcp +postgresql/2* active idle 2 34.138.167.85 5432/tcp Primary + +Machine State Address Inst id Base AZ Message +0 started 34.148.44.51 juju-e7c0db-0 ubuntu@22.04 us-east1-d RUNNING +2 started 34.138.167.85 juju-e7c0db-2 ubuntu@22.04 us-east1-b RUNNING +``` + +Request Juju to add a new unit in the proper AZ: +```shell +juju add-unit postgresql -n 1 +``` + +Juju uses the right AZ where the node is missing. Run `juju status`: +```shell +Model Controller Cloud/Region Version SLA Timestamp +mymodel gce google/us-east1 3.5.4 unsupported 00:30:42+02:00 + +App Version Status Scale Charm Channel Rev Exposed Message +postgresql active 2/3 postgresql 14/stable 468 no + +Unit Workload Agent Machine Public address Ports Message +postgresql/0 active idle 0 34.148.44.51 5432/tcp +postgresql/2* active idle 2 34.138.167.85 5432/tcp Primary +postgresql/3 waiting allocating 3 waiting for machine + +Machine State Address Inst id Base AZ Message +0 started 34.148.44.51 juju-e7c0db-0 ubuntu@22.04 us-east1-d RUNNING +2 started 34.138.167.85 juju-e7c0db-2 ubuntu@22.04 us-east1-b RUNNING +3 pending juju-e7c0db-3 ubuntu@22.04 us-east1-c starting +``` + +## Remove GCE setup + +[note type="caution"] +**Warning**: Do not forget to remove your test setup - it can be costly! 
+[/note] + +Check the list of currently running GCE instances: +```shell +> gcloud compute instances list +NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS +juju-a82dd9-0 us-east1-b n1-highcpu-4 10.142.0.30 34.23.252.144 RUNNING +juju-e7c0db-2 us-east1-b n2-highcpu-2 10.142.0.32 34.138.167.85 RUNNING +juju-e7c0db-3 us-east1-c n2d-highcpu-2 10.142.0.34 34.23.202.220 RUNNING +juju-e7c0db-0 us-east1-d n2-highcpu-2 10.142.0.31 34.148.44.51 RUNNING +``` + +Request Juju to clean all GCE resources: +```shell +juju destroy-controller gce --no-prompt --force --destroy-all-models +``` + +Re-check that there are no running GCE instances left (it should be empty): +```shell +gcloud compute instances list +``` \ No newline at end of file diff --git a/docs/how-to/h-enable-alert-rules.md b/docs/how-to/h-enable-alert-rules.md index 5f034055d7..52638adcb8 100644 --- a/docs/how-to/h-enable-alert-rules.md +++ b/docs/how-to/h-enable-alert-rules.md @@ -2,7 +2,7 @@ This guide will show how to set up [Pushover](https://pushover.net/) to receive alert notifications from the COS Alert Manager with [Awesome Alert Rules](https://samber.github.io/awesome-prometheus-alerts/). -Charmed PostgreSQL VM ships a pre-configured and pre-enabled [list of Awesome Alert Rules](https://github.com/canonical/postgresql-operator/tree/main/src/prometheus_alert_rules). +Charmed PostgreSQL VM ships a pre-configured and pre-enabled [list of Awesome Alert Rules].
Screenshot of alert rules in the Grafana web interface @@ -73,4 +73,5 @@ Do you have questions? [Contact us]! [Contact us]: /t/11852 [Charmed PostgreSQL VM operator]: /t/9697 -[COS Monitoring]: /t/10600 \ No newline at end of file +[COS Monitoring]: /t/10600 +[list of Awesome Alert Rules]: /t/15841 \ No newline at end of file diff --git a/docs/how-to/h-integrate.md b/docs/how-to/h-integrate.md index 10e07bc872..54a9761380 100644 --- a/docs/how-to/h-integrate.md +++ b/docs/how-to/h-integrate.md @@ -30,7 +30,7 @@ Integrations with charmed applications are supported via the modern [`postgresql ### Modern `postgresql_client` interface To integrate with a charmed application that supports the `postgresql_client` interface, run ```shell -juju integrate postgresql +juju integrate postgresql:database ``` To remove the integration, run diff --git a/docs/how-to/h-restore-backup.md b/docs/how-to/h-restore-backup.md index c1bbb00c07..1807f66571 100644 --- a/docs/how-to/h-restore-backup.md +++ b/docs/how-to/h-restore-backup.md @@ -78,4 +78,6 @@ However, if the user needs to restore to a specific point in time between differ juju run postgresql/leader restore restore-to-time="YYYY-MM-DDTHH:MM:SSZ" ``` -Your restore will then be in progress. \ No newline at end of file +Your restore will then be in progress. + +It’s also possible to restore to the latest point from a specific timeline by passing the ID of a backup taken on that timeline and `restore-to-time=latest` when requesting a restore. 
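The `restore-to-time` value must match the `YYYY-MM-DDTHH:MM:SSZ` layout shown above. One way to produce it with GNU `date` — the target moment here is purely illustrative:

```shell
# Format an illustrative UTC target moment in the layout restore-to-time expects.
TARGET=$(date -u -d '2024-01-15 10:30:00' '+%Y-%m-%dT%H:%M:%SZ')
echo "$TARGET"
# juju run postgresql/leader restore restore-to-time="$TARGET"
```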
\ No newline at end of file diff --git a/docs/how-to/h-rollback-minor.md b/docs/how-to/h-rollback-minor.md index 5989080da7..e7d7754ff5 100644 --- a/docs/how-to/h-rollback-minor.md +++ b/docs/how-to/h-rollback-minor.md @@ -10,14 +10,13 @@ If you are using an earlier version, check the [Juju 3.0 Release Notes](https:// After a `juju refresh`, if there are any version incompatibilities in charm revisions, its dependencies, or any other unexpected failure in the upgrade process, the process will be halted and enter a failure state. -Even if the underlying PostgreSQL cluster continues to work, it’s important to roll back the charm to -a previous revision so that an update can be attempted after further inspection of the failure. +Even if the underlying PostgreSQL cluster continues to work, it’s important to roll back the charm to a previous revision so that an update can be attempted after further inspection of the failure. [note type="caution"] **Warning:** Do NOT trigger `rollback` during the running `upgrade` action! It may cause an unpredictable PostgreSQL cluster state! [/note] -## Summary +## Summary of the rollback steps 1. **Prepare** the Charmed PostgreSQL VM application for the in-place rollback. 2. **Rollback**. Once started, all units in a cluster will be executed sequentially. The rollback will be aborted (paused) if the unit rollback has failed. 3. **Check**. Make sure the charm and cluster are in a healthy state again. @@ -26,7 +25,7 @@ a previous revision so that an update can be attempted after further inspection To execute a rollback, we use a similar procedure to the upgrade. The difference is the charm revision to upgrade to. In this guide's example, we will refresh the charm back to revision `182`. 
-It is necessary to re-run `pre-upgrade-check` action on the leader unit, to enter the upgrade recovery state: +It is necessary to re-run `pre-upgrade-check` action on the leader unit in order to enter the upgrade recovery state: ```shell juju run postgresql/leader pre-upgrade-check ``` @@ -38,16 +37,16 @@ When using a charm from charmhub: juju refresh postgresql --revision=182 ``` -When deploying from a local charm file, one must have the previous revision charm file and run: - -``` +When deploying from a local charm file, one must have the previous revision charm file and run the following command: +```shell juju refresh postgresql --path=./postgresql_ubuntu-22.04-amd64.charm ``` - -Where `postgresql_ubuntu-22.04-amd64.charm` is the previous revision charm file. +> where `postgresql_ubuntu-22.04-amd64.charm` is the previous revision charm file. The first unit will be rolled out and should rejoin the cluster after settling down. After the refresh command, the juju controller revision for the application will be back in sync with the running Charmed PostgreSQL revision. ## Step 3: Check -Future [improvements are planned](https://warthogs.atlassian.net/browse/DPE-2621) to check the state on pods/clusters on a low level. At the moment check `juju status` to make sure the cluster [state](/t/10844) is OK. \ No newline at end of file +Future [improvements are planned](https://warthogs.atlassian.net/browse/DPE-2621) to check the state on pods/clusters on a low level. + +For now, check `juju status` to make sure the cluster [state](/t/10844) is OK. \ No newline at end of file diff --git a/docs/how-to/h-scale.md b/docs/how-to/h-scale.md index a2abf2319a..ec2a0e08e2 100644 --- a/docs/how-to/h-scale.md +++ b/docs/how-to/h-scale.md @@ -6,7 +6,7 @@ If you are using an earlier version, check the [Juju 3.0 Release Notes](https:// # How to scale units -Replication in PostgreSQL is the process of creating copies of the stored data. 
This provides redundancy, which means the application can provide self-healing capabilities in case one replica fails. In this context, each replica is equivalent one juju unit.
+Replication in PostgreSQL is the process of creating copies of the stored data. This provides redundancy, which means the application can provide self-healing capabilities in case one replica fails. In this context, each replica is equivalent to one juju unit.
 
 This guide will show you how to establish and change the amount of juju units used to replicate your data.
 
@@ -16,6 +16,7 @@ To deploy PostgreSQL with multiple replicas, specify the number of desired units
 ```shell
 juju deploy postgresql --channel 14/stable -n 
 ```
+> It is recommended to use an odd number of units to prevent a [split-brain](https://en.wikipedia.org/wiki/Split-brain_(computing)) scenario.
 
 ### Primary vs. leader unit
 
@@ -27,6 +28,7 @@ To retrieve the juju unit that corresponds to the PostgreSQL primary, use the ac
 ```shell
 juju run postgresql/leader get-primary
 ```
+
 Similarly, the primary replica is displayed as a status message in `juju status`. However, one should note that this hook gets called on regular time intervals and the primary may be outdated if the status hook has not been called recently.
 
 [note]
diff --git a/docs/how-to/h-upgrade-minor.md b/docs/how-to/h-upgrade-minor.md
index ffcf34ff92..ec1ea6402d 100644
--- a/docs/how-to/h-upgrade-minor.md
+++ b/docs/how-to/h-upgrade-minor.md
@@ -7,9 +7,9 @@ If you are using an earlier version, check the [Juju 3.0 Release Notes](https://
 # Perform a minor upgrade
 
 **Example**: PostgreSQL 14.8 -> PostgreSQL 14.9
-(including simple charm revision bump: from revision 193 to revision 196).
+(including charm revision bump: e.g. Revision 193 -> Revision 196)

-This guide is part of [Charmed PostgreSQL Upgrades](/t/12086). Please refer to this page for more information and an overview of the content.
+This guide is part of [Charmed PostgreSQL Upgrades](/t/12086). Refer to this page for more information and an overview of the content.

 ## Summary
 - [**Pre-upgrade checks**](#pre-upgrade-checks): Important information to consider before starting an upgrade.
@@ -39,6 +39,7 @@ Some examples are operations like (but not limited to) the following:
 * Upgrading other connected/related/integrated applications simultaneously

 Concurrency with other operations is not supported, and it can lead the cluster into inconsistent states.
+
 ### Backups

 **Make sure to have a backup of your data when running any type of upgrade.**
@@ -57,7 +58,7 @@ This step is only valid when deploying from [charmhub](https://charmhub.io/).
 If a [local charm](https://juju.is/docs/sdk/deploy-a-charm) is deployed (revision is small, e.g. 0-10), make sure the proper/current local revision of the `.charm` file is available BEFORE going further. You might need it for a rollback.
 [/note]

-The first step is to record the revision of the running application as a safety measure for a rollback action. To accomplish this, simply run the `juju status` command and look for the deployed Charmed PostgreSQL revision in the command output, e.g.:
+The first step is to record the revision of the running application as a safety measure for a rollback action. To accomplish this, run the `juju status` command and look for the deployed Charmed PostgreSQL revision in the command output, e.g.:

 ```shell
 Model Controller Cloud/Region Version SLA Timestamp
@@ -115,7 +116,7 @@ All units will be refreshed (i.e. receive new charm content), and the upgrade wi
 First the `replica` units, then the `sync-standby` units, and lastly, the `leader` (or `primary`) unit.
 [/note]

- `juju status` will look like:
+ `juju status` will look similar to the output below:

 ```shell
 Model Controller Cloud/Region Version SLA Timestamp
@@ -170,7 +171,9 @@ After a `juju refresh`, if there are any version incompatibilities in charm revi

 The step must be skipped if the upgrade went well!

-Although the underlying PostgreSQL Cluster continues to work, it’s important to roll back the charm to a previous revision so that an update can be attempted after further inspection of the failure. Please switch to the dedicated [minor rollback](/t/12090) tutorial if necessary.
+Although the underlying PostgreSQL Cluster continues to work, it’s important to roll back the charm to a previous revision so that an update can be attempted after further inspection of the failure.
+
+> See: [How to perform a minor rollback](/t/12090)

 ## Post-upgrade check
diff --git a/docs/overview.md b/docs/overview.md
index b2a68da5e3..69b9e37c86 100644
--- a/docs/overview.md
+++ b/docs/overview.md
@@ -1,3 +1,5 @@
+> This is an **IAAS/VM** operator. To deploy on Kubernetes, see [Charmed PostgreSQL K8s](https://charmhub.io/postgresql-k8s).
+
 # Charmed PostgreSQL documentation

 Charmed PostgreSQL is an open-source software operator designed to deploy and operate object-relational databases on IAAS/VM. It packages the powerful database management system [PostgreSQL](https://www.postgresql.org/) into a charmed operator for deployment with [Juju](https://juju.is/docs/juju).
@@ -7,12 +9,6 @@ This charm offers automated operations management from day 0 to day 2. It is equ
 Charmed PostgreSQL meets the need of deploying PostgreSQL in a structured and consistent manner while providing flexibility in configuration. It simplifies deployment, scaling, configuration and management of relational databases in large-scale production environments reliably.

 This charmed operator is made for anyone looking for a comprehensive database management interface, whether for operating a complex production environment or simply as a playground to learn more about databases and charms.
-
-[note]
-This operator is built for **IAAS/VM**.
-
-For deployments in **Kubernetes** environments, see [Charmed PostgreSQL K8s](https://charmhub.io/postgresql-k8s).
-[/note]
+[info]: https://img.shields.io/badge/info-blue
+[warning]: https://img.shields.io/badge/warning-yellow
+[critical]: https://img.shields.io/badge/critical-red
\ No newline at end of file
diff --git a/docs/explanation/e-statuses.md b/docs/reference/r-statuses.md
similarity index 84%
rename from docs/explanation/e-statuses.md
rename to docs/reference/r-statuses.md
index a35965e00f..20dc754ba7 100644
--- a/docs/explanation/e-statuses.md
+++ b/docs/reference/r-statuses.md
@@ -1,6 +1,6 @@
-# Charm Statuses Explanations
+# Charm statuses

-> :warning: **WARNING** : it is an work-in-progress article. Do NOT use it in production! Contact [Canonical Data Platform team](https://chat.charmhub.io/charmhub/channels/data-platform) if you are interested in the topic.
+> :warning: **WARNING**: This is a work-in-progress article. Do NOT use it in production! Contact the [Canonical Data Platform team](https://chat.charmhub.io/charmhub/channels/data-platform) if you are interested in the topic.

 The charm follows [standard Juju applications statuses](https://juju.is/docs/olm/status-values#heading--application-status). Here you can find the expected end-users reaction on different statuses:
@@ -17,6 +17,7 @@ The charm follows [standard Juju applications statuses](https://juju.is/docs/olm
 | **blocked** | failed to start Patroni | TODO: error/retry? | |
 | **blocked** | Failed to create postgres user | The charm couldn't create the default `postgres` database user due to connection problems | Connect to the database using the `operator` user and the password from the `get-password` action, then run `CREATE ROLE postgres WITH LOGIN SUPERUSER;` |
 | **blocked** | Failed to restore backup | The database couldn't start after the restore | The charm needs a fix in the code to recover from this status and enable a new restore to be requested |
+| **blocked** | Please choose one endpoint to use. No need to relate all of them simultaneously! | [The modern / legacy interfaces](https://charmhub.io/postgresql/docs/e-legacy-charm) should not be used simultaneously. | Remove the modern or legacy relation. Choose one to use at a time. |
 | **error** | any | An unhandled internal error happened | Read the message hint. Execute `juju resolve <unit>` after addressing the root of the error state |
 | **terminated** | any | The unit is gone and will be cleaned by Juju soon | No actions possible |
 | **unknown** | any | Juju doesn't know the charm app/unit status. Possible reason: K8s charm termination in progress. | Manual investigation required if status is permanent |
\ No newline at end of file
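The reactions table above lends itself to simple scripting. Below is a minimal sketch that flags units needing operator attention; the sample lines are invented for illustration (not real `juju status` output), and on a live model the input would come from `juju status` instead:

```shell
# Invented sample in the spirit of per-unit status lines (illustrative only);
# on a live model the input would come from `juju status` output.
units='postgresql/0 blocked
postgresql/1 active
postgresql/2 active'

# Print any unit whose status is not "active", per the reactions table above.
echo "$units" | awk '$2 != "active" { print $1 " -> " $2 }'
```

With the sample data above, this prints `postgresql/0 -> blocked`, signalling that the unit's status message should be checked against the table.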