==============================
This repository evolved from our-postgresql-setup. Thanks to @CrisSinjo / GoCardless.
It helps you quickly spin up a 3-node PostgreSQL cluster, managed by Pacemaker and proxied by PgBouncer/PgPool.
When you start the cluster, you get 3 nodes, each running:
- PostgreSQL
- Pacemaker
- PgBouncer
- PgPool
- PgAgent
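
A quick way to sanity-check that these components are running on a node is to look for their processes. This is only a sketch: the process names below are assumptions and may vary with version and packaging.

```
# List the main daemons; pgrep -l prints "pid name" for each match.
for proc in postgres pacemakerd pgbouncer pgpool pgagent; do
  pgrep -l "$proc" || echo "$proc: not running"
done
```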
The code has been tested on CentOS 7 (PostgreSQL 10) and Ubuntu 16.04 (PostgreSQL 9.4).
The cluster is configured with a single primary, one synchronous replica, and one asynchronous replica.
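
Once the cluster is up, you can see this topology from the primary via `pg_stat_replication` (present in both PostgreSQL 9.4 and 10); one replica should report `sync` and the other `async`:

```
# Run against the current primary. The postgres user and the
# PostgresqlVIP address are assumptions; adjust for your configuration.
psql -h 172.28.33.10 -U postgres -c \
  "SELECT application_name, state, sync_state FROM pg_stat_replication;"
```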
You will need:

- VirtualBox
- Vagrant

Clone the repository:

```
git clone https://github.com/gocardless/our-postgresql-setup.git
```

- [Recommended] Use tmux, starting a session with:

```
./tmux-session.sh start
```

- In 3 separate windows, bring up and SSH into each node:

```
vagrant up pg01 && vagrant ssh pg01
vagrant up pg02 && vagrant ssh pg02
vagrant up pg03 && vagrant ssh pg03
```
You can run `crm_mon -Afr` on any node to see the current state of the cluster and all resources in it. Press `Ctrl-C` to quit.
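
If you just want a snapshot rather than the interactive view, `crm_mon` can also print once and exit:

```
# -1 (one-shot) prints the current state once instead of refreshing.
sudo crm_mon -Afr -1
```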
Once the cluster is up, you have two options:
- Connect directly to Postgres on the PostgresqlVIP at 172.28.33.10
- Connect via PgBouncer at 172.28.33.9
Note: The migrator.py script will only give you zero-downtime migrations if you connect via PgBouncer.
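
For example, to open a client session against each (a sketch assuming the stock `postgres` user; 6432 is PgBouncer's conventional listen port, so check this repo's PgBouncer config if the connection is refused):

```
# Directly to the current primary via the PostgresqlVIP:
psql -h 172.28.33.10 -p 5432 -U postgres

# Via PgBouncer on its VIP:
psql -h 172.28.33.9 -p 6432 -U postgres
```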
To run a zero-downtime migration:

- Ensure clients are connected to the PgBouncerVIP.
- Run

```
/vagrant/migrator.py
```

on the node that has the PgBouncerVIP (you can find out where the PgBouncerVIP is by viewing the cluster status).
- Follow the prompts.
- It is safe to ignore the "Make sure you have the following command ready..." prompt. It is aimed at cases where you'd want to quickly re-enable traffic, and doesn't matter when running locally.
- Assuming everything went well, the primary will have migrated to the synchronous replica, and the clients won't have received any connection resets.
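
To confirm the switchover, you can check which node is now the primary (again assuming the stock `postgres` user): the PostgresqlVIP should point at the new primary, where `pg_is_in_recovery()` returns false:

```
# Returns "f" on the new primary (reached via the VIP), "t" on replicas.
psql -h 172.28.33.10 -U postgres -c "SELECT pg_is_in_recovery();"
```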