Backup on Shared Server
On shared hosting such as webfaction.com, we use pg_dump rather than pg_dumpall, as we don't have complete access to the database and user creation is done via the control panel. The purpose of this doc is to outline the steps for setting up a pg_dump method for backing up our database.
It's an easy set-up and is very much the same as that described on WebFaction's page.
- Simple to set up.
- Can be used to migrate data between servers or versions of Postgres, e.g. between live and development servers for debugging.
- Your restore only goes back to the point when you took the backup. You can back up frequently, say every 30 minutes, but if you then get corruption or a system failure, it is impossible to restore updates made between the last backup and the point of failure.
- You have to pre-create users and roles before restoring the database.
The code below assumes you want to take a snapshot every hour; you can adjust that by changing the crontab entry. As this process may be used for situations other than eluminate, I am using shell variables so that the steps can be copied and pasted in other cases too.
ssh into your hosting account, such as opendcs, and set up the following variables:
PUSER=eluminate                # database user
PDB=eluminate                  # database name
PBACKUPDIR=~/PG_BACKUPS        # where the snapshots will be kept
PBIN=/usr/local/pgsql/bin      # location of the Postgres client binaries
PPASS=secret                   # clear-text password for PUSER
echo "*:*:*:${PUSER}:${PPASS}" >> ~/.pgpass   # lets pg_dump authenticate without prompting
chmod 600 ~/.pgpass                           # Postgres ignores .pgpass unless it is private
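Before wiring anything into cron, it is worth checking that the .pgpass entry actually lets the tools authenticate without prompting. A quick sanity check, assuming the variables above are still set in your shell:
# -w forbids a password prompt, so this fails loudly if .pgpass is not being picked up.
${PBIN}/psql -U ${PUSER} -d ${PDB} -w -c 'SELECT version();'
# Dry-run the dump itself, discarding the output.
${PBIN}/pg_dump -U ${PUSER} ${PDB} > /dev/null && echo "pg_dump OK"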
Make sure you put the correct clear-text password for your database user in PPASS. Now set up a cron job to take backups hourly.
mkdir -p ${PBACKUPDIR}
# % is special in crontab and must appear there as \%, hence the \\% escapes below.
echo "0 * * * * ${PBIN}/pg_dump -U ${PUSER} ${PDB} > ${PBACKUPDIR}/pg-backup-\$(date +\\%Y\\%m\\%d\\%H\\%M\\%S)"
crontab -e
and copy and paste the output from the echo command into your crontab file. If you want backups every 30 minutes, say, then change the 0 at the start of the line to 0,30.
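With the variable values above, the pasted line for half-hourly backups would look roughly like this (the /home/opendcs path is only a guess at what ~ expands to on your account):
0,30 * * * * /usr/local/pgsql/bin/pg_dump -U eluminate eluminate > /home/opendcs/PG_BACKUPS/pg-backup-$(date +\%Y\%m\%d\%H\%M\%S)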
If we want to safeguard against total system failure, then we need to consider copying these snapshots to another system, unless the hosting company offers something better than its nightly backups; one possible approach is sketched below.
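One simple option, assuming you have another machine reachable over SSH with passwordless key-based login set up (backuphost and the remote path here are placeholders), is a further cron entry that mirrors the snapshot directory with rsync:
echo "15 2 * * * rsync -az ${PBACKUPDIR}/ backuphost:pg-snapshots/"
crontab -e
and paste the echoed line as before. Note that snapshots pruned locally will still accumulate on the remote side, so it needs its own housekeeping.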
As a last step we need to do some housekeeping to prevent these frequent backup files from filling up the disk space. WebFaction's shared hosts get file-system backups daily, kept for 10 days, so as long as some of our dump files are caught by those backups we have 10 days' worth of recovery, which is more than enough. Given we have a backup file every hour, we keep, say, 26 such files and delete anything older. So let's add another crontab entry:
echo "20 * * * * date >> ${PBACKUPDIR}/OLD-BACKUPS-RM.log; ls -tr ${PBACKUPDIR}/pg-backup-* | head -n -26 | xargs --no-run-if-empty rm -v >> ${PBACKUPDIR}/OLD-BACKUPS-RM.log"
crontab -e
Like before, copy and paste the output from the echo command into your crontab file.
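Before trusting this to cron, you can run the retention pipeline by hand to see what it would delete; ls -tr lists oldest first and head -n -26 drops the 26 newest, so anything printed is a deletion candidate:
ls -tr ${PBACKUPDIR}/pg-backup-* | head -n -26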
To restore the database, assuming the same variable set-up has been done and Postgres is running:
${PBIN}/psql -U ${PUSER} -d ${PDB} -f ${PBACKUPDIR}/pg-backup-YYYYMMDDhhmmss
(Note the dump taken above is a plain SQL script, so it is replayed with psql; pg_restore only reads pg_dump's custom or tar formats, and its -f flag names an output file, not the input.)
where pg-backup-YYYYMMDDhhmmss is either the most recent backup, or one with an earlier timestamp if you are recovering from corruption that was already present in the latest snapshots.
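A typical recovery session might then look like this (the timestamp is illustrative; the schema drop is only needed, and only safe, when you want to replace a database that already contains the tables):
# List the available snapshots, newest first, and pick one.
ls -t ${PBACKUPDIR}/pg-backup-*
# Destructive: clear out the existing objects so the replay does not collide with them.
${PBIN}/psql -U ${PUSER} -d ${PDB} -c 'DROP SCHEMA public CASCADE; CREATE SCHEMA public;'
# Replay the chosen snapshot.
${PBIN}/psql -U ${PUSER} -d ${PDB} -f ${PBACKUPDIR}/pg-backup-20140301120000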