A Practical Guide to PostgreSQL Backups

You don’t have a backup until you've tested a restore. Most developers learn this the hard way. The feeling of a production database going sideways is a unique kind of terror. But preventing it is simpler than you think. You don’t need complex tools or expensive services to get started.

You need a reliable process for backing up your data and, more importantly, for restoring it.

The Right Tool for the Job

For most applications built on PostgreSQL, the right tool is the one that’s already there: pg_dump. It’s a command-line utility that creates a logical backup of your database. A logical backup means it extracts the data into a file of SQL commands or a custom archive format; it’s not a physical copy of the database files on disk.

This is a good thing. Logical backups are more flexible: you can restore them on a different machine architecture or into a newer PostgreSQL version. For the vast majority of startups and projects, this is exactly what you want.

Creating Your First Backup

Let’s make a backup. The pg_dump command has many options, but you only need a few to create a robust backup file. The most important choice is the format. The custom format (-F c) is the best default because it’s compressed and lets pg_restore be selective about what it restores later.

Here is the command you’ll use.

# -U specifies the database user
# -h specifies the host we are connecting to
# -d specifies the database name
# -F c specifies the custom compressed format
# -f specifies the output file name

pg_dump -U your_user -h localhost -d your_database -F c -f your_database.dump

Run this and you will have a single file, your_database.dump. It contains everything needed to recreate the database’s schema and data. Keep it somewhere safe; ideally, copy it to a separate machine or a cloud storage service like S3. A backup that lives only on the server it’s protecting is not a real backup.
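
If you want an off-server copy, a single command can push the dump to S3. This is a minimal sketch; it assumes the AWS CLI is installed and configured, and your-backup-bucket is a placeholder for your own bucket name.

# Copy the dump to S3 (the bucket name is a placeholder)
aws s3 cp your_database.dump s3://your-backup-bucket/postgresql/your_database.dump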

Restoring From Your Backup

A backup is useless if you can’t restore it. The tool for this is pg_restore. It’s designed to read the archive files created by pg_dump.
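
If you want to see what an archive contains before restoring it, pg_restore can print the archive’s table of contents without touching any database.

# List the contents of the archive without restoring anything
pg_restore -l your_database.dump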

Before you restore, you’ll need a database to restore into. It can be the original one or a new, empty one. pg_restore is smart enough to recreate the tables and other objects.
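
If you want a fresh, empty database to restore into, createdb (which ships with PostgreSQL) will create one. The name your_new_database is just a placeholder.

# Create an empty database to restore into
createdb -U your_user -h localhost your_new_database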

Here is the command to restore your data.

# -U specifies the database user
# -h specifies the host we are connecting to
# -d specifies the target database to restore into
# --clean tells pg_restore to drop database objects before recreating them
# --if-exists is used with --clean to avoid errors if objects don’t exist

pg_restore -U your_user -h localhost -d your_new_database --clean --if-exists your_database.dump

The --clean and --if-exists flags are helpful. They ensure the restore process starts with a clean slate inside the target database, which prevents errors if you are restoring over an existing database. You should practice this restore process. Do it once a month. Make sure it works. Time yourself. You want to be calm and practiced when a real emergency happens.
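
Here is a rough sketch of what that monthly drill can look like. The database name restore_test and the table your_important_table are placeholders; swap in names from your own schema.

# Create a throwaway database, time the restore, sanity-check the data, then clean up
createdb -U your_user -h localhost restore_test
time pg_restore -U your_user -h localhost -d restore_test --clean --if-exists your_database.dump
psql -U your_user -h localhost -d restore_test -c "SELECT count(*) FROM your_important_table;"
dropdb -U your_user -h localhost restore_test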

Automating Your Backups

Doing this manually is a good start, but you will forget. Backups need to be automated. The simplest way to do this on a Linux server is with a shell script and a cron job.

First create a directory to store your backups.

mkdir -p /var/backups/postgresql

Now create a simple shell script. Let’s call it backup_script.sh.

#!/bin/bash

# Exit immediately if a command exits with a non-zero status.
set -e

# Database credentials
DB_USER="your_user"
DB_NAME="your_database"

# Backup directory
BACKUP_DIR="/var/backups/postgresql"

# Date format for the backup file
DATE=$(date +"%Y-%m-%d_%H-%M-%S")

# The backup file name
BACKUP_FILE="$BACKUP_DIR/$DB_NAME-$DATE.dump"

# Create the backup using pg_dump
# Use PGPASSWORD or a .pgpass file to handle the password securely
# For simplicity here we assume you have a .pgpass file set up
pg_dump -U "$DB_USER" -h localhost -d "$DB_NAME" -F c -f "$BACKUP_FILE"

echo "Backup created successfully: $BACKUP_FILE"

# Optional: clean up backups older than 7 days
find "$BACKUP_DIR" -type f -name "*.dump" -mtime +7 -exec rm {} \;

echo "Old backups cleaned up."

This script does two things. It creates a new backup with a timestamp in the filename. Then it uses the find command to delete any backup files in that directory older than seven days. This prevents your server’s disk from filling up.

Make sure the script is executable.

chmod +x backup_script.sh
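
Then run it once by hand before handing it over to cron, and confirm a fresh dump file appears in the backup directory.

./backup_script.sh
ls -lh /var/backups/postgresql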

A note on passwords. You should not put passwords directly in scripts. The standard way to handle this with PostgreSQL tools is a .pgpass file in the user’s home directory. It’s a simple text file with permissions set to 600 so only you can read it.
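
Here is what that file looks like. Each line describes one connection, with fields separated by colons; these values are the same placeholders used in the examples above.

# Format of ~/.pgpass is hostname:port:database:username:password
localhost:5432:your_database:your_user:your_password

Then restrict its permissions so only your user can read it.

chmod 600 ~/.pgpass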

Scheduling with Cron

The final step is to run this script automatically. The classic tool for this is cron. Open your crontab for editing.

crontab -e

Then add a line to schedule your script. This example runs the script every day at 2 AM.

# Minute (0-59) Hour (0-23) Day of month (1-31) Month (1-12) Day of week (0-6)
0 2 * * * /path/to/your/backup_script.sh > /var/log/pg_backup.log 2>&1

This line tells cron to execute your script at minute 0 of hour 2 every day. We also redirect the output of the script to a log file. This is important. You need to check this log file periodically to ensure the backups are still running correctly.
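
One way to make that check harder to forget is a second cron entry that complains when no recent dump exists. This is a sketch, assuming the backup directory from the script above; cron will mail the warning to the local user if mail delivery is configured.

# Warn at 9 AM if no backup file was created in the last 24 hours (1440 minutes)
0 9 * * * find /var/backups/postgresql -type f -name "*.dump" -mmin -1440 | grep -q . || echo "WARNING: no recent PostgreSQL backup found"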

When This Isn’t Enough

This pg_dump and cron strategy is incredibly robust. It will serve you well for a long time.

When might you need more? If your database is hundreds of gigabytes in size, the pg_dump process might take too long or put enough load on the server to affect your application’s performance. At that scale you should start researching physical backups with pg_basebackup and Point-in-Time Recovery (PITR), which lets you restore to any single moment in time. But for 95% of applications it is overkill. Start with the simple thing that works.
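
For reference, taking a physical base backup looks something like the sketch below. The role name replication_user is a placeholder, and a base backup only becomes Point-in-Time Recovery once WAL archiving (archive_mode and archive_command in postgresql.conf) is configured, which is beyond the scope of this guide.

# Take a compressed physical base backup of the whole cluster (sketch only)
pg_basebackup -U replication_user -h localhost -D /var/backups/base -F tar -z -P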

What you have now is a simple, durable, and automated backup system. You’ve eliminated a huge source of risk.

— Rishi Banerjee
September 2025