Point-in-Time Recovery for Self-Hosted Supabase: A Complete Guide

Learn how to implement PITR for self-hosted Supabase using WAL-G and pgBackRest. Achieve second-level recovery granularity.


If you've ever lost hours of production data because your daily backup was 23 hours old, you understand why Point-in-Time Recovery (PITR) matters. On Supabase Cloud, PITR is a checkbox and a $100/month add-on. For self-hosted Supabase, you're on your own—but implementing continuous backup is absolutely achievable with the right approach.

This guide walks you through setting up PITR for your self-hosted Supabase instance using WAL archiving, giving you the ability to restore your database to any point in time with second-level granularity.

Why Daily Backups Aren't Enough

Most backup strategies for self-hosted Supabase rely on periodic pg_dump snapshots. This approach has a fundamental flaw: your Recovery Point Objective (RPO) equals the time since your last backup.

Consider this scenario:

  • Daily backup at 2:00 AM
  • Production incident at 11:00 PM
  • Data loss: up to 21 hours of transactions

For hobby projects, that might be acceptable. For production applications handling user data, payments, or compliance-sensitive information, it's not. PITR changes the equation by capturing every transaction as it happens.

Understanding WAL-Based Recovery

PostgreSQL's Write-Ahead Log (WAL) records every change made to your database before it's written to the actual data files. This mechanism exists for crash recovery, but it also enables continuous backup.

Here's how it works:

  1. Base backup: A full physical copy of your database at a specific point
  2. WAL archiving: Continuous capture of all transaction logs
  3. Recovery: Replay WAL files on top of a base backup to reach any desired timestamp

The result: you can restore to any second within your WAL retention window, not just your last snapshot.
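The replay idea can be sketched with a toy script: an ordinary file stands in for the database, a copy of it for the base backup, and a timestamped append-only log is replayed up to a cutoff. This is purely illustrative, not PostgreSQL itself:

```shell
# Toy model of base-backup + log replay; all paths are throwaway temp files.
workdir=$(mktemp -d)
echo "row1" > "$workdir/live.txt"
cp "$workdir/live.txt" "$workdir/base.txt"        # 1. base backup: full copy
{ echo "10:00 row2"; echo "11:00 row3"; echo "12:00 bad-write"; } > "$workdir/wal.log"   # 2. every change logged with a timestamp
# 3. recovery: start from the base, replay only entries before the incident
cp "$workdir/base.txt" "$workdir/restored.txt"
awk '$1 < "12:00" {print $2}' "$workdir/wal.log" >> "$workdir/restored.txt"
cat "$workdir/restored.txt"    # row1, row2, row3 -- the bad write never replays
```

PostgreSQL does the same thing with 16 MB WAL segments instead of log lines, and `recovery_target_time` as the cutoff.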

Choosing Your Backup Tool

Two tools dominate PostgreSQL continuous backup: WAL-G and pgBackRest. Both are production-proven, but they serve different use cases.

WAL-G

WAL-G is Supabase's choice for their managed platform. It's designed for cloud-native environments with direct S3/GCS/Azure integration.

Best for:

  • Cloud-first deployments
  • Teams already using object storage for backups
  • Simpler configuration requirements
  • Supabase-aligned tooling

pgBackRest

pgBackRest is the enterprise standard with more configuration options and features like parallel backup/restore and built-in backup verification.

Best for:

  • Large databases (multi-terabyte)
  • Complex backup retention policies
  • Organizations with strict compliance requirements
  • Multi-repository backup strategies

For most self-hosted Supabase deployments, WAL-G is the pragmatic choice—it's what Supabase uses internally, and the configuration is straightforward.

Setting Up WAL-G for Self-Hosted Supabase

Let's implement continuous backup with WAL-G and S3-compatible storage. This approach works with AWS S3, MinIO, Cloudflare R2, or any S3-compatible provider.

Prerequisites

  • Running self-hosted Supabase instance
  • S3-compatible storage bucket
  • Access credentials for your storage provider

If you're still setting up your storage backend, check our guide on S3 storage for self-hosted Supabase.

Step 1: Configure PostgreSQL for WAL Archiving

First, modify your PostgreSQL configuration. In your Supabase deployment, locate the postgres container configuration and add these settings:

# postgresql.conf additions
wal_level = replica
archive_mode = on
archive_timeout = 60

The archive_timeout setting forces a WAL segment switch after at most 60 seconds of write activity (a fully idle period produces no switch, but then there is nothing to lose), so you never lose more than about a minute of committed data even during low-traffic periods.
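Once archiving is enabled, confirm that segments are actually shipping. PostgreSQL's built-in pg_stat_archiver view tracks every success and failure of archive_command; run this via psql inside the db container:

```sql
-- A nonzero failed_count means archive_command is erroring; check the logs
SELECT archived_count, last_archived_wal, last_archived_time, failed_count
FROM pg_stat_archiver;
```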

Step 2: Install WAL-G

Add WAL-G to your postgres container. If you're using Docker Compose, you can extend the image:

FROM supabase/postgres:15.1.0.117

# Install WAL-G
RUN apt-get update && apt-get install -y wget && \
    wget https://github.com/wal-g/wal-g/releases/download/v3.0.0/wal-g-pg-ubuntu-20.04-amd64.tar.gz && \
    tar -xzf wal-g-pg-ubuntu-20.04-amd64.tar.gz && \
    mv wal-g-pg-ubuntu-20.04-amd64 /usr/local/bin/wal-g && \
    chmod +x /usr/local/bin/wal-g && \
    rm wal-g-pg-ubuntu-20.04-amd64.tar.gz

Step 3: Configure WAL-G Environment

Create a configuration file for WAL-G with your S3 credentials:

# /etc/wal-g/env.conf
WALG_S3_PREFIX=s3://your-bucket/wal-g/
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_REGION=us-east-1
# For S3-compatible storage (MinIO, R2, etc.)
AWS_ENDPOINT=https://your-endpoint.com
AWS_S3_FORCE_PATH_STYLE=true
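One gotcha: plain KEY=value lines are not exported to child processes, and wal-g reads all of its settings from the environment. Sourcing the file under `set -a` auto-exports every assignment. Demonstrated here with a throwaway file standing in for /etc/wal-g/env.conf:

```shell
# Demo of the set -a export pattern; the /tmp file is a stand-in for /etc/wal-g/env.conf
printf 'WALG_S3_PREFIX=s3://demo-bucket/wal-g/\n' > /tmp/walg-env-demo.conf
set -a
. /tmp/walg-env-demo.conf
set +a
# A child process (as wal-g would be) now inherits the variable:
sh -c 'echo "child sees: $WALG_S3_PREFIX"'
```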

Step 4: Configure Archive Commands

Update your PostgreSQL configuration with the archive commands. PostgreSQL executes these through /bin/sh, which on Debian-based images is dash and has no `source` builtin, so use `.` together with `set -a` to ensure the sourced variables are exported to the wal-g process:

# postgresql.conf
archive_command = 'set -a; . /etc/wal-g/env.conf; wal-g wal-push %p'
restore_command = 'set -a; . /etc/wal-g/env.conf; wal-g wal-fetch %f %p'

Step 5: Create Your First Base Backup

With WAL archiving configured, create an initial base backup. Run this as the postgres user, since wal-g needs to read the data directory and connect to the running instance:

set -a; . /etc/wal-g/env.conf; set +a
wal-g backup-push /var/lib/postgresql/data

This creates a full physical backup that serves as the foundation for point-in-time recovery.

Step 6: Schedule Regular Base Backups

Base backups should run regularly to limit WAL replay time during recovery. Add a cron job:

# Run base backup daily at 3 AM (cron uses /bin/sh, hence `.` rather than `source`)
0 3 * * * set -a; . /etc/wal-g/env.conf; wal-g backup-push /var/lib/postgresql/data

Performing Point-in-Time Recovery

When disaster strikes, here's how to restore to a specific timestamp.

Step 1: Stop the Database

docker-compose stop db

Step 2: Restore the Base Backup

# Clear existing data (copy it elsewhere first if you need it for forensics)
rm -rf /var/lib/postgresql/data/*

# Restore the most recent base backup taken before your target time.
# LATEST is fine if your newest backup predates the target; otherwise
# pick a specific backup name from `wal-g backup-list`.
set -a; . /etc/wal-g/env.conf; set +a
wal-g backup-fetch /var/lib/postgresql/data LATEST

Step 3: Prepare Recovery Configuration

With the base backup in place, tell PostgreSQL to enter targeted recovery. Create the signal file after restoring, since clearing the data directory in the previous step would have deleted it:

# recovery.signal tells PostgreSQL to enter recovery mode
touch /var/lib/postgresql/data/recovery.signal

# Add to postgresql.conf
restore_command = 'set -a; . /etc/wal-g/env.conf; wal-g wal-fetch %f %p'
recovery_target_time = '2026-04-01 14:30:00 UTC'
recovery_target_action = 'promote'
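If you script the restore, recovery_target_time must be a full timestamp PostgreSQL can parse. GNU date can generate one (here, five minutes before now, in UTC):

```shell
# Emit a recovery_target_time value like "2026-04-01 14:30:00 UTC" (GNU date assumed)
date -u -d '5 minutes ago' '+%Y-%m-%d %H:%M:%S UTC'
```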

Step 4: Start Recovery

docker-compose start db

PostgreSQL will:

  1. Load the base backup
  2. Replay WAL files up to your specified recovery_target_time
  3. Promote to primary and accept connections

Check the logs to confirm recovery reached your target time and the database is accepting connections:

docker-compose logs db | grep -E "recovery|ready to accept"

Managing WAL Retention and Costs

Continuous backup means continuous storage costs. WAL-G supports retention policies to balance protection and expense.

Set Retention Policy

# Keep the 7 most recent full base backups; WAL segments older than
# the oldest retained backup are removed along with them
wal-g delete retain FULL 7 --confirm

Monitor Storage Usage

Track your backup size to avoid surprises:

wal-g backup-list
wal-g st ls

For a database with moderate write activity, expect roughly 1-5 GB of WAL per day. High-transaction systems can generate significantly more.
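Those numbers translate into a rough storage budget. A back-of-envelope estimate, using assumed figures (3 GB of WAL per day, 14 days of WAL retention, seven 20 GB base backups; substitute your own measurements):

```shell
# Rough backup storage estimate; every figure here is an illustrative assumption
wal_gb_per_day=3
wal_retention_days=14
base_backup_count=7
base_backup_gb=20
total=$(( wal_gb_per_day * wal_retention_days + base_backup_count * base_backup_gb ))
echo "~${total} GB of object storage"    # ~182 GB with these inputs
```

Compare the result against your provider's per-GB pricing to decide whether a shorter WAL window is worth the tighter recovery horizon.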

Testing Your Recovery Process

A backup you've never tested isn't a backup—it's a hope. Schedule regular recovery drills:

  1. Monthly: Restore to a test environment and verify data integrity
  2. Quarterly: Full disaster recovery simulation with documented RTO
  3. After major changes: Test recovery after schema migrations or large data loads

For detailed testing procedures, see our guide on testing backup and restore procedures.

The Trade-offs of DIY PITR

Implementing PITR yourself brings real benefits and real costs:

Benefits:

  • No $100/month add-on fee
  • Full control over retention policies
  • Works with any S3-compatible storage
  • No vendor lock-in on your backup infrastructure

Costs:

  • Initial setup complexity
  • Ongoing maintenance responsibility
  • Monitoring and alerting requirements
  • Recovery time depends on your expertise

This is the fundamental trade-off of self-hosting Supabase: you trade operational convenience for cost savings and control.

How Supascale Simplifies Backup Management

If configuring WAL-G and managing retention policies sounds like more operational burden than you want, Supascale handles automated backups with a simpler approach. While it currently uses scheduled snapshots rather than continuous WAL archiving, it provides:

  • One-click S3 backup configuration
  • Automated backup scheduling
  • One-click restore to any saved backup
  • Visual backup management through a clean UI

For many teams, the combination of hourly scheduled backups via Supascale's backup features plus the knowledge to implement PITR manually for critical databases offers the best of both worlds.

Conclusion

Point-in-Time Recovery transforms your self-hosted Supabase backup strategy from "hope we don't lose too much" to "we can recover to any second." The implementation requires upfront effort—configuring WAL-G, setting up archive commands, scheduling base backups—but the result is enterprise-grade disaster recovery on self-hosted infrastructure.

For production deployments where data loss is measured in dollars or compliance violations, PITR isn't optional. For smaller projects, scheduled snapshots via tools like Supascale provide a reasonable balance of protection and simplicity.

The choice depends on your RPO requirements: if losing an hour of data is acceptable, scheduled backups work. If you need second-level granularity, implement WAL-based continuous backup.

Ready to improve your self-hosted Supabase backup strategy? Start with Supascale's automated backup features for scheduled snapshots, then layer in PITR when your requirements demand it.


Further Reading