You've set up your self-hosted Supabase instance, configured database backups, and feel pretty good about your disaster recovery plan. But here's the uncomfortable question: what happens to your user uploads, profile pictures, and documents when your server fails?
If you're like most self-hosters, your storage backup strategy is... nonexistent. And you're not alone. In community discussions on GitHub, storage backup consistently ranks among the most overlooked aspects of self-hosted Supabase maintenance. Let's fix that.
Why Storage Backup Gets Forgotten
When developers think "backup," they think "database." It's an understandable reflex—your Postgres database holds your application logic, user data, and relationships. Tools like pg_dump are well-documented and straightforward.
But Supabase Storage is different. It's an S3-compatible object storage layer that sits alongside your database, storing files that your database only references. When you upload a file through the Supabase SDK, two things happen:
- The file binary goes to your storage backend (local filesystem or S3-compatible storage like MinIO)
- Metadata about that file goes to Postgres
Back up just your database, and you'll restore references to files that no longer exist. Your app will be riddled with broken images, missing documents, and confused users.
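You can see this mismatch concretely with a quick orphan check. The sketch below is a minimal demo: throwaway temp files stand in for a real storage volume and for a real export of `storage.objects` (which you could produce with psql); all names are illustrative.

```shell
# Demo of a DB-vs-disk consistency check. In a real setup you would export
# object names with something like:
#   psql -Atc "select name from storage.objects" > db_objects.txt
# and point STORAGE_DIR at your storage volume. Here both are mocked.
set -eu
STORAGE_DIR=$(mktemp -d)   # stand-in for ./volumes/storage
DB_LIST=$(mktemp)          # stand-in for the exported metadata list

touch "$STORAGE_DIR/avatar.png" "$STORAGE_DIR/report.pdf"
printf 'avatar.png\ninvoice.pdf\nreport.pdf\n' > "$DB_LIST"

# Names referenced in Postgres but missing on disk -- exactly what you get
# after restoring the database without its matching storage backup:
comm -23 <(sort "$DB_LIST") <(ls "$STORAGE_DIR" | sort)
# prints: invoice.pdf
```

Running the real version of this check after every restore is a cheap way to catch a database/storage mismatch before your users do.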
Understanding Your Storage Architecture
Before implementing a backup strategy, you need to understand how self-hosted Supabase Storage actually works. According to the Supabase Storage documentation, the storage service can use different backends:
Local Filesystem (Default Docker Setup)
By default, Supabase's Docker Compose setup mounts the host directory ./volumes/storage into the container at /var/lib/storage (the ./volumes/storage:/var/lib/storage volume mapping). This is the simplest setup but also the riskiest—everything lives on one machine.
S3-Compatible Storage (MinIO or External S3)
For production setups, most teams configure an S3-compatible backend. This requires setting environment variables:
STORAGE_BACKEND=s3
STORAGE_S3_BUCKET=your-bucket-name
STORAGE_S3_ENDPOINT=http://minio:9000
STORAGE_S3_REGION=us-east-1
STORAGE_S3_ACCESS_KEY_ID=your-access-key
STORAGE_S3_SECRET_ACCESS_KEY=your-secret-key
GLOBAL_S3_FORCE_PATH_STYLE=true  # Required for MinIO
One critical detail from community discussions: the TENANT_ID environment variable defaults to "stub," which becomes the root folder in S3. If you've customized this, your backup scripts need to account for it.
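A backup script can read that value instead of hard-coding it. A minimal sketch, assuming the standard .env file of the Docker setup (the ENV_FILE variable here is just a convenience for this example):

```shell
# Resolve the S3 prefix used by Supabase Storage: read TENANT_ID from the
# project's .env if present, otherwise fall back to the default "stub".
ENV_FILE="${ENV_FILE:-.env}"
TENANT_ID=$(grep -E '^TENANT_ID=' "$ENV_FILE" 2>/dev/null | tail -1 | cut -d= -f2)
TENANT_ID="${TENANT_ID:-stub}"
echo "Backing up S3 prefix: ${TENANT_ID}/"
```

With no .env present this prints `Backing up S3 prefix: stub/`, matching the default; a customized TENANT_ID flows through automatically.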
Backup Strategies for Each Architecture
Strategy 1: Local Filesystem Backups
If you're using the default local storage, your backup approach is straightforward but requires discipline:
#!/bin/bash
# Simple rsync backup for local Supabase storage
BACKUP_DIR="/backups/supabase-storage"
SOURCE_DIR="/path/to/supabase/volumes/storage"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
rsync -avz --delete "$SOURCE_DIR" "$BACKUP_DIR/$TIMESTAMP/"
# Keep only last 7 daily backups
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} \;
Pros: Simple, no additional infrastructure needed
Cons: Backups are on the same machine (violates 3-2-1 backup rule), rsync can be slow for large storage volumes
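To actually run a script like the one above on a schedule, a crontab entry is enough. The path and log location below are placeholders:

```
# Run the storage backup every night at 02:30 and keep a log
# for the monitoring checks discussed later (paths illustrative).
30 2 * * * /opt/scripts/supabase-storage-backup.sh >> /var/log/supabase-backup.log 2>&1
```

Redirecting both stdout and stderr to a log file matters here: cron otherwise discards output, which is how backup jobs fail silently.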
Strategy 2: MinIO with Replication
If you're running MinIO alongside Supabase, you can leverage MinIO's built-in replication features:
# Set up MinIO client
mc alias set local http://localhost:9000 your-access-key your-secret-key
mc alias set backup https://backup-s3.example.com backup-key backup-secret

# Mirror your Supabase bucket to backup location
mc mirror --watch --remove local/supabase-storage backup/supabase-storage-backup
The --watch flag enables continuous replication, catching changes as they happen. For scheduled snapshots instead:
# Daily snapshot to backup location
mc mirror --overwrite local/supabase-storage backup/supabase-storage-$(date +%Y%m%d)
Strategy 3: External S3 with Versioning
If you're using AWS S3 or another external S3-compatible service, you're in a better position—but there's a catch. As noted in the Supabase S3 compatibility docs, S3 versioning is not supported. Deleted objects are permanently removed.
This means you need external backup mechanisms:
# Using AWS CLI to sync to a backup bucket
aws s3 sync s3://supabase-production s3://supabase-backup-$(date +%Y%m%d) \
  --source-region us-east-1 \
  --region us-west-2
Consider enabling S3 Cross-Region Replication on AWS for automatic disaster recovery, even though Supabase Storage itself doesn't use versioning.
Coordinating Storage and Database Backups
Here's the tricky part: your storage backup needs to be coordinated with your database backup. If you restore a database from Monday but storage from Tuesday, you'll have orphaned files and missing references.
The safest approach is to treat them as a single backup unit:
#!/bin/bash
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_ROOT="/backups/supabase-$TIMESTAMP"
mkdir -p "$BACKUP_ROOT"

# 1. Pause writes (optional, for consistency)
docker compose pause storage

# 2. Database backup
docker exec supabase-db pg_dump -U postgres -d postgres > "$BACKUP_ROOT/database.sql"

# 3. Storage backup
mc mirror local/supabase-storage "$BACKUP_ROOT/storage/"

# 4. Resume
docker compose unpause storage

# 5. Compress and upload
tar -czf "$BACKUP_ROOT.tar.gz" "$BACKUP_ROOT"
mc cp "$BACKUP_ROOT.tar.gz" remote/backups/
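However the archive is produced, it's only useful if it's readable, so it's cheap insurance to verify it and store a checksum immediately after creation. A self-contained sketch using demo data in place of a real backup directory (all paths are placeholders):

```shell
# Verify a freshly created backup archive and record a checksum manifest.
# A throwaway temp directory stands in for a real $BACKUP_ROOT.
set -eu
BACKUP_ROOT=$(mktemp -d)/supabase-demo
mkdir -p "$BACKUP_ROOT"
echo 'select 1;' > "$BACKUP_ROOT/database.sql"
tar -czf "$BACKUP_ROOT.tar.gz" -C "$(dirname "$BACKUP_ROOT")" "$(basename "$BACKUP_ROOT")"

# 1. Can the archive be listed at all?
tar -tzf "$BACKUP_ROOT.tar.gz" > /dev/null && echo "archive readable"

# 2. Store a checksum next to it, so corruption is detectable before a restore.
sha256sum "$BACKUP_ROOT.tar.gz" > "$BACKUP_ROOT.tar.gz.sha256"
sha256sum -c "$BACKUP_ROOT.tar.gz.sha256"
```

Uploading the .sha256 manifest alongside the archive lets you re-verify the backup on the remote side as well.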
For production systems handling continuous traffic, consider using Postgres's point-in-time recovery (PITR) and matching storage snapshots to specific WAL positions. This is complex—which is why tools like Supascale handle this coordination automatically.
The Restoration Reality Check
Backing up is only half the equation. Have you actually tested restoring your storage?
Here's a restoration checklist:
- Restore the database first - This re-creates the storage.objects table with file metadata
- Restore storage files - Copy files back to the correct path or S3 bucket
- Verify tenant ID alignment - If your TENANT_ID changed, files won't be found
- Check permissions - Storage policies in Postgres must match restored bucket structure
- Test file access - Actually download files through your application
A common gotcha: if you restore to a fresh MinIO instance, you need to recreate the bucket with the same name and ensure the storage service can authenticate.
Monitoring for Backup Health
Backups that aren't monitored might as well not exist. Set up alerts for:
- Backup job failures - Cron jobs can fail silently
- Backup age - Alert if the latest backup is older than expected
- Backup size anomalies - A sudden drop might indicate missing files
- Storage growth rate - Know when you're approaching capacity limits
# Simple backup age check
LATEST_BACKUP=$(ls -t /backups/supabase-* | head -1)
BACKUP_AGE=$(($(date +%s) - $(stat -c %Y "$LATEST_BACKUP")))
if [ $BACKUP_AGE -gt 86400 ]; then
  echo "WARNING: Latest backup is over 24 hours old" | mail -s "Backup Alert" [email protected]
fi
When Self-Managing Storage Backups Gets Painful
Let's be honest about the trade-offs. Managing storage backups for self-hosted Supabase means:
- Writing and maintaining backup scripts
- Setting up monitoring and alerting
- Managing backup storage costs and retention
- Coordinating database and storage backup timing
- Regularly testing restoration procedures
- Handling edge cases (large files, special characters in filenames, permission issues)
For a single project, this is manageable. For multiple Supabase instances, it becomes a full-time job.
This is exactly why we built Supascale's automated backup system. Configure your S3-compatible storage once, and every backup captures both your database and storage files in a consistent snapshot. One-click restore brings back everything—no orphaned files, no missing references.
Best Practices Summary
- Never back up database without storage - They're two halves of the same system
- Use S3-compatible storage in production - Local filesystem is fine for development
- Implement the 3-2-1 rule - 3 copies, 2 media types, 1 offsite
- Test restores regularly - Monthly at minimum, quarterly is too infrequent
- Monitor backup health - Silent failures are the worst kind
- Document your process - Future-you will thank present-you
Conclusion
Storage backup is the forgotten piece of self-hosted Supabase disaster recovery, but it doesn't have to stay that way. Whether you're using local filesystem, MinIO, or external S3, the key is treating storage and database as inseparable backup targets.
For teams who'd rather focus on building features than managing backup infrastructure, Supascale offers automated, coordinated backups with one-click restore—because your users' files matter as much as their data.
