If you've used Supabase Cloud, you've probably noticed one of their most developer-friendly features: database branching. It creates a complete Postgres instance for every pull request, making it easy to test migrations and experiment without risking production data.
But here's the catch: branching is a managed-platform-only feature. If you're running self-hosted Supabase, you're on your own.
This guide shows you how to build your own branching-like workflow for self-hosted Supabase, from simple local development setups to full CI/CD integration with preview environments.
Why Database Branching Matters
Before diving into solutions, let's understand why branching matters for database-driven applications:
The problem: Database changes are risky. Unlike code, you can't easily "git checkout" to undo a bad migration. When your schema changes break production, rolling back often means data loss or extended downtime.
What branching solves:
- Test migrations before they hit production
- Give each developer an isolated database
- Create preview environments that match pull requests
- Run integration tests without polluting shared databases
On Supabase Cloud, this is handled automatically. For self-hosted deployments, you'll need to implement these patterns yourself.
Option 1: Local Development with Supabase CLI
The simplest approach uses the Supabase CLI to run a complete Supabase stack on each developer's machine.
Setting Up Local Development
# Install the CLI
npm install -g supabase

# Initialize in your project
supabase init

# Start local Supabase
supabase start
This spins up all Supabase services (Postgres, Auth, Storage, Realtime) in Docker containers. Each developer gets their own isolated database.
Creating and Applying Migrations
# Create a new migration
supabase migration new add_user_profiles

# Edit supabase/migrations/[timestamp]_add_user_profiles.sql
# Then apply it
supabase db reset
The key insight: migrations live in supabase/migrations/ and are version-controlled with your code. When you reset your local database, all migrations are applied in order.
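As a sketch, a migration file is just plain SQL. A hypothetical add_user_profiles migration might look like this (the timestamp, table, and column names are illustrative, not from the CLI):

```sql
-- supabase/migrations/20240101000000_add_user_profiles.sql (timestamp illustrative)
create table public.profiles (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null references auth.users (id) on delete cascade,
  display_name text,
  created_at timestamptz not null default now()
);

-- Row Level Security is opt-in per table, so enable it explicitly
alter table public.profiles enable row level security;
```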
Seeding Development Data
Create supabase/seed.sql for consistent test data:
-- Insert test users
INSERT INTO auth.users (id, email)
VALUES ('00000000-0000-0000-0000-000000000001', '[email protected]');
-- Insert application data
INSERT INTO public.profiles (user_id, display_name)
VALUES ('00000000-0000-0000-0000-000000000001', 'Test User');
Limitation: The Supabase CLI only manages one local database at a time. If you need multiple environments simultaneously (like running tests while developing), you'll need the more advanced approaches below.
Option 2: Docker-Based Environment Switching
For teams that need multiple parallel environments, you can run multiple Supabase instances using different Docker Compose configurations.
Creating Environment-Specific Configurations
Start with the official docker-compose.yml and create variants:
# docker-compose.staging.yml
version: '3.8'
services:
  db:
    ports:
      - "54322:5432" # Different port from dev
    environment:
      POSTGRES_DB: staging_db
  kong:
    ports:
      - "8001:8000" # Different API port
Launch environments by specifying the compose file:
# Development environment
docker compose -f docker-compose.yml up -d

# Staging environment (separate terminal/server)
docker compose -f docker-compose.staging.yml -p supabase-staging up -d
The -p flag creates a separate project namespace, keeping containers isolated.
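Under the hood, Compose v2 names containers as `<project>-<service>-<index>`, which is why the `-p` flag alone is enough to keep the stacks apart. A simplified sketch of that naming scheme (it ignores custom `container_name` overrides):

```shell
# Sketch of Compose v2's default container naming: <project>-<service>-<index>.
# Two projects can run identical service names without collisions.
container_name() {
  printf '%s-%s-%s\n' "$1" "$2" "${3:-1}"
}

container_name supabase-staging db   # → supabase-staging-db-1
```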
Managing Multiple Environments
Here's a practical script for switching contexts:
#!/bin/bash
# switch-env.sh
# Usage: source ./switch-env.sh [env]
# Source the script (rather than executing it) so the exports persist in your shell.
ENV=${1:-dev}

case $ENV in
  dev)
    export SUPABASE_URL="http://localhost:8000"
    export SUPABASE_ANON_KEY="your-dev-anon-key"
    ;;
  staging)
    export SUPABASE_URL="http://localhost:8001"
    export SUPABASE_ANON_KEY="your-staging-anon-key"
    ;;
  *)
    echo "Unknown environment: $ENV"
    return 1 2>/dev/null || exit 1  # return when sourced, exit when executed
    ;;
esac

echo "Switched to $ENV environment"
This approach works well for teams with dedicated staging servers. Check out our guide on Docker Compose best practices for production-ready configurations.
Option 3: CI/CD Pipeline Integration
For true "branching" behavior where each pull request gets its own database, you'll need CI/CD automation.
GitHub Actions Workflow
Here's a workflow that creates ephemeral Supabase environments for pull requests:
# .github/workflows/preview.yml
name: Preview Environment

on:
  pull_request:
    types: [opened, synchronize, reopened, closed]

jobs:
  deploy-preview:
    if: github.event.action != 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Supabase CLI
        uses: supabase/setup-cli@v1
        with:
          version: latest
      - name: Start Supabase
        run: |
          supabase start
          supabase db reset
      - name: Run Migrations
        run: supabase migration up
      - name: Run Tests
        run: npm test
        env:
          SUPABASE_URL: http://localhost:54321
          SUPABASE_ANON_KEY: ${{ secrets.LOCAL_ANON_KEY }}
      - name: Deploy Preview
        run: |
          # Your deployment logic here
          # Could push to a preview URL with PR-specific database

  cleanup-preview:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - name: Cleanup Preview Environment
        run: |
          # Tear down PR-specific resources
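One way to flesh out the deployment placeholder is to derive every preview resource name from the PR number via github.event.number. A hedged sketch of such a step (the domain and variable names are illustrative, not a required convention):

```yaml
      - name: Deploy Preview
        env:
          PREVIEW_DB: preview_pr_${{ github.event.number }}
          PREVIEW_URL: https://preview-pr-${{ github.event.number }}.yourdomain.com
        run: |
          echo "Deploying $PREVIEW_DB to $PREVIEW_URL"
          # ...provision the database and point the preview deploy at it
```

Deriving names from the PR number also makes cleanup deterministic: the closed-PR job can reconstruct exactly the same names and tear them down.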
Using Separate Database Instances
For persistent preview environments, you can dynamically provision databases:
#!/bin/bash
# create-preview-db.sh
PR_NUMBER=$1
DB_NAME="preview_pr_${PR_NUMBER}"
# Create database
psql -h your-postgres-host -U admin -c "CREATE DATABASE $DB_NAME"
# Apply migrations (credentials and host are placeholders)
supabase db push --db-url "postgresql://admin:pass@your-postgres-host/$DB_NAME"
# Output connection info for the PR
echo "Preview database ready: $DB_NAME"
This pattern works especially well if you're using Supascale to manage multiple self-hosted projects. Each preview environment can be a separate project with its own backup configuration.
Option 4: Schema-Based Isolation
If spinning up separate database instances feels heavy, consider PostgreSQL schemas for lighter isolation:
-- Create a schema for each feature branch
CREATE SCHEMA feature_user_auth;

-- Apply migrations within the schema
SET search_path TO feature_user_auth;

-- All subsequent operations happen in this schema
CREATE TABLE users (...);
Implementing Schema Branching
#!/bin/bash
# branch-schema.sh
BRANCH_NAME=$(git branch --show-current | tr '/' '_')
SCHEMA_NAME="branch_${BRANCH_NAME}"
# Create schema
psql $DATABASE_URL -c "CREATE SCHEMA IF NOT EXISTS $SCHEMA_NAME"
# Copy structure from public schema
# (naive rewrite: the sed also touches string literals that happen to contain "public.")
pg_dump $DATABASE_URL --schema=public --schema-only | \
  sed "s/public\./$SCHEMA_NAME./g" | \
  psql $DATABASE_URL
echo "Created schema: $SCHEMA_NAME"
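One caveat with deriving schema names from branch names: Postgres identifiers are capped at 63 bytes, and branch names can contain characters beyond '/'. A defensive sketch of the name derivation (the sanitization rules here are an assumption, not part of any CLI):

```shell
# Map an arbitrary git branch name to a safe Postgres schema name:
# lowercase, non-alphanumerics collapsed to '_', truncated so the
# "branch_" prefix plus the name stays within the 63-byte identifier limit.
schema_for_branch() {
  local branch="$1"
  local name
  name=$(printf '%s' "$branch" | tr 'A-Z' 'a-z' | tr -c 'a-z0-9' '_')
  printf 'branch_%.56s\n' "$name"
}

schema_for_branch 'feature/User-Auth'   # → branch_feature_user_auth
```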
Trade-offs:
- Pros: Lightweight, fast to create, shares compute resources
- Cons: Doesn't isolate Auth/Storage, requires careful search_path management
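For the search_path caveat, one option is to pin a per-branch role's default search_path so that application connections land in the right schema automatically (an illustration only; the role and schema names are hypothetical):

```sql
-- Hypothetical: connections made as this role default to the branch schema,
-- falling back to public for anything not overridden there
ALTER ROLE branch_tester SET search_path = branch_feature_user_auth, public;
```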
Best Practices for Self-Hosted Branching
Whatever approach you choose, follow these principles:
1. Migrations Are Your Source of Truth
Never make manual schema changes on any environment. All changes should flow through migration files:
# Generate migration from schema diff
supabase db diff --use-migra -f new_feature

# Review before committing
cat supabase/migrations/*_new_feature.sql
2. Seed Data Separately from Migrations
Keep seed data idempotent and environment-aware:
-- seed.sql
-- Only insert if not exists
INSERT INTO public.config (key, value)
VALUES ('app_version', '1.0.0')
ON CONFLICT (key) DO NOTHING;
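For the environment-aware half, one hedged pattern is to gate dev-only rows on a custom setting (app.environment is a made-up GUC you would have to configure per instance, not a Supabase built-in):

```sql
-- Hypothetical: only seed test rows when the instance declares itself non-production.
-- current_setting(..., true) returns NULL instead of erroring when the GUC is unset.
INSERT INTO public.profiles (user_id, display_name)
SELECT '00000000-0000-0000-0000-000000000001', 'Test User'
WHERE current_setting('app.environment', true) = 'development'
ON CONFLICT DO NOTHING;
```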
3. Automate Environment Variables
Use tools like dotenvx to manage environment-specific secrets:
# .env.development
SUPABASE_URL=http://localhost:54321
SUPABASE_ANON_KEY=local-dev-key

# .env.staging
SUPABASE_URL=https://staging.yourdomain.com
SUPABASE_ANON_KEY=staging-key
4. Test Migrations Before Production
Create a pre-production validation step:
# In CI/CD before deploying to production
supabase db reset --db-url $STAGING_DB_URL
supabase migration up --db-url $STAGING_DB_URL
npm run test:integration
When Supascale Simplifies This
Managing multiple self-hosted Supabase instances for different environments can become complex. This is where Supascale helps:
- Multi-project management: Create separate projects for dev, staging, and production without manual Docker juggling
- Automated backups: Each environment gets S3-compatible backup storage, so you can restore any environment to any point in time
- One-click project creation: Spin up new Supabase instances for feature branches through the dashboard or REST API
- Environment isolation: Each project runs independently with its own configuration, custom domains, and OAuth providers
For teams running serious self-hosted deployments, having proper tooling for multi-environment management pays for itself quickly.
Conclusion
Database branching for self-hosted Supabase requires more setup than the managed platform, but it's absolutely achievable. Start simple with local CLI development, then scale up to Docker-based environments or full CI/CD integration as your team grows.
The key is treating database changes like code changes: version-controlled, reviewed, and tested before production. Whether you use separate containers, PostgreSQL schemas, or dynamic database provisioning, the principles remain the same.
Ready to simplify your self-hosted Supabase management? Check out Supascale's pricing - a one-time purchase gives you unlimited project management, automated backups, and the tooling to run professional development workflows.
Further Reading
- How to Deploy Supabase on Your Own Server - Getting started with self-hosting
- Docker Compose for Supabase Production - Production-ready Docker configurations
- Supabase Self-Hosted Backup and Restore Guide - Protecting your development and production databases
- Supabase Local Development Documentation - Official CLI documentation
