Setting up CI/CD for Supabase Cloud is straightforward—Supabase provides built-in GitHub integration that handles migrations automatically. But if you're running self-hosted Supabase, you're responsible for building your own deployment automation.
The challenge isn't just technical. The official Supabase CLI was primarily designed for their managed platform, and many features assume you have a Supabase Cloud project to link to. Self-hosted users have been vocal about this limitation in GitHub discussions, with workarounds ranging from creative to fragile.
This guide shows you how to build reliable CI/CD pipelines for self-hosted Supabase deployments—including migrations, database testing, and multi-environment workflows.
The Self-Hosted CI/CD Challenge
Before diving into solutions, let's understand what makes CI/CD different for self-hosted Supabase compared to the managed platform.
What Works Differently
On Supabase Cloud:
- supabase link connects your local project to a cloud instance
- Migrations run automatically via GitHub integration
- Access tokens authenticate everything seamlessly
On self-hosted:
- No supabase link support—the CLI can't connect to your instance
- You need direct database access for migrations
- Environment variables and secrets require manual management
- Edge Functions deployment needs separate handling
As one developer noted, "The Supabase linking cannot accept a self-hosted instance. So if you don't want to expose your database port and URL to the world and want a nice CI/CD workflow, you have to create a small image that will use the CLI to push migrations up to the remote Supabase instance within the same stack."
What You'll Build
By the end of this guide, you'll have:
- Automated migration deployment on every push to main
- Database testing with pgTAP in your CI pipeline
- Preview environments for pull requests
- Edge Functions deployment automation
Prerequisites
Before setting up your pipeline, ensure you have:
- A running self-hosted Supabase instance (deployment guide)
- Git repository with your project code
- Database connection string for your Supabase Postgres instance
- GitHub repository (we'll use GitHub Actions, but concepts apply to GitLab CI/CircleCI)
You'll also need your environment variables configured correctly on your server.
Setting Up GitHub Actions for Migrations
The most common CI/CD need is automating database migrations. Here's a workflow that runs migrations on every push to main:
# .github/workflows/deploy-migrations.yml
name: Deploy Migrations
on:
push:
branches: [main]
paths:
- 'supabase/migrations/**'
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Supabase CLI
uses: supabase/setup-cli@v1
with:
version: latest
- name: Run Migrations
env:
DATABASE_URL: ${{ secrets.PRODUCTION_DATABASE_URL }}
run: |
# Apply migrations directly to database
for migration in supabase/migrations/*.sql; do
echo "Applying $migration..."
psql "$DATABASE_URL" -f "$migration"
done
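One detail worth knowing: the `for migration in supabase/migrations/*.sql` loop relies on shell globs expanding in lexicographic order, which is exactly why the CLI's timestamp-prefixed migration filenames apply oldest-first. A quick demonstration (the filenames here are made up):

```shell
# Shell globs expand in sorted (lexicographic) order, so
# timestamp-prefixed migration files always apply oldest-first.
mkdir -p /tmp/glob_demo
touch /tmp/glob_demo/20240215090000_add_profiles.sql
touch /tmp/glob_demo/20240101120000_init.sql

for f in /tmp/glob_demo/*.sql; do
  basename "$f"
done
# prints 20240101120000_init.sql, then 20240215090000_add_profiles.sql
```

If your migration files aren't timestamp-prefixed, rename them before relying on this ordering.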
Why Not Use supabase db push?
You might wonder why we're using raw psql instead of the CLI's built-in migration commands. In its default form, supabase db push expects a linked Supabase project, and linking doesn't work with self-hosted instances.
For self-hosted deployments, you have two options:
Option 1: Direct psql (shown above). Simple and reliable, but loses some of the CLI's migration tracking features.
Option 2: Use --db-url flag
supabase db push --db-url "$DATABASE_URL"
The second option works with newer CLI versions and provides better migration state tracking: the CLI records applied migrations in its supabase_migrations.schema_migrations table.
Tracking Applied Migrations
To avoid re-running migrations, implement a tracking mechanism:
- name: Run Migrations with Tracking
env:
DATABASE_URL: ${{ secrets.PRODUCTION_DATABASE_URL }}
run: |
# Create tracking table if not exists
psql "$DATABASE_URL" -c "
CREATE TABLE IF NOT EXISTS _migrations (
id SERIAL PRIMARY KEY,
name TEXT UNIQUE NOT NULL,
applied_at TIMESTAMP DEFAULT NOW()
);
"
# Apply only new migrations
for migration in supabase/migrations/*.sql; do
migration_name=$(basename "$migration")
# Check if already applied
applied=$(psql "$DATABASE_URL" -t -c "
SELECT 1 FROM _migrations WHERE name = '$migration_name'
" | xargs)
if [ -z "$applied" ]; then
echo "Applying $migration_name..."
psql "$DATABASE_URL" -f "$migration"
psql "$DATABASE_URL" -c "
INSERT INTO _migrations (name) VALUES ('$migration_name')
"
else
echo "Skipping $migration_name (already applied)"
fi
done
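The skip/apply logic above can be dry-run locally without a database by swapping the `_migrations` table for a plain file. The paths and filenames below are illustrative:

```shell
# Dry-run sketch of the tracking loop: a text file stands in for the
# _migrations table so the control flow can be exercised without Postgres.
dir=$(mktemp -d)
mkdir -p "$dir/migrations"
echo 'select 1;' > "$dir/migrations/001_init.sql"
echo 'select 2;' > "$dir/migrations/002_users.sql"
echo 001_init.sql > "$dir/applied.txt"   # pretend 001 already ran

for migration in "$dir"/migrations/*.sql; do
  name=$(basename "$migration")
  if grep -qx "$name" "$dir/applied.txt"; then
    echo "Skipping $name (already applied)"
  else
    echo "Applying $name..."
    echo "$name" >> "$dir/applied.txt"
  fi
done
# prints: Skipping 001_init.sql (already applied)
#         Applying 002_users.sql...
```

Running it twice shows the idempotency property you want from the real pipeline: the second pass skips everything.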
Adding Database Tests with pgTAP
Testing your database changes before they hit production is critical. pgTAP is a unit testing framework for Postgres, and the extension ships with Supabase's Postgres image (enable it with CREATE EXTENSION pgtap).
Writing pgTAP Tests
Create test files in supabase/tests/:
-- supabase/tests/001-rls-policies.sql
BEGIN;
SELECT plan(2);
-- Test that the expected RLS policies exist on the profiles table
SELECT policies_are(
'public',
'profiles',
ARRAY['Users can view own profile', 'Users can update own profile']
);
-- Test that users can only see their own data
-- (the tests.* helpers come from the basejump/supabase-test-helpers extension)
SELECT tests.create_supabase_user('test_user_1');
SELECT tests.authenticate_as('test_user_1');
SELECT is(
(SELECT count(*) FROM profiles)::int,
1,
'User can only see own profile'
);
SELECT tests.clear_authentication();
SELECT * FROM finish();
ROLLBACK;
CI Workflow with Tests
# .github/workflows/test.yml
name: Database Tests
on:
pull_request:
paths:
- 'supabase/**'
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Supabase CLI
uses: supabase/setup-cli@v1
with:
version: latest
- name: Start Supabase
run: supabase start
- name: Run Migrations
run: supabase db reset
- name: Run Tests
run: supabase test db
- name: Stop Supabase
if: always()
run: supabase stop
The CI Performance Problem
Here's an honest challenge: starting the full Supabase stack in CI is slow. The supabase/supabase Docker images total several gigabytes, and GitHub's ubuntu-latest runners download everything fresh on each run.
Some strategies to mitigate this:
1. Cache Docker images:
- name: Cache Docker images
  uses: satackey/action-docker-layer-caching@v0.0.11
  continue-on-error: true
2. Run only essential services:
- name: Start minimal Supabase
  run: |
    # Exclude services the tests don't need (service names vary by CLI version)
    supabase start -x studio,imgproxy,inbucket --ignore-health-check
    # Only wait for Postgres
    until pg_isready -h localhost -p 54322; do sleep 1; done
3. Use self-hosted runners: If you're running many tests, a self-hosted runner with cached images can reduce CI time from 5-10 minutes to under a minute.
Multi-Environment Deployment
For production systems, you'll want separate staging and production environments. Here's a workflow structure that supports both:
# .github/workflows/deploy.yml
name: Deploy
on:
push:
branches: [main, staging]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set environment
run: |
if [ "${{ github.ref }}" = "refs/heads/main" ]; then
echo "ENV=production" >> $GITHUB_ENV
echo "DATABASE_URL=${{ secrets.PRODUCTION_DB_URL }}" >> $GITHUB_ENV
else
echo "ENV=staging" >> $GITHUB_ENV
echo "DATABASE_URL=${{ secrets.STAGING_DB_URL }}" >> $GITHUB_ENV
fi
- name: Run Migrations
run: |
echo "Deploying to ${{ env.ENV }}"
supabase db push --db-url "${{ env.DATABASE_URL }}"
Implementing Preview Environments
For pull request preview environments, you can spin up temporary Supabase instances. This pairs well with our database branching guide.
# .github/workflows/preview.yml
name: Preview Environment
on:
pull_request:
types: [opened, synchronize]
jobs:
create-preview:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Create Preview Database
env:
ADMIN_DB_URL: ${{ secrets.ADMIN_DATABASE_URL }}
run: |
DB_NAME="preview_pr_${{ github.event.pull_request.number }}"
# Create database
psql "$ADMIN_DB_URL" -c "CREATE DATABASE $DB_NAME"
# Apply migrations
DATABASE_URL="${ADMIN_DB_URL%/*}/$DB_NAME"
supabase db push --db-url "$DATABASE_URL"
# Output connection info
echo "Preview database created: $DB_NAME"
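The `${ADMIN_DB_URL%/*}` expansion in that step strips everything after the last `/` (the database name) so the preview name can be appended. For example, with a placeholder connection string:

```shell
# ${VAR%/*} removes the shortest trailing "/..." segment (here, the db name).
ADMIN_DB_URL="postgres://ci_user:secret@db.internal:5432/postgres"
DB_NAME="preview_pr_42"
DATABASE_URL="${ADMIN_DB_URL%/*}/$DB_NAME"
echo "$DATABASE_URL"
# prints postgres://ci_user:secret@db.internal:5432/preview_pr_42
```

Note this simple expansion assumes the connection string has no query parameters after the database name.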
Don't forget cleanup when the PR closes:
on:
pull_request:
types: [closed]
jobs:
cleanup:
runs-on: ubuntu-latest
steps:
- name: Drop Preview Database
env:
ADMIN_DB_URL: ${{ secrets.ADMIN_DATABASE_URL }}
run: |
DB_NAME="preview_pr_${{ github.event.pull_request.number }}"
# WITH (FORCE) requires Postgres 13+; it disconnects lingering sessions first
psql "$ADMIN_DB_URL" -c "DROP DATABASE IF EXISTS $DB_NAME WITH (FORCE)"
Deploying Edge Functions
If you're using Edge Functions with self-hosted Supabase, you'll need separate deployment automation:
# .github/workflows/edge-functions.yml
name: Deploy Edge Functions
on:
push:
branches: [main]
paths:
- 'supabase/functions/**'
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Deno
uses: denoland/setup-deno@v1
with:
deno-version: v1.x
- name: Deploy Functions
env:
SUPABASE_URL: ${{ secrets.SUPABASE_URL }}
SUPABASE_SERVICE_ROLE_KEY: ${{ secrets.SERVICE_ROLE_KEY }}
run: |
for func_dir in supabase/functions/*/; do
func_name=$(basename "$func_dir")
echo "Deploying $func_name..."
# Bundle function (deno bundle is removed in Deno 2, hence the v1.x pin above)
deno bundle "$func_dir/index.ts" "/tmp/$func_name.js"
# Deploy to your edge runtime
curl -X POST "$SUPABASE_URL/functions/v1/deploy" \
-H "Authorization: Bearer $SUPABASE_SERVICE_ROLE_KEY" \
-H "Content-Type: application/javascript" \
--data-binary "@/tmp/$func_name.js"
done
Security Considerations
When building CI/CD pipelines for databases, security is paramount:
Secret Management
Never commit secrets. Use GitHub's encrypted secrets for:
- Database connection strings
- Service role keys
- API tokens
env:
DATABASE_URL: ${{ secrets.DATABASE_URL }} # ✓ Good
# DATABASE_URL: "postgres://user:pass@host:5432/db" # ✗ Never do this
Limit Database Permissions
Create a CI-specific database user with limited permissions:
-- Create CI user with migration-only permissions
CREATE USER ci_user WITH PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE postgres TO ci_user;
GRANT USAGE ON SCHEMA public TO ci_user;
GRANT ALL ON ALL TABLES IN SCHEMA public TO ci_user;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO ci_user;
-- But not superuser access
-- GRANT pg_execute_server_program TO ci_user; ✗ Don't do this
Network Security
If your Supabase instance isn't exposed to the internet, you'll need to either:
- Run CI within your private network (self-hosted runners)
- Set up a secure tunnel during CI runs
- Whitelist GitHub Actions IP ranges (not recommended—they change frequently)
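As an illustration of the tunnel approach, a workflow step might open an SSH tunnel through a bastion host before running migrations. Everything here (hostnames, secret names, the bastion itself) is a placeholder for your own setup, not a standard configuration:

```yaml
- name: Open tunnel to database
  env:
    SSH_KEY: ${{ secrets.TUNNEL_SSH_KEY }}
  run: |
    mkdir -p ~/.ssh
    echo "$SSH_KEY" > ~/.ssh/id_ed25519
    chmod 600 ~/.ssh/id_ed25519
    # Forward localhost:5432 to the private Postgres host via the bastion
    ssh -o StrictHostKeyChecking=accept-new -f -N \
      -L 5432:db.internal:5432 ci@bastion.example.com
```

Later steps can then use a connection string pointing at localhost:5432 as if the database were local to the runner.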
Simplifying CI/CD with Supascale
If managing all this automation feels overwhelming, Supascale provides a simpler path. With the REST API, you can integrate deployment automation without building everything from scratch:
- name: Trigger Supascale Deployment
run: |
curl -X POST "https://your-supascale-instance/api/v1/projects/$PROJECT_ID/deploy" \
-H "Authorization: Bearer ${{ secrets.SUPASCALE_API_KEY }}" \
-H "Content-Type: application/json"
Supascale handles backups automatically before deployments, provides one-click restore if something goes wrong, and gives you visibility into what's deployed where—without building custom tooling.
Conclusion
Building CI/CD for self-hosted Supabase requires more effort than the managed platform, but it's achievable with the right approach:
- Start with migrations: Automate database changes first—it's the highest-value automation
- Add testing: pgTAP tests catch issues before they reach production
- Build incrementally: Don't try to automate everything at once
- Consider the trade-offs: Sometimes a managed solution or tool like Supascale saves more time than building everything yourself
The self-hosted Supabase community continues to push for better CI/CD support, and Supabase has been investing more in this area. But for now, these patterns will help you build reliable deployment automation for your infrastructure.
