You've been running on Supabase Cloud, but now you need to move to self-hosted. Maybe you're hitting pricing limits with multiple projects, facing compliance requirements that demand data residency, or you simply want full control over your infrastructure. Whatever the reason, migrating from Supabase Cloud to a self-hosted deployment is entirely possible—but it's not as simple as clicking an export button.
This guide walks you through the complete migration process: exporting your database, transferring storage files, configuring authentication, and avoiding the pitfalls that catch most teams mid-migration.
Before You Migrate: What You're Getting Into
Let's be honest about what self-hosting means. Supabase Cloud handles backups, updates, SSL certificates, and monitoring automatically. When you self-host, all of that becomes your responsibility.
Self-hosting makes sense when:
- You're running multiple projects (Cloud charges $25/project/month minimum)
- Compliance requires data residency in specific regions
- You need features without usage-based overage charges
- You want to avoid project pausing on the free tier
Self-hosting adds operational burden:
- You manage updates and security patches
- Backups are your responsibility (see our backup guide)
- Downtime troubleshooting falls on your team
- No official support—only community help
For a detailed cost comparison, check out The True Cost of Self-Hosting Supabase.
Prerequisites
Before starting the migration:
- Self-hosted Supabase instance running: Follow our deployment guide if you haven't set one up yet
- Access to Supabase Cloud project: You'll need the connection string and dashboard access
- PostgreSQL tools: `psql` and `pg_dump` installed locally
- Sufficient storage: Your self-hosted instance needs space for your database plus growth room
- Downtime window: Plan for maintenance—some steps require stopping writes to your cloud database
Step 1: Audit Your Cloud Project
Before exporting anything, document what you're working with:
Check Your Database Size
In Supabase Cloud dashboard, navigate to Database > Database Settings to see your current size. This determines:
- How long the export will take
- Storage requirements on your self-hosted instance
- Whether you need to chunk the migration
List Your Storage Buckets
Go to Storage in the dashboard. Note:
- Number of buckets
- Total file count and size
- Public vs private bucket settings
- Any RLS policies on buckets
Document Auth Configuration
In Authentication > Providers, record:
- Enabled OAuth providers (Google, GitHub, Discord, etc.)
- Redirect URLs configured
- Email template customizations
- Any custom SMTP settings
This documentation prevents "I forgot we had that" moments during migration.
Step 2: Export Your Database
The core of migration is getting your PostgreSQL data from Cloud to self-hosted. Supabase recommends using pg_dump for this.
Get Your Cloud Connection String
In the Supabase Cloud dashboard:
- Go to Settings > Database
- Find the Connection string section
- Copy the URI—it looks like:
```
postgresql://postgres:[PASSWORD]@db.[PROJECT_REF].supabase.co:5432/postgres
```
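If you script the export, it helps to split the URI into the pieces `psql` and `pg_dump` accept as separate flags. A minimal stdlib-only sketch (the project ref and password below are made-up placeholders):

```python
from urllib.parse import unquote, urlsplit

def parse_pg_uri(uri: str) -> dict:
    """Break a postgresql:// URI into its connection components."""
    parts = urlsplit(uri)
    return {
        "user": unquote(parts.username or ""),
        "password": unquote(parts.password or ""),
        "host": parts.hostname or "",
        "port": parts.port or 5432,
        "dbname": parts.path.lstrip("/") or "postgres",
    }

# Hypothetical project ref and password, for illustration only
info = parse_pg_uri("postgresql://postgres:s3cret@db.abcd1234.supabase.co:5432/postgres")
```

Note that a password containing special characters must be percent-encoded in the URI; `unquote` reverses that here.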
Create the Database Dump
Run this from your local machine:
```bash
pg_dump "postgresql://postgres:[PASSWORD]@db.[PROJECT_REF].supabase.co:5432/postgres" \
  --clean \
  --if-exists \
  --no-owner \
  --no-privileges \
  -F c \
  -f supabase_backup.dump
```
Flag explanations:
- `--clean`: Drops existing objects before recreating
- `--if-exists`: Prevents errors if objects don't exist
- `--no-owner`: Skips ownership commands (important since users differ between instances)
- `--no-privileges`: Skips privilege grants (you'll handle these on the target)
- `-F c`: Custom format (compressed, allows parallel restore)
For large databases (10GB+), switch to the directory format (`-F d`, with `-f` naming a directory) and add `--jobs=4` to dump tables in parallel; `pg_dump` only supports parallel dumping in the directory format.
Alternative: Schema and Data Separately
For more control, export schema and data separately:
```bash
# Schema only
pg_dump "postgresql://postgres:[PASSWORD]@db.[PROJECT_REF].supabase.co:5432/postgres" \
  --schema-only \
  --no-owner \
  -f schema.sql

# Data only
pg_dump "postgresql://postgres:[PASSWORD]@db.[PROJECT_REF].supabase.co:5432/postgres" \
  --data-only \
  --no-owner \
  -f data.sql
```
This lets you review and modify the schema before importing.
Step 3: Prepare Your Self-Hosted Instance
Before importing, configure your self-hosted Supabase to avoid conflicts.
Stop Your Application
If anything is writing to your self-hosted database, stop it. You want a clean import without conflicts.
Connect to Self-Hosted PostgreSQL
Find your self-hosted database connection. If using the default Docker setup:
```bash
docker exec -it supabase-db psql -U postgres
```
Or connect via the exposed port (default 5432):
```bash
psql -h localhost -p 5432 -U postgres -d postgres
```
Handle Extension Conflicts
Self-hosted Supabase comes with extensions pre-installed. Your cloud export might try to create them again. If you see errors like ERROR: extension already exists, you have two options:
- Pre-drop conflicting extensions (if you can afford to lose extension data):
```sql
DROP EXTENSION IF EXISTS pg_graphql CASCADE;
DROP EXTENSION IF EXISTS pg_stat_statements CASCADE;
-- Add others as needed
```
- Edit the dump's table of contents to skip `CREATE EXTENSION` entries (safer with the custom format: generate a listing with `pg_restore --list`, comment out the extension lines, then restore with `--use-list`).
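The `--list`/`--use-list` route can be scripted: `pg_restore --list` emits a text table of contents, and any line starting with `;` is treated as a comment and skipped on restore. A minimal sketch that comments out extension entries (the TOC lines below are illustrative, not real output):

```python
def comment_out_extensions(toc_lines: list[str]) -> list[str]:
    """Prefix EXTENSION entries with ';' so pg_restore --use-list skips them."""
    out = []
    for line in toc_lines:
        if " EXTENSION " in line and not line.lstrip().startswith(";"):
            out.append("; " + line)
        else:
            out.append(line)
    return out

# Illustrative table-of-contents lines
toc = [
    "10; 3079 16385 EXTENSION - pg_graphql",
    "215; 1259 16679 TABLE public profiles postgres",
]
filtered = comment_out_extensions(toc)
```

You would write `filtered` back to a file and pass it to `pg_restore --use-list=edited.list supabase_backup.dump`.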
Step 4: Import the Database
Using Custom Format Dump
```bash
pg_restore -h localhost -p 5432 -U postgres -d postgres \
  --clean \
  --if-exists \
  --no-owner \
  --no-privileges \
  supabase_backup.dump
```
Monitor for errors. Common issues:
- Extension conflicts: Handle as described above
- Role doesn't exist: Supabase Cloud has roles like `authenticator` and `anon` that might not exist in your self-hosted setup yet
- Schema already exists: The default Docker setup creates schemas; use `--clean` to drop them first
Using SQL Files
If you exported schema and data separately:
```bash
psql -h localhost -p 5432 -U postgres -d postgres -f schema.sql
psql -h localhost -p 5432 -U postgres -d postgres -f data.sql
```
Verify the Import
After import, verify your tables exist and have data:
```sql
-- Check table counts
SELECT schemaname, tablename FROM pg_tables WHERE schemaname = 'public';

-- Verify row counts on critical tables
SELECT COUNT(*) FROM your_important_table;
```
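Beyond spot checks, it's worth comparing row counts from both instances table by table before decommissioning Cloud. A client-agnostic sketch of the comparison step, assuming you've already collected counts from each side (e.g. with `psql`):

```python
def diff_row_counts(cloud: dict[str, int], self_hosted: dict[str, int]) -> dict:
    """Map table name -> (expected, actual) for any mismatched or missing table."""
    mismatches = {}
    for table, expected in cloud.items():
        actual = self_hosted.get(table)  # None if the table never made it over
        if actual != expected:
            mismatches[table] = (expected, actual)
    return mismatches

# Example counts, gathered from each instance beforehand
problems = diff_row_counts(
    {"profiles": 120, "orders": 4501},
    {"profiles": 120, "orders": 4498},
)
```

An empty result means every audited table matches; anything else tells you exactly where to re-run the data load.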
Step 5: Migrate Storage Files
Database migration doesn't include Storage files. Supabase Storage keeps files in a separate system, and you need to transfer them explicitly.
Option 1: Download and Re-upload (Small Files)
For small storage buckets (under a few GB), the simplest approach:
- Download files from Supabase Cloud dashboard or via the API
- Upload to your self-hosted instance
Using the Supabase JS client:
```javascript
// Download from cloud
const { data, error } = await cloudClient.storage
  .from('your-bucket')
  .download('path/to/file.jpg')

if (error) throw error

// Upload to self-hosted
await selfHostedClient.storage
  .from('your-bucket')
  .upload('path/to/file.jpg', data)
```
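For more than a handful of files you'll want a loop that tolerates individual failures rather than aborting on the first one. A client-agnostic sketch, where `download` and `upload` stand in for the storage calls above and the in-memory dicts are stand-ins for real buckets:

```python
def migrate_files(paths, download, upload):
    """Copy each file; collect (path, error) pairs instead of stopping at the first failure."""
    failures = []
    for path in paths:
        try:
            upload(path, download(path))
        except Exception as exc:
            failures.append((path, exc))
    return failures

# In-memory stand-ins for the cloud and self-hosted storage clients
source = {"avatars/a.jpg": b"\xff\xd8...", "avatars/b.jpg": b"\xff\xd8..."}
dest = {}
failed = migrate_files(list(source), source.__getitem__, dest.__setitem__)
```

Re-running the loop with only the failed paths gives you cheap retry semantics.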
Option 2: Direct S3 Sync (Recommended for Large Files)
Supabase Storage uses S3-compatible storage. If you have direct access (or can get a temporary export), use AWS CLI:
```bash
aws s3 sync s3://cloud-bucket/ s3://self-hosted-bucket/ \
  --endpoint-url https://your-self-hosted-storage-endpoint
```
Recreate Bucket Policies
After file transfer, recreate your bucket settings:
- Public/private status
- RLS policies (these are in the database, so they might have migrated)
- File size limits
Step 6: Configure Authentication
Auth configuration doesn't export with the database. You need to reconfigure OAuth providers manually.
Migrate OAuth Providers
For self-hosted Supabase, OAuth configuration happens in environment variables rather than the dashboard. In your .env or docker-compose.yml:
```bash
# Google OAuth
GOTRUE_EXTERNAL_GOOGLE_ENABLED=true
GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID=your-client-id
GOTRUE_EXTERNAL_GOOGLE_SECRET=your-client-secret
GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI=https://your-domain.com/auth/v1/callback

# GitHub OAuth
GOTRUE_EXTERNAL_GITHUB_ENABLED=true
GOTRUE_EXTERNAL_GITHUB_CLIENT_ID=your-client-id
GOTRUE_EXTERNAL_GITHUB_SECRET=your-client-secret
```
Critical: Update your OAuth provider apps (Google Cloud Console, GitHub OAuth Apps, etc.) with new redirect URIs pointing to your self-hosted domain.
For detailed OAuth configuration, see our auth providers documentation.
User Sessions Will Reset
After migration, all user sessions become invalid. Users will need to log in again. This is expected—JWT secrets differ between instances.
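Why sessions die is easy to demonstrate: an HS256 access token is just an HMAC over its header and payload, so a token minted under the cloud instance's secret can never verify under your self-hosted secret. A stdlib-only sketch (the secrets here are made up):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Mint a minimal HS256 JWT: b64url(header).b64url(payload).signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: str) -> bool:
    """Recompute the HMAC over header.payload and compare signatures."""
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

# Made-up secrets: a token minted by Cloud fails against the new secret
token = sign_jwt({"sub": "user-1", "role": "authenticated"}, "cloud-jwt-secret")
```

Every outstanding token fails verification the same way, which is exactly why users must sign in again after the cutover.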
Step 7: Update Your Application
Your application code needs to point to the new self-hosted instance.
Update Environment Variables
```bash
# Before (Cloud)
SUPABASE_URL=https://[PROJECT_REF].supabase.co
SUPABASE_ANON_KEY=eyJhbGciOiJI...

# After (Self-hosted)
SUPABASE_URL=https://api.your-domain.com
SUPABASE_ANON_KEY=your-self-hosted-anon-key
```
Test Critical Paths
Before going live, test:
- User authentication (sign up, sign in, OAuth)
- Database reads and writes
- Storage uploads and downloads
- Realtime subscriptions (if you use them)
- Edge functions (if migrated)
Common Migration Pitfalls
Migrations Don't Auto-Run
A pain point the community discusses frequently: database migrations that ran on Supabase Cloud don't automatically apply to self-hosted. Your dump includes the final schema state, not the migration history. If you use Supabase CLI migrations going forward, you'll need to baseline your migration history.
Extension Version Mismatches
Supabase Cloud might run different PostgreSQL or extension versions than your self-hosted instance. Check compatibility, especially for:
- `pgvector` (AI/vector search)
- `pg_graphql`
- `pgsodium` (encryption)
Realtime Subscriptions Need Replication Setup
If you use Supabase Realtime, ensure your self-hosted PostgreSQL has logical replication enabled. The default Docker setup handles this, but custom PostgreSQL installations might not.
Storage URLs Change
Files uploaded to Cloud Storage have URLs like https://[PROJECT_REF].supabase.co/storage/v1/.... After migration, URLs point to your self-hosted domain. Update any hardcoded URLs in your database or application.
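A sketch of the rewrite for URLs stored in text columns or templates; the project ref and self-hosted domain below are placeholders:

```python
def rewrite_storage_urls(text: str, project_ref: str, new_base: str) -> str:
    """Point Supabase Cloud storage URLs at the self-hosted storage endpoint."""
    old = f"https://{project_ref}.supabase.co/storage/v1"
    new = new_base.rstrip("/") + "/storage/v1"
    return text.replace(old, new)

# Hypothetical stored value containing a hardcoded Cloud URL
html = '<img src="https://abcd1234.supabase.co/storage/v1/object/public/avatars/me.png">'
migrated = rewrite_storage_urls(html, "abcd1234", "https://api.your-domain.com")
```

The same transformation can be run inside the database with an `UPDATE ... SET col = replace(col, ...)` per affected column.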
Using Supascale for Simpler Migrations
The manual process above works, but it's operationally heavy. Supascale streamlines self-hosted Supabase management with features designed for exactly this use case:
- Automated backups to S3 with one-click restore—no more manual pg_dump scripts
- Custom domains with SSL configured through a UI, not YAML files
- OAuth provider configuration without editing docker-compose.yml
- Selective service deployment—run only what you need
At $39.99 one-time for unlimited projects, it eliminates the ongoing operational tax of self-hosting while keeping you in control of your infrastructure.
Post-Migration Checklist
After migration, ensure:
- [ ] Database queries return expected data
- [ ] User authentication works (including OAuth)
- [ ] Storage files are accessible
- [ ] Realtime subscriptions connect
- [ ] RLS policies are active and working
- [ ] Automated backups are configured (they're not automatic anymore!)
- [ ] Monitoring is in place for your self-hosted instance
- [ ] DNS is updated if switching domains
- [ ] Old Cloud project is paused or deleted (to stop billing)
Conclusion
Migrating from Supabase Cloud to self-hosted is a well-defined process, but it requires careful planning. The database export is straightforward; the complexity lies in storage transfer, auth reconfiguration, and all the operational setup that Cloud handled invisibly.
For teams making this transition, the payoff is significant: predictable costs, data control, and no per-project fees. But go in with realistic expectations about the maintenance burden you're taking on.
If you want the cost benefits of self-hosting without the migration complexity becoming your new full-time job, Supascale handles the operational pieces so you can focus on building your application.
