If you've ever seen the dreaded "max connections reached" error on your self-hosted Supabase instance, you're not alone. Connection pooling is one of the most misunderstood aspects of self-hosting Supabase, and getting it wrong can bring your application to a grinding halt during traffic spikes.
This guide walks you through everything you need to know about connection pooling in self-hosted Supabase—from understanding how Supavisor works to configuring it correctly for production workloads.
## Why Connection Pooling Matters
PostgreSQL has a hard limit on concurrent connections, typically defaulting to 100. Each connection consumes memory (roughly 10MB per connection), and establishing new connections adds latency. Without pooling, a simple traffic spike can exhaust your database connections in seconds.
Connection pooling solves this by maintaining a pool of reusable database connections. Instead of each request opening a new connection, requests share connections from the pool—dramatically reducing overhead and improving throughput.
For serverless environments or applications with bursty traffic patterns, connection pooling isn't optional—it's essential.
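To make the memory math above concrete, here is a back-of-the-envelope sketch using the rough ~10 MB-per-connection figure (an approximation, not an exact measurement):

```javascript
// Rough estimate of memory consumed by PostgreSQL connections,
// using the ~10 MB-per-connection approximation mentioned above.
const MB_PER_CONNECTION = 10;

function connectionMemoryMB(connections) {
  return connections * MB_PER_CONNECTION;
}

// At the default max_connections of 100, connections alone can
// consume on the order of a gigabyte of memory.
console.log(connectionMemoryMB(100)); // → 1000 (MB)
```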
## How Self-Hosted Supabase Handles Connection Pooling
Supabase's Docker Compose setup includes Supavisor, their cloud-native connection pooler. Unlike PgBouncer (which Supabase also supports), Supavisor was built specifically for multi-tenant environments and can scale to handle millions of connections.
In a self-hosted setup, Supavisor provides two connection modes through different ports:
| Port | Mode | Use Case |
|---|---|---|
| 5432 | Session | Long-lived connections, prepared statements |
| 6543 | Transaction | Serverless, high-concurrency workloads |
Transaction mode is what most applications need. It releases connections back to the pool after each transaction, allowing many more clients to share fewer database connections.
Session mode behaves like a direct Postgres connection—the connection stays assigned to one client for the entire session. Use this when you need prepared statements or other session-specific features.
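One way to encode this mode choice in application config is a small helper like the following sketch (the function name and option names are illustrative; only the port numbers come from the table above):

```javascript
// Pick the Supavisor port based on whether the workload needs
// session-level features such as prepared statements.
function poolerPort({ needsPreparedStatements = false, longLivedSession = false } = {}) {
  // Session mode (5432) pins one client to one server connection.
  if (needsPreparedStatements || longLivedSession) return 5432;
  // Transaction mode (6543) returns connections to the pool per transaction.
  return 6543;
}

console.log(poolerPort());                                  // → 6543 (most apps)
console.log(poolerPort({ needsPreparedStatements: true })); // → 5432
```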
## Common Connection Pooling Issues (And How to Fix Them)
Based on community discussions on GitHub and forums, here are the most frequent connection pooling problems self-hosters encounter:
### Issue 1: Dashboard Configuration Doesn't Work
If you're trying to configure connection pooling through the Supabase Studio dashboard on a self-hosted instance, you'll notice the settings page either shows a loading spinner indefinitely or returns 404 errors.
This is expected behavior. The dashboard pooling configuration is designed for Supabase Cloud. For self-hosted setups, all configuration happens through environment variables.
### Issue 2: "Tenant or User Not Found" Errors
When connecting via the pooler, you might see:
```
[XX000] FATAL: Tenant or user not found
```
This happens because Supavisor requires a tenant ID in the connection string. In your .env file, you should have:
```bash
POOLER_TENANT_ID=your-tenant-id
```
Your connection string must include this tenant ID. The correct format is:
```
postgres://postgres.your-tenant-id:YOUR_PASSWORD@localhost:6543/postgres
```
Note the `postgres.your-tenant-id` username format—this is required for Supavisor to route the connection to the correct tenant.
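As a sketch, you can build the tenant-qualified connection string programmatically so the username format is never mistyped (the helper name and its inputs are illustrative):

```javascript
// Build a Supavisor connection string using the tenant-qualified
// username format described above: postgres.<tenant-id>
function poolerUrl({ tenantId, password, host, port = 6543, database = 'postgres' }) {
  const user = `postgres.${tenantId}`;
  return `postgres://${user}:${encodeURIComponent(password)}@${host}:${port}/${database}`;
}

console.log(poolerUrl({ tenantId: 'your-tenant-id', password: 'secret', host: 'localhost' }));
// → postgres://postgres.your-tenant-id:secret@localhost:6543/postgres
```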
### Issue 3: supabase_admin Using Excessive Connections
A frequently reported issue is the supabase_admin role consuming 50+ connections even with minimal load. This happens because various Supabase services (Auth, Storage, Realtime) maintain their own connection pools.
To mitigate this, adjust your PostgreSQL max_connections setting. For production self-hosted deployments, consider:
```sql
-- In postgresql.conf or via ALTER SYSTEM
ALTER SYSTEM SET max_connections = 200;
```
Then restart PostgreSQL for changes to take effect.
### Issue 4: Inconsistent Latency
Some users report variable latency when using Supavisor. If you're experiencing this:
- Ensure Supavisor has adequate resources (it's a separate container)
- Check that your pool size matches your workload
- Consider increasing `default_pool_size` for high-traffic applications
## Configuring Connection Pooling Step by Step
Here's how to properly configure connection pooling for your self-hosted Supabase instance:
### Step 1: Set Environment Variables
In your .env file, ensure these variables are configured:
```bash
# Required encryption keys
SECRET_KEY_BASE=your-64-char-secret-key
VAULT_ENC_KEY=your-32-char-key

# Pooler configuration
POOLER_TENANT_ID=your-tenant-id
POOLER_DEFAULT_POOL_SIZE=20
POOLER_MAX_CLIENT_CONN=100
```
Generate the required keys:
```bash
# For SECRET_KEY_BASE (minimum 64 characters)
openssl rand -base64 48

# For VAULT_ENC_KEY (exactly 32 characters)
openssl rand -hex 16
```
### Step 2: Choose Your Connection String
For Transaction Mode (recommended for most apps):
```
postgres://postgres.your-tenant-id:YOUR_PASSWORD@your-server:6543/postgres
```
For Session Mode (when you need prepared statements):
```
postgres://postgres.your-tenant-id:YOUR_PASSWORD@your-server:5432/postgres
```
### Step 3: Configure Pool Size
The pool size determines how many server-side connections Supavisor can open to PostgreSQL. A good rule of thumb from the Supabase documentation:
- If heavily using PostgREST: keep pool size under 40% of `max_connections`
- Otherwise: you can commit up to 80% of `max_connections` to the pool
This leaves room for Auth, Storage, and other services to function.
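That rule of thumb can be sketched as a small helper (the function and option names are illustrative; the 40%/80% fractions are the guideline above):

```javascript
// Apply the rule of thumb: cap the pooler's server-side connections
// at 40% of max_connections when PostgREST is heavily used, otherwise
// 80%, leaving headroom for Auth, Storage, and other services.
function recommendedPoolSize(maxConnections, { heavyPostgrest = false } = {}) {
  const fraction = heavyPostgrest ? 0.4 : 0.8;
  return Math.floor(maxConnections * fraction);
}

console.log(recommendedPoolSize(200, { heavyPostgrest: true })); // → 80
console.log(recommendedPoolSize(200));                           // → 160
```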
### Step 4: Update Your Application
Most Supabase client libraries work seamlessly with pooled connections. However, ensure you're using the pooler connection string (port 6543) rather than direct connections when appropriate.
For the Supabase JavaScript client, this happens automatically when using the standard client initialization. For direct database connections (like with Prisma or Drizzle), explicitly use the pooler URL:
```js
// Direct database access through the pooler
const connectionString = process.env.DATABASE_URL; // Use port 6543
```
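One way to guard against accidentally pointing an ORM at the direct connection instead of the pooler is a startup sanity check like this sketch (the helper name is illustrative):

```javascript
// Verify at startup that the configured URL targets the
// transaction-mode pooler port (6543), not a direct connection.
function assertPoolerUrl(connectionString) {
  const url = new URL(connectionString);
  if (url.port !== '6543') {
    throw new Error(`Expected pooler port 6543, got ${url.port || '(none)'}`);
  }
  return true;
}

console.log(assertPoolerUrl('postgres://postgres.tenant:pw@db.example.com:6543/postgres')); // → true
```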
## Transaction Mode vs Session Mode: When to Use Each
This decision trips up many developers. Here's a clear breakdown:
**Use Transaction Mode (port 6543) when:**
- Running serverless functions (Vercel, Netlify, AWS Lambda)
- Your app opens many short-lived connections
- You don't use prepared statements
- High concurrency is expected
**Use Session Mode (port 5432) when:**
- You need prepared statements
- Running long-lived background jobs
- Your connection maintains state across queries
- Using features that require session persistence
Most web applications should default to transaction mode. The Supabase docs recommend this for serverless environments.
## Avoiding Double Pooling
A subtle but important consideration: if your application already uses client-side pooling (like tarn.js with Knex or connection pools in your ORM), you might end up with double pooling.
While this generally works, it can lead to:
- Inefficient connection usage
- Harder debugging when issues arise
- Potential connection exhaustion under edge cases
If you're using Supavisor, consider disabling or reducing your application-side pool size. Let Supavisor handle the pooling—that's what it's designed for.
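With node-postgres, for example, that means shrinking the client-side pool so Supavisor does the real multiplexing. A minimal sketch, assuming `DATABASE_URL` points at the pooler; `max: 1` suits serverless instances, so tune it for your workload:

```javascript
// With Supavisor doing the pooling, keep the application-side pool
// minimal, e.g. one connection per serverless function instance.
const poolConfig = {
  connectionString: process.env.DATABASE_URL, // pooler URL, port 6543
  max: 1,                    // let Supavisor multiplex, not the app
  idleTimeoutMillis: 10_000, // release idle connections quickly
};

// const { Pool } = require('pg');
// const pool = new Pool(poolConfig);
console.log(poolConfig.max); // → 1
```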
## Monitoring Connection Usage
To keep an eye on your connection usage, run this query on your PostgreSQL instance:
```sql
SELECT usename, count(*) AS connections
FROM pg_stat_activity
GROUP BY usename
ORDER BY connections DESC;
```
For continuous monitoring, check out our guide on monitoring self-hosted Supabase which covers setting up Prometheus and Grafana for connection metrics.
## How Supascale Simplifies This
Managing connection pooling, environment variables, and troubleshooting configuration issues is exactly the kind of operational overhead that makes self-hosting challenging.
Supascale handles connection pooling configuration automatically when you deploy projects. The dashboard provides visibility into your database connections without requiring manual environment variable management. You deploy your project, and pooling just works.
For teams evaluating whether to self-host, our cost comparison guide breaks down the operational overhead involved in managing these configurations manually versus using a management platform.
## Troubleshooting Checklist
If you're still having connection issues after following this guide:
- Verify Supavisor is running: `docker ps | grep supavisor`
- Check Supavisor logs: `docker logs supabase-supavisor`
- Confirm tenant ID matches: the ID in your `.env` must match your connection string
- Test the connection directly: use `psql` with your pooler connection string
- Review PostgreSQL logs: connection errors often appear there first
- Check resource usage: Supavisor needs CPU and memory; ensure the container isn't starved
## Conclusion
Connection pooling is critical infrastructure for any production Supabase deployment. Self-hosted setups require manual configuration that Supabase Cloud handles automatically, but the payoff—cost savings and data control—makes it worthwhile for many teams.
The key takeaways:
- Use port 6543 for transaction mode (most applications)
- Configure `POOLER_TENANT_ID` in your environment
- Size your pool appropriately (40-80% of `max_connections`)
- Monitor connection usage proactively
For teams wanting the benefits of self-hosting without the configuration complexity, Supascale provides automated deployment with sensible defaults—including properly configured connection pooling out of the box.
