Building a SaaS application means isolating customer data securely while keeping your infrastructure manageable. If you're self-hosting Supabase, understanding multi-tenancy patterns is crucial—get it wrong, and you're looking at data leaks, performance nightmares, or an operational burden that defeats the purpose of self-hosting in the first place.
This guide walks through the practical approaches to multi-tenant architecture with self-hosted Supabase, including the trade-offs that will determine which pattern fits your use case.
What Is Multi-Tenancy (And Why Should You Care)?
Multi-tenancy means serving multiple customers (tenants) from a single application instance while keeping their data isolated. In the context of Supabase, you have three main approaches:
- Single database, shared tables — All tenants share the same tables, isolated by a tenant_id column
- Single database, separate schemas — Each tenant gets their own PostgreSQL schema within one database
- Separate databases/projects — Each tenant gets an entirely separate Supabase instance
Each approach carries different trade-offs in complexity, cost, and isolation guarantees.
Approach 1: Shared Tables with Row-Level Security
This is the most common pattern for SaaS applications and the one PostgreSQL's Row-Level Security (RLS) was designed for. Every table includes a tenant_id column, and RLS policies ensure users can only access rows belonging to their tenant.
Setting Up the Foundation
First, create a tenants table and add tenant_id to your data tables:
-- Create tenants table
CREATE TABLE public.tenants (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  name TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Example: A projects table with tenant isolation
CREATE TABLE public.projects (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  tenant_id UUID NOT NULL REFERENCES public.tenants(id),
  name TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Index for performance
CREATE INDEX idx_projects_tenant_id ON public.projects(tenant_id);
Storing Tenant ID in User Metadata
The key to making this work is storing the tenant ID in the user's app_metadata. You must use app_metadata and not user_metadata—the former is server-controlled and secure, while the latter can be modified by the user.
When a user signs up or is invited to a tenant:
-- Function to set tenant_id on user (call from server-side only)
CREATE OR REPLACE FUNCTION set_user_tenant(user_id UUID, tenant_id UUID)
RETURNS VOID AS $$
BEGIN
UPDATE auth.users
SET raw_app_meta_data = raw_app_meta_data || jsonb_build_object('tenant_id', tenant_id)
WHERE id = user_id;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
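The `||` operator in that UPDATE performs a shallow JSONB merge: top-level keys on the right side overwrite matching keys on the left, and everything else survives. A minimal Python sketch of the same semantics using plain dicts (the `merge_app_metadata` helper is illustrative, not part of Supabase):

```python
import uuid

def merge_app_metadata(existing: dict, tenant_id: str) -> dict:
    """Mimic Postgres's jsonb `||`: shallow merge, right side wins."""
    return {**existing, "tenant_id": tenant_id}

meta = {"provider": "email", "tenant_id": "old-tenant"}
tid = str(uuid.uuid4())
updated = merge_app_metadata(meta, tid)

assert updated["tenant_id"] == tid      # tenant_id overwritten
assert updated["provider"] == "email"   # unrelated keys preserved
```

Because the merge is shallow, assigning a new tenant never clobbers other app_metadata keys such as provider information.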
Creating RLS Policies
Now create policies that enforce tenant isolation:
-- Enable RLS
ALTER TABLE public.projects ENABLE ROW LEVEL SECURITY;
-- Policy: Users can only see their tenant's projects
CREATE POLICY "Users can view own tenant projects"
ON public.projects
FOR SELECT
USING (
tenant_id = (auth.jwt() -> 'app_metadata' ->> 'tenant_id')::UUID
);
-- Policy: Users can only insert into their tenant
CREATE POLICY "Users can insert into own tenant"
ON public.projects
FOR INSERT
WITH CHECK (
tenant_id = (auth.jwt() -> 'app_metadata' ->> 'tenant_id')::UUID
);
-- Similar policies for UPDATE and DELETE
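Both the USING and WITH CHECK expressions reduce to one predicate: the row's tenant_id must equal the tenant_id claim inside the JWT's app_metadata. A hedged Python sketch of that predicate, handy for unit-testing your application-side assumptions (the claim names mirror the SQL above; the function itself is illustrative):

```python
def row_visible(jwt_claims: dict, row: dict) -> bool:
    """Mirror the RLS expression:
    tenant_id = (auth.jwt() -> 'app_metadata' ->> 'tenant_id')
    A missing claim denies access, just as NULL = x is not true in SQL."""
    claim = (jwt_claims.get("app_metadata") or {}).get("tenant_id")
    return claim is not None and row.get("tenant_id") == claim

claims = {"sub": "user-1", "app_metadata": {"tenant_id": "t-acme"}}
assert row_visible(claims, {"id": 1, "tenant_id": "t-acme"})
assert not row_visible(claims, {"id": 2, "tenant_id": "t-other"})
assert not row_visible({"sub": "u2"}, {"id": 3, "tenant_id": None})
```

The explicit `claim is not None` guard matters: in SQL, a missing claim yields NULL, and `NULL = tenant_id` is never true, so rows stay hidden rather than leaking.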
Pros and Cons
Advantages:
- Simplest to maintain—single schema, single set of migrations
- Easy to query across tenants for admin dashboards
- Most cost-effective—one database serves all tenants
- Works well with Supabase's backup and restore capabilities
Disadvantages:
- Performance can degrade with millions of rows (indexing is critical)
- All tenants share database resources
- Some compliance requirements (healthcare, finance) may require stronger isolation
- A bug in your RLS policy affects all tenants
Approach 2: Schema-Per-Tenant
For applications requiring stronger isolation without the complexity of separate databases, schema-per-tenant provides a middle ground. Each tenant gets their own PostgreSQL schema with identical table structures.
Implementation Overview
-- Function to create a new tenant schema
CREATE OR REPLACE FUNCTION create_tenant_schema(tenant_name TEXT)
RETURNS TEXT AS $$
DECLARE
schema_name TEXT := 'tenant_' || replace(tenant_name, '-', '_');
BEGIN
EXECUTE format('CREATE SCHEMA %I', schema_name);
-- Create tables in the new schema
EXECUTE format('
CREATE TABLE %I.projects (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW()
)', schema_name);
RETURN schema_name;
END;
$$ LANGUAGE plpgsql;
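Because the schema name is interpolated from caller-supplied input, it's worth deriving and validating it in one place. A Python sketch of the naming rule above (`tenant_` prefix, hyphens replaced with underscores); the validation regex is an extra safeguard I'm adding, not something Supabase provides, and it assumes lowercase input:

```python
import re

def tenant_schema_name(tenant_name: str) -> str:
    """Mirror: 'tenant_' || replace(tenant_name, '-', '_'),
    rejecting anything that isn't a safe lowercase identifier."""
    name = "tenant_" + tenant_name.replace("-", "_")
    if not re.fullmatch(r"[a-z_][a-z0-9_]*", name):
        raise ValueError(f"unsafe schema name: {name!r}")
    return name

assert tenant_schema_name("acme-corp") == "tenant_acme_corp"

try:
    tenant_schema_name("acme; drop schema public")  # rejected
except ValueError:
    pass
```

Postgres's `format('%I', ...)` already quotes identifiers safely, so this check is belt-and-braces, but it also keeps schema names predictable for the search_path logic below.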
You'll need to dynamically set the search_path based on the authenticated user's tenant:
-- Set search_path based on user's tenant
CREATE OR REPLACE FUNCTION set_tenant_context()
RETURNS VOID AS $$
DECLARE
tenant_schema TEXT;
BEGIN
-- Must produce the same name create_tenant_schema generated;
-- tenant IDs are UUIDs, so normalize their hyphens to underscores
tenant_schema := 'tenant_' || replace((auth.jwt() -> 'app_metadata' ->> 'tenant_id'), '-', '_');
EXECUTE format('SET search_path TO %I, public', tenant_schema);
END;
$$ LANGUAGE plpgsql;
The Scaling Problem
This approach hits walls faster than you might expect. As the number of tenant schemas grows, PostgreSQL's catalog tables balloon. Foreign key management becomes a nightmare, and migration tooling (including Supabase's) isn't designed for hundreds of schemas.
In community discussions, teams report performance degradation starting around 50-100 schemas. If you're building for enterprise customers where you'll have dozens of tenants, this can work. If you're building a product for thousands of small teams, it won't.
Approach 3: Project-Per-Tenant
For maximum isolation, you can run separate Supabase instances for each tenant. With a tool like Supascale, this becomes practical because spinning up new instances is automated rather than manual.
When This Makes Sense
- Compliance requirements — Healthcare (HIPAA) or financial services may mandate complete data isolation
- Enterprise customers — Large customers often want dedicated infrastructure
- Resource isolation — One tenant's traffic spike won't affect others
- Geographic requirements — Deploy tenant instances in specific regions
The Operational Challenge
Running multiple Supabase instances multiplies your operational burden:
- Each instance needs monitoring
- Backups must be configured per-instance
- Updates and migrations across dozens of instances become complex
- SSL certificates and custom domains per instance
This is where Supascale shines—it provides a single dashboard to manage multiple self-hosted Supabase projects, with automated backups, centralized monitoring, and simplified deployments. Rather than writing scripts to manage 50 Docker Compose files, you get a control plane.
Security Considerations Across All Approaches
Regardless of which pattern you choose, several security practices apply:
Always Index Your Tenant Column
Without proper indexing, queries with WHERE tenant_id = X will table-scan as your data grows:
-- Composite indexes for common query patterns
CREATE INDEX idx_projects_tenant_created
ON public.projects(tenant_id, created_at DESC);
Use Tenant-Aware Functions
Wrap common operations in functions that enforce tenant context:
CREATE OR REPLACE FUNCTION get_tenant_projects()
RETURNS SETOF public.projects AS $$
BEGIN
  RETURN QUERY
  SELECT * FROM public.projects
  WHERE tenant_id = (auth.jwt() -> 'app_metadata' ->> 'tenant_id')::UUID
  ORDER BY created_at DESC;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
-- Note: SECURITY DEFINER bypasses RLS, so the WHERE clause
-- above is the only tenant guard inside this function
Test Your RLS Policies
Before going to production, verify your policies work correctly:
-- RLS is bypassed by superusers and table owners, so switch roles first
SET ROLE authenticated;
-- Simulate a user's JWT (Supabase's auth.jwt() reads request.jwt.claims)
SET request.jwt.claims = '{"sub": "user-uuid-here", "app_metadata": {"tenant_id": "tenant-uuid"}}';
-- This should only return rows for that tenant
SELECT * FROM public.projects;
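If you script these tests, building the claims payload with a JSON serializer avoids hand-quoting mistakes in the SQL literal. A small Python sketch (the UUIDs are placeholders, and the statement string is just assembled, not executed):

```python
import json

claims = {
    "sub": "user-uuid-here",
    "app_metadata": {"tenant_id": "tenant-uuid"},
}
# Double any single quotes so the JSON survives as a SQL string literal
literal = json.dumps(claims).replace("'", "''")
stmt = f"SET request.jwt.claims = '{literal}';"

assert '"tenant_id": "tenant-uuid"' in stmt
```

From here you can feed `stmt` to psql or your test harness before each tenant-scoped assertion.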
Authentication Limitations to Know
Supabase's authentication has a limitation with multi-tenancy: a user can't belong to multiple tenants simultaneously with different roles. The auth.users table requires unique emails, and app_metadata stores a single tenant context.
If users need access to multiple tenants (like a consultant working with multiple clients), you have two options:
- Organization/workspace model — Store tenant memberships in a separate table and check permissions in your application logic
- Multiple accounts — Users create separate accounts per tenant (not ideal for UX)
Many SaaS applications implement an organization switcher that updates the user's active tenant context via a server-side call.
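A minimal sketch of the organization/workspace model in Python: memberships live in their own table (represented here as a plain list of dicts), and the server validates a switch request before writing the new tenant into app_metadata. The table and field names are illustrative assumptions:

```python
# Illustrative membership records (would live in e.g. a tenant_members table)
memberships = [
    {"user_id": "u1", "tenant_id": "t-acme", "role": "admin"},
    {"user_id": "u1", "tenant_id": "t-globex", "role": "viewer"},
]

def can_switch_to(user_id: str, tenant_id: str) -> bool:
    """Server-side check before updating app_metadata.tenant_id."""
    return any(
        m["user_id"] == user_id and m["tenant_id"] == tenant_id
        for m in memberships
    )

assert can_switch_to("u1", "t-globex")       # consultant on two tenants
assert not can_switch_to("u1", "t-evil")     # no membership, no switch
```

Only after this check passes would the server update the user's app_metadata and mint a fresh session, so the active tenant in the JWT always reflects a verified membership.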
Choosing Your Pattern
Here's a decision framework:
| Criteria | Shared Tables | Schema-Per-Tenant | Project-Per-Tenant |
|---|---|---|---|
| Number of tenants | Thousands | Dozens | Few to dozens |
| Compliance needs | Standard | Moderate | Strict |
| Operational complexity | Low | Medium | High (without tooling) |
| Cost per tenant | Lowest | Low | Higher |
| Data isolation | Logical | Stronger logical | Complete |
| Cross-tenant queries | Easy | Possible | Requires aggregation |
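The table above can be encoded as a rough heuristic. The thresholds here are judgment calls drawn from the trade-offs discussed earlier, not hard limits:

```python
def suggest_pattern(tenant_count: int, compliance: str = "standard") -> str:
    """Rough mapping of the decision table to code.
    compliance: 'standard' | 'moderate' | 'strict'."""
    if compliance == "strict":
        return "project-per-tenant"
    if compliance == "moderate" and tenant_count <= 100:
        # catalog bloat makes schemas impractical past ~50-100 tenants
        return "schema-per-tenant"
    return "shared-tables"

assert suggest_pattern(5000) == "shared-tables"
assert suggest_pattern(30, "moderate") == "schema-per-tenant"
assert suggest_pattern(10, "strict") == "project-per-tenant"
```

Treat the output as a starting point for discussion, not a verdict; real decisions also weigh team capacity and customer contracts.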
For most SaaS startups, shared tables with RLS provides the right balance. You get strong isolation guarantees with PostgreSQL's battle-tested RLS, minimal operational overhead, and the flexibility to serve thousands of tenants.
If compliance or enterprise requirements demand more, consider project-per-tenant with proper tooling. Self-hosting makes this economically viable compared to Supabase Cloud pricing, especially at scale.
Managing Multiple Tenant Instances with Supascale
If you do go the project-per-tenant route, Supascale's multi-project management eliminates most of the operational burden:
- Centralized backup management — Configure S3 backup destinations once, apply to all instances
- Unified OAuth configuration — Set up OAuth providers through a consistent UI
- One-click provisioning — Spin up new tenant instances in minutes
- API-driven automation — Integrate tenant provisioning into your signup flow
The one-time purchase model means your cost doesn't scale with the number of instances you manage—you pay once and run unlimited projects.
Conclusion
Multi-tenancy in self-hosted Supabase isn't a solved problem with a single answer. The right architecture depends on your compliance requirements, expected tenant count, and operational capacity.
Start with shared tables and RLS if you're building a typical SaaS. It's what PostgreSQL was designed for, and Supabase's RLS implementation is production-ready. Move to schema-per-tenant or project-per-tenant only when specific requirements demand it.
Whatever approach you choose, proper indexing, tested RLS policies, and understanding the authentication limitations will save you from painful refactors later.
