You've deployed Supabase on your own server and everything looks good in development. But how do you know it'll survive real traffic? Will your $50/month Hetzner VPS handle 1,000 concurrent users? At what point does your PostgreSQL database start dropping connections?
These questions keep developers up at night before launch. Load testing answers them before your users do.
Why Load Test Self-Hosted Supabase?
With Supabase Cloud, the platform handles scaling concerns for you. Self-hosting shifts that responsibility to your shoulders. The trade-off is clear: you save significant costs—up to 8x compared to managed pricing—but you need to validate your infrastructure can handle production workloads.
Load testing your self-hosted Supabase deployment helps you:
- Establish performance baselines before launch
- Identify bottlenecks in your specific hardware configuration
- Validate connection pooling settings under stress
- Test failover behavior if you're running high-availability setups
- Right-size your infrastructure to avoid over-provisioning
The goal isn't to match enterprise benchmarks. It's to understand exactly what your setup can handle and where it breaks.
Setting Up k6 for Supabase Load Testing
k6 is the best tool for load testing Supabase deployments. It's open-source, scriptable in JavaScript, and generates detailed metrics that help pinpoint issues.
First, install k6:
```bash
# macOS
brew install k6

# Linux (Debian/Ubuntu) — k6 isn't in the default repos;
# add the Grafana package repository first (shown in the CI/CD section below)
sudo apt-get install k6

# Docker
docker pull grafana/k6
```
Create a basic test script that exercises your Supabase REST API:
```javascript
// supabase-load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

const SUPABASE_URL = __ENV.SUPABASE_URL;
const ANON_KEY = __ENV.SUPABASE_ANON_KEY;

export const options = {
  stages: [
    { duration: '1m', target: 50 },   // Ramp up to 50 users
    { duration: '3m', target: 50 },   // Stay at 50 users
    { duration: '1m', target: 100 },  // Ramp up to 100 users
    { duration: '3m', target: 100 },  // Stay at 100 users
    { duration: '1m', target: 0 },    // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests under 500ms
    http_req_failed: ['rate<0.01'],   // Less than 1% failure rate
  },
};

export default function () {
  const headers = {
    'apikey': ANON_KEY,
    'Authorization': `Bearer ${ANON_KEY}`,
    'Content-Type': 'application/json',
  };

  // Test: fetch rows from a table
  const readRes = http.get(
    `${SUPABASE_URL}/rest/v1/your_table?select=*&limit=10`,
    { headers }
  );
  check(readRes, {
    'read status is 200': (r) => r.status === 200,
    'read response time < 200ms': (r) => r.timings.duration < 200,
  });

  sleep(1);
}
```
Run the test with your environment variables:
```bash
k6 run \
  -e SUPABASE_URL=https://your-supabase-url.com \
  -e SUPABASE_ANON_KEY=your-anon-key \
  supabase-load-test.js
```
Testing Each Supabase Service
Supabase isn't a monolith—it's roughly 12 interconnected services. A proper load test exercises each component your application actually uses.
PostgREST (REST API)
The REST API is typically your most heavily used endpoint. Test both reads and writes:
```javascript
import http from 'k6/http';
import { check } from 'k6';

const SUPABASE_URL = __ENV.SUPABASE_URL;
const ANON_KEY = __ENV.SUPABASE_ANON_KEY;

export function testPostgREST() {
  const headers = {
    'apikey': ANON_KEY,
    'Authorization': `Bearer ${ANON_KEY}`,
    'Content-Type': 'application/json',
    'Prefer': 'return=minimal',
  };

  // Write test
  const writeRes = http.post(
    `${SUPABASE_URL}/rest/v1/events`,
    JSON.stringify({
      event_type: 'load_test',
      payload: { timestamp: Date.now() },
    }),
    { headers }
  );
  check(writeRes, {
    'write status is 201': (r) => r.status === 201,
  });

  // Read test with filtering
  const readRes = http.get(
    `${SUPABASE_URL}/rest/v1/events?event_type=eq.load_test&limit=100`,
    { headers }
  );
  check(readRes, {
    'read status is 200': (r) => r.status === 200,
  });
}
```
PostgREST v14 (shipped in early 2026) brings approximately 20% higher throughput for GET requests. If you haven't upgraded your deployment recently, you're leaving performance on the table.
GoTrue (Authentication)
Auth endpoints have different performance characteristics. Password hashing is intentionally slow for security:
```javascript
import http from 'k6/http';
import { check } from 'k6';

const SUPABASE_URL = __ENV.SUPABASE_URL;
const ANON_KEY = __ENV.SUPABASE_ANON_KEY;

export function testAuth() {
  // Token refresh (fast). Note: GoTrue rotates refresh tokens, so
  // reusing the same token across iterations may fail depending on
  // your rotation/reuse-interval settings.
  const refreshRes = http.post(
    `${SUPABASE_URL}/auth/v1/token?grant_type=refresh_token`,
    JSON.stringify({ refresh_token: __ENV.REFRESH_TOKEN }),
    { headers: { 'Content-Type': 'application/json', 'apikey': ANON_KEY } }
  );
  check(refreshRes, {
    'refresh status is 200': (r) => r.status === 200,
  });

  // User info lookup (fast)
  const userRes = http.get(
    `${SUPABASE_URL}/auth/v1/user`,
    {
      headers: {
        'Authorization': `Bearer ${__ENV.ACCESS_TOKEN}`,
        'apikey': ANON_KEY,
      },
    }
  );
  check(userRes, {
    'user lookup < 100ms': (r) => r.timings.duration < 100,
  });
}
```
Don't load test signup or password-login endpoints at scale. Password hashing with bcrypt is deliberately expensive, so hammering those endpoints just saturates CPU by design and tells you nothing useful about capacity. Test token operations instead; they represent what your application actually does at steady state.
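To keep the traffic mix realistic, k6's `scenarios` option can drive CRUD and auth endpoints at independent rates. A minimal sketch (the rates and `exec` names are illustrative, not from the scripts above; in a k6 script you would write `export const options = ...`):

```javascript
// k6 scenarios: run two workloads in parallel at different arrival rates.
// Each `exec` must name an exported function in the same script.
const options = {
  scenarios: {
    crud: {
      executor: 'constant-arrival-rate',
      rate: 200,             // 200 iterations/s of read/write traffic
      timeUnit: '1s',
      duration: '5m',
      preAllocatedVUs: 100,
      exec: 'testPostgREST',
    },
    auth: {
      executor: 'constant-arrival-rate',
      rate: 10,              // token refreshes are far rarer than reads
      timeUnit: '1s',
      duration: '5m',
      preAllocatedVUs: 20,
      exec: 'testAuth',
    },
  },
};
```

The `constant-arrival-rate` executor holds request rate steady regardless of response times, which is what you want when measuring capacity: a slowing server shouldn't quietly reduce the load you apply.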
Realtime (WebSockets)
Testing WebSocket connections requires k6's WebSocket API:
```javascript
import ws from 'k6/ws';
import { check } from 'k6';

const SUPABASE_URL = __ENV.SUPABASE_URL;
const ANON_KEY = __ENV.SUPABASE_ANON_KEY;

export function testRealtime() {
  const url = `wss://${SUPABASE_URL.replace('https://', '')}/realtime/v1/websocket?apikey=${ANON_KEY}&vsn=1.0.0`;

  const res = ws.connect(url, {}, function (socket) {
    socket.on('open', () => {
      // Join a channel
      socket.send(JSON.stringify({
        topic: 'realtime:public:events',
        event: 'phx_join',
        payload: {},
        ref: '1',
      }));
    });

    socket.on('message', (msg) => {
      const data = JSON.parse(msg);
      check(data, {
        'received reply': (d) => d.event === 'phx_reply',
      });
    });

    // Hold the connection for 5 seconds, then disconnect
    socket.setTimeout(() => {
      socket.close();
    }, 5000);
  });

  check(res, {
    'WebSocket connected': (r) => r && r.status === 101,
  });
}
```
Supabase Realtime handles 10,000+ concurrent connections on properly configured infrastructure. Your bottleneck is usually PostgreSQL's logical replication throughput, not the Realtime service itself.
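Since logical replication is the usual ceiling, it's worth watching replication slot lag while the Realtime test runs. This query uses standard PostgreSQL catalogs, nothing Supabase-specific:

```sql
-- How far behind each replication slot's consumer is, in WAL bytes
SELECT slot_name,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)
       ) AS replication_lag
FROM pg_replication_slots;
```

If the lag grows steadily during the test rather than hovering near zero, Realtime consumers can't keep up with write volume.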
Storage
Storage load testing focuses on upload/download latency:
```javascript
import http from 'k6/http';
import { check } from 'k6';
import { randomBytes } from 'k6/crypto';

const SUPABASE_URL = __ENV.SUPABASE_URL;
const ANON_KEY = __ENV.SUPABASE_ANON_KEY;
// Uploads need elevated rights unless your bucket policies allow anon writes
const SERVICE_KEY = __ENV.SUPABASE_SERVICE_KEY;

export function testStorage() {
  const headers = {
    'apikey': ANON_KEY,
    'Authorization': `Bearer ${SERVICE_KEY}`,
  };

  // Small file upload: 1KB of random bytes
  const smallFile = randomBytes(1024);
  const uploadRes = http.post(
    `${SUPABASE_URL}/storage/v1/object/test-bucket/load-test-${Date.now()}.bin`,
    smallFile,
    { headers: { ...headers, 'Content-Type': 'application/octet-stream' } }
  );
  check(uploadRes, {
    'upload status is 200': (r) => r.status === 200,
  });
}
```
Interpreting Results and Setting Baselines
After running your tests, k6 outputs metrics like:
```
http_req_duration..............: avg=45.2ms min=12ms med=38ms max=892ms p(90)=78ms p(95)=124ms
http_req_failed................: 0.23%  ✓ 47   ✗ 20153
http_reqs......................: 20200  336.67/s
vus............................: 100    min=0  max=100
```
Key metrics to watch:
| Metric | Healthy Target | Warning Signs |
|---|---|---|
| p95 latency | < 500ms | > 1s indicates bottleneck |
| Error rate | < 1% | > 5% means capacity exceeded |
| RPS | Baseline for your hardware | Sudden drops signal throttling |
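If you ever need to recompute these figures from raw samples (say, from exported results), p95 and error rate are simple to derive. A quick sketch in plain JavaScript with hypothetical sample data; note k6 interpolates percentiles, so its values can differ slightly on small samples:

```javascript
// p95 via the nearest-rank method: sort, take the value at the 95th rank.
function p95(durationsMs) {
  const sorted = [...durationsMs].sort((a, b) => a - b);
  const rank = Math.ceil(0.95 * sorted.length) - 1; // 0-based index
  return sorted[rank];
}

// Error rate: fraction of responses with a 4xx/5xx status.
function errorRate(statusCodes) {
  const failed = statusCodes.filter((s) => s >= 400).length;
  return failed / statusCodes.length;
}

// Hypothetical samples: 100 requests, latencies 1..100 ms, 2 failures
const durations = Array.from({ length: 100 }, (_, i) => i + 1);
const statuses = Array(98).fill(200).concat([500, 503]);

console.log(p95(durations));      // → 95
console.log(errorRate(statuses)); // → 0.02
```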
Document your baselines. A $50/month Hetzner VPS (8 vCPU, 32GB RAM) typically handles 300-500 RPS for simple CRUD operations with properly tuned PostgreSQL.
Common Bottlenecks and Fixes
Connection Pool Exhaustion
The most common failure mode. PostgreSQL has a default connection limit of 100. With Supabase's multiple services all connecting, plus your application, you can exhaust this quickly.
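To see which services are actually holding connections during a test, query `pg_stat_activity` (standard PostgreSQL, nothing Supabase-specific):

```sql
-- Open connections grouped by connecting application and state
SELECT application_name, state, count(*)
FROM pg_stat_activity
GROUP BY application_name, state
ORDER BY count(*) DESC;
```

A large pool of `idle` connections from one service is usually the first place to reclaim headroom.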
Check connection pooling configuration. PgBouncer in transaction mode lets you handle thousands of concurrent requests with fewer actual connections.
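A minimal PgBouncer fragment for transaction pooling might look like this (the pool sizes and addresses are illustrative; tune them to your hardware):

```ini
; pgbouncer.ini — transaction-pooling sketch
[databases]
postgres = host=127.0.0.1 port=5432 dbname=postgres

[pgbouncer]
pool_mode = transaction        ; release server conns between transactions
max_client_conn = 2000         ; clients PgBouncer will accept
default_pool_size = 20         ; real PostgreSQL connections per db/user pair
listen_addr = 0.0.0.0
listen_port = 6432
```

Be aware that transaction pooling breaks session-level features such as `LISTEN/NOTIFY` and, on PgBouncer versions before 1.21, named prepared statements.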
Disk I/O Saturation
PostgreSQL is disk-bound for write-heavy workloads. During load tests, monitor IOPS:
```bash
iostat -x 1
```
If `%util` consistently hits 100%, you need faster storage. NVMe SSDs are essential for production workloads.
Memory Pressure
Low cache hit rates mean PostgreSQL constantly reads from disk. Check with:
```sql
SELECT sum(heap_blks_hit) /
       (sum(heap_blks_hit) + sum(heap_blks_read)) AS cache_hit_ratio
FROM pg_statio_user_tables;
```
Target 99%+ cache hit ratio. If it's lower, increase shared_buffers in your PostgreSQL configuration.
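A common starting point is roughly 25% of system RAM for `shared_buffers`. On the 32GB VPS used as a baseline above, that might look like this (values are illustrative):

```sql
-- Requires a PostgreSQL restart to take effect
ALTER SYSTEM SET shared_buffers = '8GB';

-- effective_cache_size is a planner hint, not an allocation;
-- 50-75% of RAM is a common setting
ALTER SYSTEM SET effective_cache_size = '24GB';
```

`ALTER SYSTEM` writes to `postgresql.auto.conf`, so these settings survive upgrades of the main config file.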
PostgREST Schema Cache
Complex schemas can cause schema cache loading delays. PostgREST v14 improved this dramatically—loading dropped from 7 minutes to 2 seconds on complex databases. Still, minimize unnecessary database objects.
Automating Load Tests in CI/CD
Integrate load testing into your CI/CD pipeline to catch performance regressions:
```yaml
# .github/workflows/load-test.yml
name: Load Test

on:
  push:
    branches: [main]

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install k6
        run: |
          sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
          echo "deb https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
          sudo apt-get update
          sudo apt-get install k6

      - name: Run load test
        # k6 exits with a non-zero code when any threshold fails, so this
        # step fails the job on a performance regression automatically.
        # --summary-export writes a machine-readable summary you can archive.
        run: |
          k6 run \
            -e SUPABASE_URL=${{ secrets.STAGING_SUPABASE_URL }} \
            -e SUPABASE_ANON_KEY=${{ secrets.STAGING_ANON_KEY }} \
            --summary-export=summary.json \
            tests/load/supabase-load-test.js
```
Run against staging environments, not production. Use database branching to create isolated test environments with realistic data.
Pre-Launch Checklist
Before going live, validate:
- [ ] Sustained 50% above expected peak traffic for 10+ minutes
- [ ] Error rate stays under 1% at peak load
- [ ] p95 latency remains under your SLA threshold
- [ ] Connection pool doesn't exhaust under stress
- [ ] Recovery time after load spike is under 30 seconds
- [ ] Monitoring dashboards show no concerning patterns
If you're running multiple projects, Supascale's dashboard provides centralized visibility across all deployments, making it easier to spot which project needs attention during load testing.
Further Reading
- PostgreSQL Performance Tuning for Self-Hosted Supabase - Optimize the database layer
- Connection Pooling for Self-Hosted Supabase - Handle more concurrent connections
- Monitoring Self-Hosted Supabase - Set up observability
- Capacity Planning Guide - Right-size your infrastructure
Load testing isn't glamorous, but it's the difference between a confident launch and waking up to angry users. Your self-hosted infrastructure gives you control—load testing helps you use that control wisely.
Ready to simplify self-hosted Supabase management? Check out Supascale's features or view pricing for a one-time purchase that covers unlimited projects.
