Kubernetes has become the de facto standard for running containerized workloads in production. If you're already running your infrastructure on K8s, deploying self-hosted Supabase alongside your other services makes operational sense. But the reality is more complex than a simple `helm install`.
This guide walks you through deploying Supabase on Kubernetes using Helm charts, covering the community-maintained charts, production considerations, and the honest trade-offs you need to understand before committing to this path.
Why Kubernetes for Self-Hosted Supabase?
Before diving into the how, let's address the why. Kubernetes deployment makes sense if you:
- Already have a K8s cluster with established operational practices
- Need to scale individual Supabase components independently
- Want to leverage existing monitoring, logging, and secrets management
- Require multi-project deployments across namespaces
- Have compliance requirements that mandate specific infrastructure controls
However, if you're starting fresh or running a single project, Docker Compose is significantly simpler. Kubernetes adds substantial complexity, and complexity has costs.
The Supabase Kubernetes Landscape
The official Supabase team focuses primarily on their managed cloud offering. Self-hosting, including Kubernetes support, is community-driven. This means you'll be working with the supabase-community/supabase-kubernetes repository rather than official Supabase charts.
The community Helm chart deploys 12 separate pods:
- supabase-db: PostgreSQL database (the foundation)
- supabase-auth: GoTrue authentication service
- supabase-rest: PostgREST API layer
- supabase-realtime: WebSocket connections for real-time subscriptions
- supabase-storage: File storage service
- supabase-functions: Edge Functions runtime (Deno)
- supabase-studio: Dashboard UI
- supabase-kong: API gateway
- supabase-meta: Postgres metadata service
- supabase-imgproxy: Image transformation service
- supabase-analytics: Logflare analytics
- supabase-vector: Logging pipeline
That's a lot of moving parts. Each component needs proper resource allocation, health checks, and potentially its own scaling configuration.
Prerequisites
Before starting, ensure you have:
```shell
# Kubernetes cluster (1.23+)
kubectl version --client

# Helm 3.x installed
helm version

# Sufficient cluster resources
# Minimum: 4 CPU cores, 8GB RAM for a basic deployment
# Recommended: 8+ cores, 16GB+ RAM for production
```
You'll also need:
- A storage class for persistent volumes
- An ingress controller (nginx, traefik, or similar)
- A way to manage secrets (Kubernetes secrets at minimum, preferably external secrets management)
- DNS records pointing to your cluster's ingress
Step 1: Add the Helm Repository
```shell
helm repo add supabase https://supabase-community.github.io/supabase-kubernetes
helm repo update
```
Verify the chart is available:
```shell
helm search repo supabase
```
Step 2: Create Your Namespace
```shell
kubectl create namespace supabase
```
Step 3: Configure Your Values
This is where most deployments succeed or fail. The default values are not production-ready. Create a `values.yaml` file:
```yaml
# values.yaml - Production configuration
global:
  # Generate these securely - do not use defaults
  jwt:
    secret: "your-super-secret-jwt-key-at-least-32-characters"
  # Database credentials
  database:
    host: supabase-db
    port: 5432
    name: postgres
    password: "your-secure-database-password"

# Database configuration
db:
  enabled: true
  image:
    tag: 15.1.1.61 # Pin to a specific version
  persistence:
    enabled: true
    size: 50Gi
    storageClass: "your-storage-class"
  resources:
    requests:
      memory: "2Gi"
      cpu: "1000m"
    limits:
      memory: "4Gi"
      cpu: "2000m"

# Authentication service
auth:
  enabled: true
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
  environment:
    GOTRUE_SITE_URL: "https://your-app-domain.com"
    GOTRUE_URI_ALLOW_LIST: "https://your-app-domain.com/*"
    GOTRUE_DISABLE_SIGNUP: "false"

# REST API
rest:
  enabled: true
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"

# Realtime subscriptions
realtime:
  enabled: true
  resources:
    requests:
      memory: "512Mi"
      cpu: "200m"

# Storage
storage:
  enabled: true
  persistence:
    enabled: true
    size: 100Gi
    storageClass: "your-storage-class"

# Studio dashboard
studio:
  enabled: true
  environment:
    STUDIO_PG_META_URL: "http://supabase-meta:8080"

# Kong API Gateway
kong:
  enabled: true
  ingress:
    enabled: true
    className: "nginx"
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    hosts:
      - host: api.your-domain.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: supabase-tls
        hosts:
          - api.your-domain.com
```
Step 4: Generate Secure Secrets
Never use default secrets. Generate proper values:
```shell
# JWT secret (minimum 32 characters)
openssl rand -base64 32

# Database password
openssl rand -base64 24

# Anon and service_role keys
# Use the Supabase JWT generator or create them manually
```
For production, use Kubernetes external secrets or a vault solution to inject these rather than storing them in values files.
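If you'd rather not rely on an online generator, the anon and service_role keys can be minted locally: each is just an HS256 JWT signed with the same JWT secret you put in your values. A minimal sketch using only `openssl` (the secret and the claim set are placeholders - substitute your generated secret, and check the claims your chart version expects):

```shell
# Placeholder secret - replace with the output of `openssl rand -base64 32`
JWT_SECRET="your-super-secret-jwt-key-at-least-32-characters"

# Base64url-encode stdin (JWTs use the URL-safe alphabet without padding)
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

# Mint an HS256 JWT for the given role ("anon" or "service_role")
mint_jwt() {
  role="$1"
  now=$(date +%s)
  exp=$((now + 60 * 60 * 24 * 365 * 5))   # ~5 years; adjust to taste
  header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
  payload=$(printf '{"role":"%s","iss":"supabase","iat":%s,"exp":%s}' \
    "$role" "$now" "$exp" | b64url)
  signature=$(printf '%s.%s' "$header" "$payload" \
    | openssl dgst -sha256 -hmac "$JWT_SECRET" -binary | b64url)
  printf '%s.%s.%s\n' "$header" "$payload" "$signature"
}

ANON_KEY=$(mint_jwt anon)
SERVICE_KEY=$(mint_jwt service_role)
echo "$ANON_KEY"
echo "$SERVICE_KEY"
```

Feed the results into your secrets manager (or a Kubernetes Secret) rather than pasting them into the values file.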
Step 5: Deploy
```shell
helm install supabase supabase/supabase \
  --namespace supabase \
  --values values.yaml \
  --wait \
  --timeout 10m
```
The `--wait` flag makes Helm block until all pods are ready. The initial deployment can take 5-10 minutes while images are pulled and services start.
Step 6: Verify the Deployment
```shell
# Check pod status
kubectl get pods -n supabase

# Expected output (all should be Running)
# supabase-analytics-xxx   1/1   Running
# supabase-auth-xxx        1/1   Running
# supabase-db-xxx          1/1   Running
# supabase-functions-xxx   1/1   Running
# supabase-imgproxy-xxx    1/1   Running
# supabase-kong-xxx        1/1   Running
# supabase-meta-xxx        1/1   Running
# supabase-realtime-xxx    1/1   Running
# supabase-rest-xxx        1/1   Running
# supabase-storage-xxx     1/1   Running
# supabase-studio-xxx      1/1   Running
# supabase-vector-xxx      1/1   Running
```
If pods are stuck in CrashLoopBackOff, check logs:
```shell
kubectl logs -n supabase deployment/supabase-auth
```
Production Hardening
The basic deployment gets you running, but production requires additional work.
Use an External PostgreSQL Database
The bundled PostgreSQL is convenient but risky. For production, consider:
- Managed PostgreSQL: AWS RDS, GCP Cloud SQL, or Azure Database for PostgreSQL
- Kubernetes operators: CloudNativePG or Zalando Postgres Operator
- StackGres: Purpose-built PostgreSQL operator with enterprise features
To use an external database, disable the bundled one:
```yaml
db:
  enabled: false

global:
  database:
    host: "your-external-postgres-host"
    port: 5432
    name: "postgres"
    password: "external-db-password"
```
Ensure your external database has the required extensions: uuid-ossp, pgcrypto, pgjwt, pg_stat_statements, and others that Supabase depends on.
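As a sketch, you can prepare those extensions up front by generating the `CREATE EXTENSION` statements and piping them to `psql`. The connection string below is a placeholder, and availability varies by provider (some managed services don't ship every extension, so verify against your provider's supported list):

```shell
# Extensions named above; extend this list for your Supabase version
extensions="uuid-ossp pgcrypto pgjwt pg_stat_statements"

# Build one idempotent CREATE EXTENSION statement per entry
sql=""
for ext in $extensions; do
  sql="${sql}CREATE EXTENSION IF NOT EXISTS \"${ext}\";
"
done
printf '%s' "$sql"

# Apply against your external database (placeholder connection details):
# printf '%s' "$sql" | psql "host=your-external-postgres-host user=postgres dbname=postgres"
```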
Configure Resource Limits
The sample values above include basic resource requests. Monitor actual usage and adjust:
```shell
kubectl top pods -n supabase
```
Set Up Proper Ingress
The default Kong ingress configuration is minimal. For production:
```yaml
kong:
  ingress:
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "50m"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
      nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```
The timeout settings are critical for Realtime connections and large file uploads.
Enable Network Policies
Restrict pod-to-pod communication:
```yaml
networkPolicy:
  enabled: true # Only allow traffic between Supabase components
```
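If your chart version doesn't expose a network-policy toggle, you can apply an equivalent policy yourself. A minimal sketch (the namespace and policy name are assumptions - match the selectors to the labels your chart actually sets):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: supabase-internal-only
  namespace: supabase
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    # Allow traffic only from pods in the same namespace
    - from:
        - podSelector: {}
```

Note that a policy like this also blocks your ingress controller from reaching Kong, so you'll need an additional rule admitting traffic from the ingress controller's namespace.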
The Honest Trade-Offs
Kubernetes deployment of Supabase comes with significant considerations:
Complexity: You're managing 12 interdependent services. When something breaks, debugging requires understanding how these components interact. The community charts work, but they're not battle-tested at the same level as managed solutions.
Maintenance burden: Upgrades require careful coordination. You can't just run `helm upgrade` without testing, especially when database schema changes ship between Supabase versions.
Community support only: As noted in the official GitHub discussions, issues with the Kubernetes charts should go to the community repository, not Supabase directly.
Resource overhead: Running 12 pods with proper redundancy requires substantial cluster resources. Budget at minimum 8GB RAM and 4 cores for a single project.
When to Consider Alternatives
Kubernetes makes sense for teams with existing K8s expertise and infrastructure. If you're primarily looking to reduce self-hosting complexity, tools like Supascale provide a simpler path.
Supascale manages the underlying Docker deployment, handles automated backups, configures custom domains, and provides a UI for OAuth provider setup - all without requiring Kubernetes knowledge.
For teams that need Kubernetes specifically (existing infrastructure, compliance requirements, or multi-cluster deployment), the community Helm charts are the right choice. Just go in with realistic expectations about the operational commitment.
Troubleshooting Common Issues
Auth service won't start: Usually a JWT secret mismatch. Verify the secret is consistent across all services.
Studio can't connect to database: Check the meta service logs and ensure network policies allow the connection.
Realtime connections dropping: Ingress timeout settings are likely too low. WebSocket connections need extended timeouts.
Storage uploads failing: Check persistent volume permissions, and make sure the storage class supports the access mode you need - a single storage replica works with ReadWriteOnce, but scaling to multiple replicas requires ReadWriteMany.
Conclusion
Deploying Supabase on Kubernetes is absolutely possible, and for the right teams, it's the correct choice. The community Helm charts provide a solid foundation, but production deployment requires careful configuration, proper secrets management, and ongoing operational attention.
If Kubernetes is already your platform of choice, this guide should get you started. If you're evaluating self-hosting options more broadly, compare the operational overhead against simpler deployment methods.
Ready to simplify your self-hosted Supabase management? Explore Supascale's features or check out our pricing for a one-time purchase that includes unlimited projects.
