Logs are the lifeline of any production system. When something breaks at 3 AM, your logs are what stand between a quick fix and hours of frantic debugging. Self-hosted Supabase gives you full control over your logging infrastructure, but that control comes with complexity. Unlike Supabase Cloud where logs just work, self-hosting means understanding and configuring Logflare, Vector, and their intricate relationship with your database and services.
This guide covers everything you need to know about log management for self-hosted Supabase—from understanding the architecture to production-ready configurations that will keep you sane when things go wrong.
Understanding Supabase's Logging Architecture
Self-hosted Supabase uses a two-component logging system: Vector handles log collection and routing, while Logflare provides storage, querying, and the analytics UI you see in Studio.
Vector is a high-performance observability data pipeline that captures stdout/stderr from all your Supabase containers. It transforms these logs into a standardized format and routes them to Logflare. Think of Vector as the "pipes" that move your logs, and Logflare as the "database" that stores and queries them.
Here's the flow:
Container logs → Vector (collection/transform) → Logflare (storage/query) → Studio UI
Both services are defined in your docker-compose.yml and are required for the full Supabase Studio logging experience. If you've ever wondered why your Studio logs tab shows nothing, it's almost always a Vector or Logflare configuration issue.
Essential Environment Variables
Before diving into configuration, you need to understand the key environment variables that control logging. Many self-hosters miss these and end up with broken or incomplete logs.
Logflare Configuration
# Required: Enable single-tenant mode for self-hosting
LOGFLARE_SINGLE_TENANT=true

# Required: Seeds Supabase-specific sources and schemas
LOGFLARE_SUPABASE_MODE=true

# Database connection for log storage
LOGFLARE_POSTGRES_BACKEND_URL=postgresql://supabase_admin:your_password@db:5432/supabase

# API key for Vector authentication
LOGFLARE_API_KEY=your-api-key
Vector Configuration
# Must match LOGFLARE_API_KEY
LOGFLARE_PUBLIC_ACCESS_TOKEN=your-api-key

# Logflare endpoint
LOGFLARE_URL=http://analytics:4000
The critical detail here is that LOGFLARE_PUBLIC_ACCESS_TOKEN and LOGFLARE_API_KEY must match. Vector uses this token to authenticate its log ingestion requests. Get this wrong and your logs silently disappear into the void.
For production deployments, never use the example values from Supabase's .env.example file. Generate unique secrets for each variable. Check the environment variables guide for a complete reference.
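One way to generate those unique secrets is with `openssl` (the variable names match this guide's configuration; remember the two keys must share one value):

```shell
# Generate one strong secret and reuse the same value for the
# matching pair LOGFLARE_API_KEY / LOGFLARE_PUBLIC_ACCESS_TOKEN
LOGFLARE_API_KEY="$(openssl rand -hex 32)"
echo "LOGFLARE_API_KEY=${LOGFLARE_API_KEY}"
echo "LOGFLARE_PUBLIC_ACCESS_TOKEN=${LOGFLARE_API_KEY}"
```

Paste both lines into your `.env` so Vector and Logflare agree from the start.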
Vector Pipeline Configuration
Vector's configuration lives in volumes/logs/vector.yml. The default setup works, but understanding it helps when you need to customize or troubleshoot.
The pipeline has three main stages:
1. Source: Capturing Docker Logs
sources:
docker_logs:
type: docker_logs
This captures all container stdout/stderr. Simple but effective.
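If other, non-Supabase containers run on the same host, you can scope what Vector collects. The `include_containers` option is part of the `docker_logs` source; the name prefixes below are illustrative and should match your own container naming:

```yaml
sources:
  docker_logs:
    type: docker_logs
    # Only collect from containers whose names match these prefixes
    include_containers:
      - supabase-
      - realtime-
```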
2. Transform: Structuring Log Data
transforms:
project_logs:
type: remap
inputs:
- docker_logs
source: |-
.project = "default"
.event_message = .message
.appname = .container_name
The transform stage adds metadata and standardizes the format. The .appname field is crucial—it's used for routing logs to the correct Logflare source.
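After this transform, an event arriving at the router might look roughly like this (field values are illustrative):

```json
{
  "project": "default",
  "event_message": "GET /rest/v1/todos HTTP/1.1 200",
  "appname": "supabase-rest",
  "container_name": "supabase-rest",
  "timestamp": "2024-01-01T00:00:00Z"
}
```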
3. Routing and Sinks
transforms:
router:
type: route
inputs:
- project_logs
route:
kong: '.appname == "supabase-kong"'
auth: '.appname == "supabase-auth"'
rest: '.appname == "supabase-rest"'
# ... other routes
sinks:
logflare_auth:
type: http
inputs:
- router.auth
uri: http://analytics:4000/api/logs?source_name=gotrue.logs.prod
method: post
auth:
strategy: bearer
token: ${LOGFLARE_API_KEY}
Each Supabase service gets its own route and sink. Logs flow to specific Logflare "sources" based on container name. This is how Studio knows to show Auth logs under the Auth tab and PostgREST logs under API.
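A useful addition while debugging routing: Vector's route transform exposes an `_unmatched` output for events that match none of the routes. Dumping those to the console makes it obvious when a container name falls through the cracks (the sink name here is arbitrary):

```yaml
sinks:
  debug_unmatched:
    type: console
    inputs:
      - router._unmatched
    encoding:
      codec: json
```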
Choosing Your Storage Backend
Logflare supports two storage backends: Postgres and BigQuery. This choice significantly impacts your logging experience.
Postgres Backend (Default for Self-Hosted)
Pros:
- No external dependencies
- Everything stays on your infrastructure
- Simple to set up
Cons:
- Not optimized for high-volume log ingestion
- Query performance degrades with scale
- Can impact your main database if sharing resources
The Postgres backend works well for development and small-to-medium production workloads. If you're running a few projects with moderate traffic, it's fine.
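If you stay on Postgres, keep an eye on how much disk the log table consumes. This query assumes the `log_events` table name used elsewhere in this guide; check your actual schema, as Logflare may use per-source table names:

```sql
-- Rough size of the Logflare log table, including indexes and TOAST
SELECT pg_size_pretty(pg_total_relation_size('log_events'));
```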
BigQuery Backend (Production Recommended)
Supabase Platform uses BigQuery for a reason—it's designed for exactly this use case. BigQuery handles massive log volumes efficiently and provides fast analytical queries.
LOGFLARE_BACKEND=bigquery
GOOGLE_PROJECT_ID=your-gcp-project
GOOGLE_PROJECT_NUMBER=123456789
GOOGLE_DATASET_ID_APPEND=_prod
The tradeoff: you're adding Google Cloud as a dependency. For teams already using GCP or those with high log volumes, it's worth it. For simpler setups prioritizing independence, stick with Postgres and accept the limitations.
Production Logging Best Practices
After helping teams configure self-hosted Supabase through Supascale, I've seen common patterns that separate smooth operations from logging nightmares.
1. Set Appropriate Log Retention
Logs grow fast. Configure retention policies before your disk fills up:
-- In Logflare's Postgres backend
ALTER TABLE log_events SET (autovacuum_vacuum_threshold = 1000);
-- Consider partitioning by date for easier cleanup
For BigQuery, set up table expiration policies in GCP console.
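BigQuery expiration can also be set with DDL instead of the console; the project, dataset, and table names below are placeholders for whatever Logflare created in your GCP project:

```sql
-- Expire partitions after 30 days (names are illustrative)
ALTER TABLE `your-gcp-project.supabase_prod.log_events`
SET OPTIONS (partition_expiration_days = 30);
```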
2. Filter Noisy Logs at the Source
Health checks can flood your logs. Filter them in Vector:
transforms:
filter_health:
type: filter
inputs:
- docker_logs
condition: '!contains(string!(.message), "/health")'
3. Add Custom Metadata
Enhance logs with deployment-specific context:
transforms:
enrich:
type: remap
inputs:
- docker_logs
source: |-
.environment = "production"
.deployment_id = "prod-us-east"
This becomes invaluable when you're running multiple Supabase instances.
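Vector interpolates environment variables in its config file, so deployment context can come from the environment instead of being hard-coded. A sketch of the same transform, where `ENVIRONMENT` and `DEPLOYMENT_ID` are variables you would define yourself:

```yaml
transforms:
  enrich:
    type: remap
    inputs:
      - docker_logs
    source: |-
      .environment = "${ENVIRONMENT:-production}"
      .deployment_id = "${DEPLOYMENT_ID:-unknown}"
```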
4. Set Up Log Alerts
Logs without alerts are just data. Configure Vector to also send critical logs to your alerting system:
transforms:
  error_filter:
    type: filter
    inputs:
      - router.auth
    condition: 'contains(string!(.message), "error")'

sinks:
  slack_errors:
    type: http
    inputs:
      - error_filter
    uri: https://hooks.slack.com/services/YOUR/WEBHOOK
    method: post
    encoding:
      codec: json
    request:
      headers:
        Content-Type: application/json

The matching happens in a dedicated filter transform because Vector's http sink has no condition option of its own.
Common Logging Issues and Solutions
Logs Not Appearing in Studio
Symptom: Studio's Logs tab shows nothing or only partial logs.
Diagnosis:
- Check if Vector is running: `docker ps | grep vector`
- Verify Logflare is healthy: `curl http://localhost:4000/health`
- Check Vector logs: `docker logs supabase-vector`

Common fixes:

- Ensure `LOGFLARE_PUBLIC_ACCESS_TOKEN` matches `LOGFLARE_API_KEY`
- Verify network connectivity between Vector and Logflare
- Check that Vector's `volumes/logs/vector.yml` is mounted correctly
High Memory Usage from Logflare
Symptom: Logflare container consuming excessive memory.
Solution: The Postgres backend buffers logs before batch writing. Under high load, this buffer grows. Options:
- Increase container memory limits
- Reduce log verbosity at the service level
- Switch to BigQuery backend for high-volume deployments
Missing Logs for Specific Services
Symptom: Auth logs work but Storage logs are missing.
Cause: The routing configuration in Vector may not match your container names.
Fix: Check your container names with docker ps and verify they match the routes in vector.yml. Custom container name prefixes (from modified compose files) are a common culprit.
External Log Shipping
For teams with existing observability infrastructure, you might want logs flowing to systems like Datadog, Elastic, or Loki. Vector makes this straightforward—it can ship to multiple destinations simultaneously.
Shipping to Loki
sinks:
loki:
type: loki
inputs:
- project_logs
endpoint: http://loki:3100
labels:
source: supabase
environment: production
Shipping to Datadog
sinks:
datadog:
type: datadog_logs
inputs:
- project_logs
default_api_key: ${DD_API_KEY}
site: datadoghq.com
This doesn't replace Logflare (Studio still needs it), but gives you logs in your preferred analysis tools. The monitoring guide covers integrating these with your broader observability setup.
Security Considerations
Logs often contain sensitive information—user IDs, IP addresses, sometimes even passwords if developers make mistakes. Protect your logging infrastructure:
Restrict Logflare Dashboard Access
The Logflare dashboard (at /dashboard) has no built-in authentication in self-hosted mode. Never expose it to the internet. Options:
- Keep it behind your VPN
- Use your reverse proxy for authentication
- Disable the route entirely if you only use Studio for log viewing
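With nginx in front, the reverse-proxy option can be a minimal sketch like this (the upstream name matches this guide's compose service; the htpasswd path is an assumption about your setup):

```nginx
# Require basic auth for the Logflare dashboard only
location /dashboard {
    auth_basic "Logflare";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://analytics:4000;
}
```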
Sanitize Sensitive Data
Configure Vector to redact sensitive fields:
transforms:
sanitize:
type: remap
inputs:
- docker_logs
source: |-
.message = redact(string!(.message), filters: [r'password=\S+'])
Audit Log Access
For compliance requirements, treat your logging infrastructure with the same security rigor as your database. The security hardening guide covers broader security practices.
How Supascale Simplifies Log Management
Configuring Logflare and Vector correctly is one of those tasks that's easy to get wrong. Supascale handles the logging setup automatically when you deploy a project—correct environment variables, proper routing, and working Studio integration out of the box.
For teams that want to focus on building rather than infrastructure maintenance, Supascale's one-click deployment removes the logging configuration burden while still giving you full access to customize when needed. You get working logs immediately, with the flexibility to modify Vector configuration for advanced use cases.
Wrapping Up
Effective log management for self-hosted Supabase requires understanding the Vector-Logflare pipeline, choosing the right storage backend for your scale, and following production best practices. The key points:
- Vector collects and routes logs; Logflare stores and queries them
- Match your API keys between Vector and Logflare
- Use Postgres backend for simplicity, BigQuery for scale
- Filter noisy logs and add contextual metadata
- Protect your logging infrastructure—it contains sensitive data
Logs are often an afterthought until they're desperately needed. Taking time to set them up properly pays dividends when you're debugging production issues at odd hours.
Further Reading
- Monitoring Self-Hosted Supabase - Complete observability setup
- Environment Variables Guide - All configuration options
- Security Hardening Guide - Production security practices
- Supabase Official Logging Docs - Reference documentation
