When you first deploy self-hosted Supabase, files uploaded through Supabase Storage land on your server's local filesystem. This works fine for development, but production deployments need something more robust. Local storage creates backup complications, doesn't scale horizontally, and ties your data to a single server.
The solution is connecting Supabase Storage to an S3-compatible backend. This guide walks through configuring AWS S3, MinIO, Cloudflare R2, and Backblaze B2—covering the environment variables, Docker configuration, and troubleshooting steps you'll encounter along the way.
Why Switch from Local to S3 Storage?
The default file-based storage has several limitations that become painful in production:
Backup complexity: While pg_dump handles your database, local file storage requires separate backup procedures. Miss this, and you've got a database full of references to files that no longer exist.
Horizontal scaling: If you ever need to run multiple Supabase instances or migrate to a different server, local storage becomes a bottleneck. S3-compatible backends decouple your storage from your compute.
Durability guarantees: Cloud object storage providers offer built-in redundancy that local disk can't match. AWS S3 promises 99.999999999% durability. Your single VPS disk doesn't.
Cost efficiency: Dedicated object storage providers often cost less per GB than equivalent VPS disk space, especially at scale.
Understanding the Storage Architecture
Before diving into configuration, it helps to understand how Supabase Storage works. The storage service uses PostgreSQL as its metadata store—tracking file names, bucket configurations, and access policies. The actual file bytes go to whatever backend you configure.
This architecture means you can:
- Keep using Row Level Security policies for access control
- Query file metadata through the standard Supabase client
- Switch backends without changing application code
The storage service exposes two APIs: a REST API for standard operations and an S3-compatible endpoint that works with tools like rclone or the AWS CLI.
Choosing Your S3-Compatible Backend
Several options exist, each with different trade-offs:
AWS S3: The original. Reliable, well-documented, but charges for egress bandwidth. Good choice if you're already in the AWS ecosystem or need enterprise-grade SLAs.
Cloudflare R2: Zero egress fees make this attractive for applications with heavy download traffic. S3-compatible API with some quirks (more on that later).
MinIO: Self-hosted object storage you can run alongside Supabase. Adds operational complexity but keeps everything under your control.
Backblaze B2: Budget-friendly at $0.006/GB/month with free egress up to 3x your storage volume. S3-compatible since 2020.
For most self-hosted deployments, I'd recommend Cloudflare R2 for public-facing applications (zero egress costs) or MinIO if you need everything on-premises.
Configuration Overview
Regardless of which backend you choose, the configuration follows the same pattern. You'll set environment variables in your .env file to tell the storage service where to put files.
The key variables are:
# Backend type: 's3' or 'file'
STORAGE_BACKEND=s3

# Your bucket name
GLOBAL_S3_BUCKET=your-bucket-name

# Provider endpoint (varies by provider)
GLOBAL_S3_ENDPOINT=https://s3.amazonaws.com

# Region for signature calculation
REGION=us-east-1

# Credentials
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key

# Optional: folder prefix within the bucket
TENANT_ID=supabase
The TENANT_ID creates a root folder in your bucket for all Supabase uploads. Useful if you're sharing a bucket with other services.
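Before restarting the stack, it can be worth a quick sanity check that none of these variables is missing from the file. Here's a minimal sketch (the variable names match the block above; `check_storage_env` is a hypothetical helper, and when no `ENV_FILE` is supplied the script builds a sample file so it runs standalone):

```shell
#!/bin/sh
# Pre-flight check: confirm the storage variables are present in a .env
# file before restarting the stack. Point ENV_FILE at your real file.

check_storage_env() {
  missing=0
  for var in STORAGE_BACKEND GLOBAL_S3_BUCKET GLOBAL_S3_ENDPOINT REGION \
             AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
    # Flag any variable that never appears at the start of a line.
    grep -q "^${var}=" "$1" || { echo "missing: $var"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "storage config looks complete"
}

if [ -n "${ENV_FILE:-}" ]; then
  file="$ENV_FILE"
else
  # No file supplied: generate a sample matching the block above.
  file=$(mktemp)
  cat > "$file" <<'EOF'
STORAGE_BACKEND=s3
GLOBAL_S3_BUCKET=your-bucket-name
GLOBAL_S3_ENDPOINT=https://s3.amazonaws.com
REGION=us-east-1
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
EOF
fi

check_storage_env "$file"
```

Run with `ENV_FILE=.env` against your real file and it flags anything you forgot to set.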
Configuring AWS S3
For AWS S3, you'll need to:
- Create an S3 bucket in your preferred region
- Create an IAM user with programmatic access
- Attach a policy granting bucket access
- Configure the environment variables
Here's a minimal IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
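If you prefer the CLI to the console, the user-and-policy setup can be scripted. This is a hedged sketch, not the only way to do it: the user and policy names are placeholders, the policy is assumed to be saved locally as policy.json, and the commands print as a dry run unless RUN_AWS=1 is set, since executing them requires real IAM credentials.

```shell
#!/bin/sh
# Dry-run wrapper: print each AWS CLI call instead of executing it,
# unless RUN_AWS=1 is set in the environment.
run() {
  if [ "${RUN_AWS:-0}" = "1" ]; then "$@"; else echo "dry-run: $*"; fi
}

# Create a dedicated IAM user for the storage service (placeholder name).
run aws iam create-user --user-name supabase-storage

# Attach the minimal policy from above, saved locally as policy.json.
run aws iam put-user-policy \
  --user-name supabase-storage \
  --policy-name supabase-storage-access \
  --policy-document file://policy.json

# Generate the access key pair that goes into your .env.
run aws iam create-access-key --user-name supabase-storage
```

The output of `create-access-key` contains the AccessKeyId and SecretAccessKey you'll use below; it's shown only once, so save it immediately.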
Your .env configuration:
STORAGE_BACKEND=s3
GLOBAL_S3_BUCKET=your-bucket-name
GLOBAL_S3_ENDPOINT=https://s3.us-east-1.amazonaws.com
REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Make sure the endpoint URL includes your region. Using the wrong region causes signature mismatches.
Configuring Cloudflare R2
R2's zero egress fees make it particularly attractive. Configuration is similar to S3 with a few differences.
First, create an R2 bucket in your Cloudflare dashboard and generate an API token with R2 read/write permissions.
STORAGE_BACKEND=s3
GLOBAL_S3_BUCKET=your-r2-bucket
GLOBAL_S3_ENDPOINT=https://your-account-id.r2.cloudflarestorage.com
REGION=auto
AWS_ACCESS_KEY_ID=your-r2-access-key
AWS_SECRET_ACCESS_KEY=your-r2-secret-key
Important R2 quirk: R2 doesn't support S3's object tagging feature. If resumable uploads fail with HTTP 500 errors mentioning x-amz-tagging, add this to your storage service environment:
storage:
  environment:
    TUS_ALLOW_S3_TAGS: "false"
This disables tagging for TUS (resumable) uploads, which R2 can't handle.
Configuring MinIO
MinIO gives you S3-compatible storage on your own infrastructure. This adds another service to manage but keeps everything local.
Add MinIO to your docker-compose.yml:
services:
  minio:
    image: minio/minio
    container_name: supabase-minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin123
    command: server /data --console-address ":9001"
    volumes:
      - minio_data:/data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

volumes:
  minio_data:
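If you'd rather script bucket creation than click through the console, a one-shot init container using the minio/mc client image can do it at startup. A sketch, assuming the same root credentials and a bucket named supabase-storage (the createbuckets service name is arbitrary):

```yaml
services:
  createbuckets:
    image: minio/mc
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      mc alias set local http://minio:9000 minioadmin minioadmin123 &&
      mc mb --ignore-existing local/supabase-storage
      "
```

The `--ignore-existing` flag makes the container safe to re-run on every `docker compose up`.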
After starting MinIO, access the console at http://your-server:9001 to create a bucket. Then configure Supabase:
STORAGE_BACKEND=s3
GLOBAL_S3_BUCKET=supabase-storage
GLOBAL_S3_ENDPOINT=http://minio:9000
REGION=us-east-1
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin123
GLOBAL_S3_FORCE_PATH_STYLE=true
Note the GLOBAL_S3_FORCE_PATH_STYLE=true: MinIO expects path-style URLs, where the bucket name is the first path segment, rather than the virtual-hosted-style URLs (bucket name in the hostname) that S3 defaults to.
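To make the difference concrete, here's what the two URL shapes look like for the same object (example hostnames, not live endpoints):

```shell
#!/bin/sh
BUCKET=supabase-storage
OBJECT=avatars/profile.png

# Virtual-hosted style (the S3 default): the bucket is part of the hostname.
echo "https://${BUCKET}.s3.us-east-1.amazonaws.com/${OBJECT}"

# Path style (what MinIO expects): the bucket is the first path segment.
echo "http://minio:9000/${BUCKET}/${OBJECT}"
```

Without the flag, the storage service would try to resolve a hostname like supabase-storage.minio, which doesn't exist in your Docker network.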
Configuring Backblaze B2
Backblaze B2 offers the lowest storage costs. Create a bucket and application key in your Backblaze dashboard, then configure:
STORAGE_BACKEND=s3
GLOBAL_S3_BUCKET=your-b2-bucket
GLOBAL_S3_ENDPOINT=https://s3.us-west-000.backblazeb2.com
REGION=us-west-000
AWS_ACCESS_KEY_ID=your-key-id
AWS_SECRET_ACCESS_KEY=your-application-key
The region and endpoint vary based on which Backblaze datacenter you selected. Check your bucket details for the correct values.
Enabling the S3 Protocol Endpoint
Supabase can expose an S3-compatible endpoint at /storage/v1/s3, letting you use standard S3 tools directly. This works regardless of your storage backend—you can even use it with local file storage.
Add these variables to enable it:
S3_PROTOCOL_ACCESS_KEY_ID=your-s3-protocol-key
S3_PROTOCOL_ACCESS_KEY_SECRET=your-s3-protocol-secret
Generate secure secrets:
openssl rand -hex 32
Then test with the AWS CLI:
aws s3 ls s3://your-bucket/ \
  --endpoint-url https://your-domain/storage/v1/s3 \
  --region your-region
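rclone works against the same endpoint. A sketch of the remote definition for ~/.config/rclone/rclone.conf (the remote name supabase is arbitrary, and the endpoint, keys, and region are the placeholders from above):

```ini
[supabase]
type = s3
provider = Other
endpoint = https://your-domain/storage/v1/s3
access_key_id = your-s3-protocol-key
secret_access_key = your-s3-protocol-secret
region = your-region
```

With that in place, `rclone ls supabase:your-bucket` should list the same objects the AWS CLI sees.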
Verifying Your Configuration
After restarting your Supabase services, verify the storage backend works:
- Open Supabase Studio and navigate to Storage
- Create a test bucket
- Upload a small file
- Check that the file appears in your S3 backend
For programmatic verification:
# List bucket contents via AWS CLI
aws s3 ls s3://your-bucket/stub/ \
  --endpoint-url https://your-s3-endpoint \
  --region your-region
If you're using Supascale, the storage configuration is handled through the dashboard, eliminating manual .env editing.
Troubleshooting Common Issues
SignatureDoesNotMatch errors: The credentials or region don't match between your Supabase config and S3 provider. Double-check REGION, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY.
Connection refused to MinIO: If using Docker, make sure you're referencing the container name (minio) not localhost in GLOBAL_S3_ENDPOINT.
Files upload but downloads fail: Check CORS configuration on your bucket. For public buckets, you may need to configure allowed origins.
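For AWS S3, CORS rules are a JSON document you can apply in the console or with `aws s3api put-bucket-cors`. A minimal sketch allowing browser downloads from a single origin (the origin is a placeholder; widen methods if your app uploads directly from the browser):

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://app.example.com"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
```

Save it as cors.json and apply with `aws s3api put-bucket-cors --bucket your-bucket-name --cors-configuration file://cors.json`. Other providers expose equivalent rules through their own dashboards or CLIs.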
Resumable uploads fail at 6MB: This usually indicates network issues between Supabase and your S3 backend, or missing TUS configuration for providers that don't support all S3 features.
Migrating Existing Files
If you've been running with local storage and want to migrate to S3, you'll need to move existing files. The safest approach:
- Configure the new S3 backend
- Use rclone to sync local storage to S3
- Update the environment variables
- Restart services
rclone sync /path/to/supabase/storage s3:your-bucket/stub/ \
  --s3-provider Other \
  --s3-endpoint your-endpoint
The metadata stays in PostgreSQL, so as long as file paths match, everything should work.
Next Steps
With S3 storage configured, your self-hosted Supabase deployment becomes more resilient and scalable. Consider these follow-up improvements:
- Set up automated backups that now only need to handle database exports
- Configure custom domains for cleaner storage URLs
- Review storage backup strategies for your chosen backend
For teams who'd rather skip the manual configuration, Supascale handles storage backend setup through a visual interface, along with the rest of your self-hosted Supabase infrastructure.
