If you've deployed self-hosted Supabase, you've probably already configured Row Level Security (RLS) for your database tables. But here's what catches many developers off guard: Storage has its own separate RLS system. Your carefully crafted database policies don't automatically protect your files.
By default, Supabase Storage blocks all uploads unless you explicitly create RLS policies on the storage.objects table. This is actually a secure default, but it means you need to understand how Storage RLS works to build functional file upload features.
This guide covers everything you need to secure file storage in your self-hosted Supabase deployment—from basic policies to advanced patterns for multi-tenant applications.
How Storage RLS Differs from Database RLS
Supabase Storage uses PostgreSQL to store file metadata in a dedicated storage schema. The actual files live in your configured backend (local filesystem or S3-compatible storage), but all access control happens through RLS policies on two tables:
- storage.buckets - Controls who can create, list, or modify buckets
- storage.objects - Controls who can upload, download, update, or delete files
The key difference from regular table RLS: Storage policies must account for folder paths, file extensions, and the relationship between users and files—not just row ownership.
Understanding Public vs Private Buckets
Before writing policies, understand the bucket types:
Private Buckets (default):
- All operations require RLS policy authorization
- Downloads require a signed URL or authenticated request
- Ideal for user documents, private uploads, internal files
Public Buckets:
- Anyone with the URL can download files
- Uploads, updates, and deletes still require RLS policies
- Ideal for profile avatars, public assets, marketing images
Setting a bucket to public only bypasses download authentication—you still need INSERT policies for uploads.
Essential RLS Policies for Common Scenarios
User-Owned File Storage
The most common pattern: users can only access their own files. This requires organizing files by user ID in the folder structure.
-- Allow users to upload files to their own folder
CREATE POLICY "Users can upload to own folder"
ON storage.objects
FOR INSERT
TO authenticated
WITH CHECK (
bucket_id = 'user-files' AND
(storage.foldername(name))[1] = auth.uid()::text
);

-- Allow users to view their own files
CREATE POLICY "Users can view own files"
ON storage.objects
FOR SELECT
TO authenticated
USING (
bucket_id = 'user-files' AND
(storage.foldername(name))[1] = auth.uid()::text
);

-- Allow users to delete their own files
CREATE POLICY "Users can delete own files"
ON storage.objects
FOR DELETE
TO authenticated
USING (
bucket_id = 'user-files' AND
(storage.foldername(name))[1] = auth.uid()::text
);
The storage.foldername() helper returns an array of the folder path segments in an object's name. By checking [1] (the first folder), you enforce that files live under a user-specific directory: an object named {user_id}/document.pdf in the user-files bucket.
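On the client side, uploads only pass these policies if the object path starts with the user's ID. A minimal sketch of the path convention (the userFilePath and ownsPath helpers are illustrative, not part of supabase-js):

```javascript
// Files must live at <user_id>/<filename> -- the first path segment is what
// (storage.foldername(name))[1] sees on the server.
function userFilePath(userId, filename) {
  return `${userId}/${filename}`;
}

// Client-side sanity check mirroring the policy's ownership condition.
function ownsPath(userId, objectName) {
  return objectName.split('/')[0] === userId;
}

// Hypothetical usage with supabase-js:
// await supabase.storage
//   .from('user-files')
//   .upload(userFilePath(user.id, 'document.pdf'), file);
```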
Public Avatar Uploads
For public profile pictures where anyone can view but only owners can modify:
-- Anyone can view avatars
CREATE POLICY "Public avatar access"
ON storage.objects
FOR SELECT
TO public
USING (bucket_id = 'avatars');

-- Users can upload their own avatar
CREATE POLICY "Users upload own avatar"
ON storage.objects
FOR INSERT
TO authenticated
WITH CHECK (
bucket_id = 'avatars' AND
name = auth.uid()::text || '/' || storage.filename(name)
);

-- Users can update their own avatar
CREATE POLICY "Users update own avatar"
ON storage.objects
FOR UPDATE
TO authenticated
USING (
bucket_id = 'avatars' AND
(storage.foldername(name))[1] = auth.uid()::text
);
Team-Based File Access
For applications with team workspaces, you'll need to join against your team membership table:
-- Team members can view team files
CREATE POLICY "Team members view files"
ON storage.objects
FOR SELECT
TO authenticated
USING (
bucket_id = 'team-files' AND
EXISTS (
SELECT 1 FROM team_members
WHERE team_members.team_id = (storage.foldername(name))[1]::uuid
AND team_members.user_id = auth.uid()
)
);
-- Team members can upload to team folders
CREATE POLICY "Team members upload files"
ON storage.objects
FOR INSERT
TO authenticated
WITH CHECK (
bucket_id = 'team-files' AND
EXISTS (
SELECT 1 FROM team_members
WHERE team_members.team_id = (storage.foldername(name))[1]::uuid
AND team_members.user_id = auth.uid()
)
);
This pattern is essential for multi-tenant applications where data isolation between organizations is critical.
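One detail worth noting: the ::uuid cast in these policies errors on a malformed first path segment, so such an upload fails regardless of team membership. A small client-side sketch of the expected path shape (helper names are illustrative):

```javascript
// Team files are expected at <team_id>/<filename>.
function teamFilePath(teamId, filename) {
  return `${teamId}/${filename}`;
}

// Pre-flight check: the policy casts the first segment to uuid, so a
// malformed prefix makes the request fail before membership is considered.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function hasValidTeamPrefix(objectName) {
  return UUID_RE.test(objectName.split('/')[0]);
}
```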
Storage Helper Functions Reference
Supabase provides three helper functions for writing Storage RLS policies:
| Function | Purpose | Example |
|---|---|---|
| storage.filename(name) | Returns the file name | avatar.png |
| storage.foldername(name) | Returns array of folders | ['users', 'abc123'] |
| storage.extension(name) | Returns file extension | png |
Use these to enforce file type restrictions, organize by user/team, or implement custom folder structures.
Restricting File Types
-- Only allow image uploads
CREATE POLICY "Images only"
ON storage.objects
FOR INSERT
TO authenticated
WITH CHECK (
bucket_id = 'images' AND
storage.extension(name) IN ('jpg', 'jpeg', 'png', 'gif', 'webp')
);
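The same allow-list can be mirrored client-side for friendlier error messages before RLS rejects the upload; the server policy stays authoritative. A sketch:

```javascript
// Keep in sync with the SQL policy's allow-list.
const ALLOWED_EXTENSIONS = ['jpg', 'jpeg', 'png', 'gif', 'webp'];

function hasAllowedExtension(objectName) {
  // Like storage.extension(), take everything after the last dot. Note the
  // SQL comparison is case-sensitive, so 'photo.PNG' would still be rejected
  // server-side; normalize file names before upload if that matters.
  const ext = objectName.split('.').pop();
  return ALLOWED_EXTENSIONS.includes(ext);
}
```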
Enforcing File Size Limits
Storage doesn't expose file size to RLS policies, but your self-hosted deployment can enforce a global limit through the FILE_SIZE_LIMIT environment variable on the Storage service.
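A client-side pre-check gives users immediate feedback instead of a failed request. A minimal sketch, assuming a 50 MB FILE_SIZE_LIMIT:

```javascript
// Keep in sync with FILE_SIZE_LIMIT (here 50 MB = 52428800 bytes). This is a
// UX convenience only; the Storage service enforces the real limit.
const MAX_UPLOAD_BYTES = 50 * 1024 * 1024;

function withinUploadLimit(fileSizeBytes) {
  return fileSizeBytes <= MAX_UPLOAD_BYTES;
}
```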
Self-Hosted Specific Considerations
When running self-hosted Supabase, Storage RLS works the same as the hosted platform, but there are some deployment-specific details to keep in mind.
S3 Backend Configuration
If you've configured S3-compatible storage for your self-hosted deployment, RLS policies still apply. The storage.objects table stores metadata regardless of where actual files are stored.
# docker-compose.yml storage service
STORAGE_BACKEND: s3
STORAGE_S3_BUCKET: your-bucket-name
STORAGE_S3_REGION: us-east-1
Service Key Access
Your self-hosted deployment has a service role key that bypasses all RLS policies. Use this for server-side operations where you need admin access:
// Server-side only - never expose in client code
const supabase = createClient(
process.env.SUPABASE_URL,
process.env.SUPABASE_SERVICE_ROLE_KEY
);
// This bypasses RLS
const { data } = await supabase.storage
.from('private-bucket')
.download('any/file/path.pdf');
For client-side operations, always use the anon key which respects RLS policies.
Testing Policies Locally
Before deploying policy changes to production, test them in your local environment:
# Start local Supabase
supabase start

# Apply migrations with your policies
supabase db push
Then test uploads with different user contexts using the Supabase client library.
Debugging Storage RLS Issues
When uploads fail with permission errors, here's how to diagnose:
Check Policy Existence
SELECT * FROM pg_policies WHERE schemaname = 'storage' AND tablename = 'objects';
Test Policy Logic
-- Simulate a user context (SET LOCAL only takes effect inside a transaction)
BEGIN;
SET LOCAL ROLE authenticated;
SET LOCAL request.jwt.claims = '{"sub": "user-uuid-here"}';

-- Test if a specific path would be visible under your SELECT policy
SELECT *
FROM storage.objects
WHERE bucket_id = 'user-files'
AND name = 'user-uuid-here/test.pdf';
ROLLBACK;
Common Issues
- Forgot INSERT policy: Uploads need INSERT permission on storage.objects
- Wrong folder structure: Your policy expects /user_id/file.ext but the client uploads to /file.ext
- Bucket doesn't exist: Create the bucket first via Studio or SQL
- Using UPDATE instead of INSERT: New uploads require INSERT; overwriting requires both SELECT and UPDATE
Advanced Patterns
Hierarchical Folder Permissions
Supabase Storage treats "folders" as key prefixes—there's no actual folder hierarchy. For inherited permissions (like granting access to all subfolders), you need custom logic:
CREATE POLICY "Access folder and subfolders"
ON storage.objects
FOR SELECT
TO authenticated
USING (
bucket_id = 'documents' AND
EXISTS (
SELECT 1 FROM folder_permissions
WHERE folder_permissions.user_id = auth.uid()
AND storage.objects.name LIKE folder_permissions.folder_path || '%'
)
);
Using EXISTS here, rather than comparing against a scalar subquery, keeps the policy valid when a user has grants on more than one folder (a scalar subquery would error with "more than one row returned").
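The same prefix semantics can be mirrored in client code, for example when deciding which entries to render in a file browser. A sketch (grantedPrefixes would come from your folder_permissions table):

```javascript
// Mirror of the SQL prefix match (folder_path || '%'): a grant on
// 'reports/2024/' covers every object name starting with that prefix.
function canAccess(objectName, grantedPrefixes) {
  return grantedPrefixes.some((prefix) => objectName.startsWith(prefix));
}
```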
Temporary Access with Expiring Tokens
For sharing files temporarily without making buckets public:
// Generate a signed URL that expires
const { data } = await supabase.storage
.from('private-files')
.createSignedUrl('path/to/file.pdf', 3600); // 1 hour expiry
This works with private buckets and doesn't require modifying RLS policies.
Putting It All Together
A production Storage setup typically involves multiple buckets with different security models:
-- Create buckets
INSERT INTO storage.buckets (id, name, public)
VALUES
('avatars', 'avatars', true),
('user-documents', 'user-documents', false),
('team-files', 'team-files', false);
-- Public avatars - anyone views, owner uploads
CREATE POLICY "Avatar public read" ON storage.objects
FOR SELECT TO public USING (bucket_id = 'avatars');
CREATE POLICY "Avatar owner write" ON storage.objects
FOR INSERT TO authenticated
WITH CHECK (bucket_id = 'avatars' AND (storage.foldername(name))[1] = auth.uid()::text);
-- Private user documents
CREATE POLICY "User docs owner only" ON storage.objects
FOR ALL TO authenticated
USING (bucket_id = 'user-documents' AND (storage.foldername(name))[1] = auth.uid()::text)
WITH CHECK (bucket_id = 'user-documents' AND (storage.foldername(name))[1] = auth.uid()::text);
-- Team files with membership check
CREATE POLICY "Team files member access" ON storage.objects
FOR ALL TO authenticated
USING (
bucket_id = 'team-files' AND
EXISTS (SELECT 1 FROM team_members WHERE team_id = (storage.foldername(name))[1]::uuid AND user_id = auth.uid())
)
WITH CHECK (
bucket_id = 'team-files' AND
EXISTS (SELECT 1 FROM team_members WHERE team_id = (storage.foldername(name))[1]::uuid AND user_id = auth.uid())
);
Next Steps
Storage RLS is one piece of the broader security hardening process for self-hosted Supabase. Once your file access is locked down, consider:
- Setting up automated storage backups to protect uploaded files
- Configuring custom domains with SSL for secure file URLs
- Reviewing your database RLS policies for consistency
If managing all these security configurations sounds complex, Supascale provides a UI for managing self-hosted Supabase deployments including storage configuration, making it easier to maintain secure file storage without diving into Docker and SQL every time. Check out the pricing for details on what's included.
