Running your own self-hosted Supabase means you have complete control over your PostgreSQL database—including the ability to create triggers, stored procedures, and event-driven automation that would otherwise require complex application code. Moving logic to the database layer reduces network round trips, enforces consistency across all access points, and keeps your application code focused on what matters.
This guide covers everything you need to implement triggers and stored procedures in your self-hosted Supabase deployment, from basic concepts to production-ready patterns.
Why Database Logic Matters for Self-Hosted Deployments
When you self-host Supabase, database functions become even more valuable. Unlike cloud deployments where you might worry about compute limits, your self-hosted PostgreSQL can execute complex logic without additional billing considerations.
Benefits of database-side logic:
- Performance: Logic runs where data lives—no network round trips between your application and database
- Security: Functions can operate with elevated permissions using SECURITY DEFINER, while your application maintains limited access
- Consistency: Business rules execute regardless of how data is accessed—through PostgREST, direct connections, or migrations
- Atomicity: Operations within a function run in a single transaction
The trade-off? Database logic requires PostgreSQL expertise and can be harder to test than application code. For critical business rules and data integrity, though, triggers are often the right choice.
Understanding Postgres Triggers
A trigger automatically executes a function in response to table events: INSERT, UPDATE, DELETE, or TRUNCATE. Triggers can fire BEFORE or AFTER the event, and operate on each ROW or once per STATEMENT.
Basic Trigger Anatomy
Every trigger requires two components:
- Trigger function: A PL/pgSQL function returning TRIGGER
- Trigger definition: Binds the function to a table and event
-- Step 1: Create the trigger function
CREATE OR REPLACE FUNCTION update_modified_timestamp()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Step 2: Attach the trigger to a table
CREATE TRIGGER set_updated_at
BEFORE UPDATE ON profiles
FOR EACH ROW
EXECUTE FUNCTION update_modified_timestamp();
This pattern—automatically updating a timestamp—is so common that you'll find it in nearly every production database.
Row-Level vs Statement-Level Triggers
Row-level triggers (FOR EACH ROW) execute once per affected row. Inside the function, you have access to:
- NEW: The new row data (for INSERT and UPDATE)
- OLD: The original row data (for UPDATE and DELETE)
- TG_OP: The operation type (INSERT, UPDATE, DELETE, or TRUNCATE)
Statement-level triggers (FOR EACH STATEMENT) execute once per SQL statement, regardless of how many rows are affected. Use these for operations that don't need row-by-row access, like audit logging of bulk operations.
CREATE OR REPLACE FUNCTION log_bulk_delete()
RETURNS TRIGGER AS $$
BEGIN
INSERT INTO audit_log (operation, table_name, timestamp)
VALUES (TG_OP, TG_TABLE_NAME, NOW());
RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_bulk_deletes
AFTER DELETE ON orders
FOR EACH STATEMENT
EXECUTE FUNCTION log_bulk_delete();
Practical Trigger Patterns
1. Automatic Audit Trails
For compliance requirements like GDPR or SOC 2, you might need to track every change to sensitive data. This is straightforward with a self-hosted deployment where you control the schema entirely.
-- Create an audit table
CREATE TABLE audit_trail (
id BIGSERIAL PRIMARY KEY,
table_name TEXT NOT NULL,
record_id UUID,
operation TEXT NOT NULL,
old_data JSONB,
new_data JSONB,
changed_by UUID,
changed_at TIMESTAMPTZ DEFAULT NOW()
);
-- Generic audit function
CREATE OR REPLACE FUNCTION audit_changes()
RETURNS TRIGGER AS $$
BEGIN
INSERT INTO audit_trail (
table_name,
record_id,
operation,
old_data,
new_data,
changed_by
) VALUES (
TG_TABLE_NAME,
CASE WHEN TG_OP = 'DELETE' THEN OLD.id ELSE NEW.id END,
TG_OP,
CASE WHEN TG_OP IN ('UPDATE', 'DELETE') THEN to_jsonb(OLD) END,
CASE WHEN TG_OP IN ('INSERT', 'UPDATE') THEN to_jsonb(NEW) END,
auth.uid()
);
IF TG_OP = 'DELETE' THEN
RETURN OLD;
ELSE
RETURN NEW;
END IF;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
-- Attach to any table needing audit
CREATE TRIGGER audit_users_changes
AFTER INSERT OR UPDATE OR DELETE ON users
FOR EACH ROW
EXECUTE FUNCTION audit_changes();
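With the trigger attached, inspecting a record's history is a plain query against the audit_trail table defined above:

```sql
-- Most recent changes captured for the users table
SELECT operation, old_data, new_data, changed_by, changed_at
FROM audit_trail
WHERE table_name = 'users'
ORDER BY changed_at DESC
LIMIT 20;
```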
For self-hosted deployments managing data retention requirements, this audit trail becomes essential.
2. Cascading Updates Across Tables
When one change should trigger updates elsewhere, triggers maintain consistency:
-- Update organization member counts when users join/leave
CREATE OR REPLACE FUNCTION sync_organization_member_count()
RETURNS TRIGGER AS $$
BEGIN
IF TG_OP = 'INSERT' THEN
UPDATE organizations
SET member_count = member_count + 1
WHERE id = NEW.organization_id;
RETURN NEW;
ELSIF TG_OP = 'DELETE' THEN
UPDATE organizations
SET member_count = member_count - 1
WHERE id = OLD.organization_id;
RETURN OLD;
END IF;
RETURN NULL; -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER update_org_member_count
AFTER INSERT OR DELETE ON organization_members
FOR EACH ROW
EXECUTE FUNCTION sync_organization_member_count();
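Counter caches like this can drift if rows existed before the trigger was created or while it was disabled. A one-off backfill against the same tables resyncs the counts:

```sql
-- Recompute member_count from the membership table
UPDATE organizations o
SET member_count = (
  SELECT COUNT(*)
  FROM organization_members m
  WHERE m.organization_id = o.id
);
```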
3. Real-Time Notifications via pg_notify
One of the most powerful patterns for self-hosted Supabase is combining triggers with pg_notify to push changes to Supabase Realtime:
CREATE OR REPLACE FUNCTION notify_new_message()
RETURNS TRIGGER AS $$
BEGIN
PERFORM pg_notify(
'new_message',
json_build_object(
'id', NEW.id,
'channel_id', NEW.channel_id,
'sender_id', NEW.sender_id,
'preview', LEFT(NEW.content, 100)
)::text
);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER broadcast_new_messages
AFTER INSERT ON messages
FOR EACH ROW
EXECUTE FUNCTION notify_new_message();
This eliminates polling while delivering immediate client updates—particularly valuable when you're managing your own Supabase Realtime configuration.
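Before wiring clients up, you can sanity-check the channel from psql: LISTEN subscribes the current session, and inserts then deliver the payload asynchronously.

```sql
-- Subscribe this psql session to the channel used by the trigger above
LISTEN new_message;

-- Inserts into messages from any session now deliver their
-- JSON payload to this session as asynchronous notifications.
```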
Stored Procedures and Functions
While triggers respond to events, stored procedures (functions) encapsulate reusable logic you call explicitly. In Supabase, these functions become API endpoints automatically through PostgREST.
Creating Callable Functions
-- A function to safely transfer funds between accounts
CREATE OR REPLACE FUNCTION transfer_funds(
from_account_id UUID,
to_account_id UUID,
amount NUMERIC
)
RETURNS JSONB AS $$
DECLARE
from_balance NUMERIC;
BEGIN
-- Lock the source account row
SELECT balance INTO from_balance
FROM accounts
WHERE id = from_account_id
FOR UPDATE;
IF from_balance < amount THEN
RETURN jsonb_build_object(
'success', false,
'error', 'Insufficient funds'
);
END IF;
-- Perform the transfer
UPDATE accounts SET balance = balance - amount WHERE id = from_account_id;
UPDATE accounts SET balance = balance + amount WHERE id = to_account_id;
RETURN jsonb_build_object(
'success', true,
'new_balance', from_balance - amount
);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
Call this from your application using Supabase's RPC:
const { data, error } = await supabase.rpc('transfer_funds', {
from_account_id: 'uuid-1',
to_account_id: 'uuid-2',
amount: 100.00
});
Security Considerations
SECURITY INVOKER (default): Function runs with the permissions of the calling user. Use this for most functions.
SECURITY DEFINER: Function runs with the permissions of the function owner (usually the superuser). Use sparingly for operations that need elevated access, like:
- Inserting into audit tables the user can't directly access
- Reading from configuration tables
- Cross-schema operations
-- Always set search_path for SECURITY DEFINER functions
CREATE OR REPLACE FUNCTION admin_operation()
RETURNS void AS $$
BEGIN
-- Function body
END;
$$ LANGUAGE plpgsql SECURITY DEFINER
SET search_path = public, pg_temp;
The SET search_path prevents security vulnerabilities where malicious users could hijack function behavior by creating objects in other schemas.
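Alongside search_path, consider who may call the function at all: PostgreSQL grants EXECUTE to PUBLIC by default on new functions. A typical hardening step, using the admin_operation example from above and Supabase's built-in authenticated role:

```sql
-- Remove the default PUBLIC execute grant
REVOKE EXECUTE ON FUNCTION admin_operation() FROM PUBLIC;

-- Re-grant only to the roles that should call it
GRANT EXECUTE ON FUNCTION admin_operation() TO authenticated;
```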
Event Triggers for Schema Changes
Event triggers fire on database-level events like CREATE TABLE or ALTER TABLE. Creating them requires superuser privileges, which you have by default in a self-hosted deployment.
-- Table to store schema-change history
CREATE TABLE ddl_log (
id BIGSERIAL PRIMARY KEY,
event TEXT,
command TEXT,
timestamp TIMESTAMPTZ
);

-- Log all schema changes
CREATE OR REPLACE FUNCTION log_ddl_changes()
RETURNS event_trigger AS $$
BEGIN
INSERT INTO ddl_log (event, command, timestamp)
VALUES (TG_EVENT, current_query(), NOW());
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER track_ddl
ON ddl_command_end
EXECUTE FUNCTION log_ddl_changes();
This becomes valuable for tracking schema drift across environments or auditing who made what changes to your database structure.
Managing Triggers in Self-Hosted Supabase
Viewing Existing Triggers
The Supabase Studio dashboard shows triggers in the Database section, but you can also query them directly:
SELECT
trigger_name,
event_manipulation,
event_object_table,
action_timing,
action_statement
FROM information_schema.triggers
WHERE trigger_schema = 'public';
Disabling Triggers Temporarily
During bulk imports or migrations, you might need to disable triggers:
-- Disable a specific trigger
ALTER TABLE profiles DISABLE TRIGGER set_updated_at;

-- Disable all triggers on a table
ALTER TABLE profiles DISABLE TRIGGER ALL;

-- Re-enable
ALTER TABLE profiles ENABLE TRIGGER ALL;
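For bulk loads, PostgreSQL also provides a session-level switch: setting session_replication_role to replica disables all ordinary (non-replica) triggers for the current session only, without touching table definitions:

```sql
-- Disable ordinary triggers for this session
SET session_replication_role = 'replica';

-- ... run the bulk import here ...

-- Restore normal trigger behavior
SET session_replication_role = 'origin';
```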
Ordering Trigger Execution
When multiple triggers exist on the same table and event, they execute in alphabetical order by name. If order matters, prefix trigger names (identifiers that start with a digit must be double-quoted):

CREATE TRIGGER "01_validate_data" ...
CREATE TRIGGER "02_transform_data" ...
CREATE TRIGGER "03_notify_changes" ...
Performance Considerations
Triggers add overhead to every affected operation. Keep these guidelines in mind:
- Keep trigger functions fast: Long-running triggers block the original operation
- Avoid recursion: Triggers that modify the same table can cause infinite loops
- Choose BEFORE vs AFTER deliberately: BEFORE triggers can modify or reject the row being written, while AFTER triggers suit side effects; both run inside the triggering transaction and add latency
- Monitor with pg_stat_user_functions: Track function execution times
-- Check function performance (populated only when track_functions is enabled)
SELECT
funcname,
calls,
total_time,
total_time / NULLIF(calls, 0) AS mean_time
FROM pg_stat_user_functions
WHERE schemaname = 'public'
ORDER BY total_time DESC;
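For the recursion warning above, pg_trigger_depth() makes a simple guard: inside a trigger fired directly by a client statement it returns 1, and higher values mean the firing statement was itself issued by a trigger. A sketch (guarded_sync is a hypothetical function name):

```sql
CREATE OR REPLACE FUNCTION guarded_sync()
RETURNS TRIGGER AS $$
BEGIN
-- Depth > 1 means this statement was issued by another trigger
IF pg_trigger_depth() > 1 THEN
RETURN NEW;
END IF;
-- ... normal trigger logic here ...
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```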
For production self-hosted deployments, combine this with your monitoring setup to track trigger performance over time.
Testing Triggers Locally
Before deploying triggers to production, test them in your local Supabase environment:
# Start local Supabase
supabase start

# Apply migrations with triggers
supabase db push

# Test trigger behavior
psql postgresql://postgres:postgres@localhost:54322/postgres
Write your triggers as migrations so they're version-controlled and reproducible across environments. Your CI/CD pipeline can then validate triggers in test environments before production deployment.
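As a sketch, a migration carrying the updated_at trigger from earlier might look like this (the filename follows the Supabase CLI's timestamp convention; adjust to your project):

```sql
-- supabase/migrations/20240101120000_add_updated_at_trigger.sql
CREATE OR REPLACE FUNCTION update_modified_timestamp()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- DROP ... IF EXISTS makes the migration safe to re-run
DROP TRIGGER IF EXISTS set_updated_at ON profiles;
CREATE TRIGGER set_updated_at
BEFORE UPDATE ON profiles
FOR EACH ROW
EXECUTE FUNCTION update_modified_timestamp();
```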
Common Pitfalls
Silent failures: If a trigger function throws an error, the entire operation fails. Wrap risky operations in exception handlers:
CREATE OR REPLACE FUNCTION safe_audit()
RETURNS TRIGGER AS $$
BEGIN
BEGIN
INSERT INTO audit_log (...) VALUES (...);
EXCEPTION WHEN OTHERS THEN
RAISE WARNING 'Audit failed: %', SQLERRM;
END;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
RLS interaction: Triggers with SECURITY DEFINER bypass Row Level Security. This is intentional but requires careful consideration of what data the trigger accesses.
Transaction scope: Triggers execute within the same transaction as the triggering statement. If the trigger fails, the entire transaction rolls back.
Wrapping Up
Postgres triggers and stored procedures give you powerful tools for data integrity, automation, and performance optimization. With a self-hosted Supabase deployment, you have full access to these capabilities without restrictions.
Start with simple patterns—automatic timestamps, basic audit logs—and expand as your needs grow. The key is putting logic where it belongs: enforce data rules at the database level, handle business logic in your application, and use triggers to bridge the gap.
If managing triggers, migrations, and database configurations feels overwhelming, Supascale handles the operational complexity of self-hosted Supabase while giving you full access to your PostgreSQL capabilities. Configure your deployment through a simple UI, set up automated backups, and focus on building your application.
