If you're running self-hosted Supabase, you've likely discovered that scheduling background tasks works differently than on the managed platform. Tasks like cleaning up old data, sending periodic notifications, or syncing records require a working cron setup—and getting pg_cron to function correctly in a Docker environment has its own set of challenges.
This guide walks you through configuring pg_cron for self-hosted Supabase, setting up scheduled jobs, and solving the networking issues that trip up most developers.
Understanding pg_cron in Supabase
pg_cron is a PostgreSQL extension that brings cron-style scheduling directly into your database. Instead of relying on external schedulers like systemd timers or Kubernetes CronJobs, your database handles the scheduling internally. This approach eliminates network latency when executing SQL queries and keeps your job configuration version-controlled alongside your schema.
Supabase bundles pg_cron alongside pg_net (for HTTP requests), enabling you to:
- Run SQL snippets on a schedule
- Call database functions at specific intervals
- Trigger Edge Functions via HTTP
- Send webhooks to external services
- Clean up expired data automatically
The extension stores all job definitions in the cron.job table and logs execution history, making debugging straightforward once everything is working.
Enabling pg_cron in Self-Hosted Supabase
Before creating scheduled jobs, you need to enable the required extensions. Connect to your database and run:
create extension if not exists pg_cron;
create extension if not exists pg_net;
On self-hosted deployments, pg_cron should already be available in the Supabase Postgres image. If you're using a custom PostgreSQL setup, ensure you've installed the extension and configured shared_preload_libraries in your postgresql.conf:
shared_preload_libraries = 'pg_cron'
cron.database_name = 'postgres'
After making configuration changes, restart your PostgreSQL container for the settings to take effect.
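For reference, a restart plus a quick verification might look like this. The container name supabase-db matches the default docker-compose setup; adjust it if yours differs:

```shell
# Restart the database container so shared_preload_libraries takes effect
docker restart supabase-db

# Once it's back up, confirm pg_cron is preloaded
docker exec supabase-db psql -U postgres -c "show shared_preload_libraries;"
```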
Creating Your First Scheduled Job
Once the extensions are enabled, you can schedule jobs using standard cron syntax. Here's an example that cleans up expired sessions every hour:
select cron.schedule(
'cleanup-expired-sessions', -- job name
'0 * * * *', -- every hour at minute 0
$$delete from auth.sessions
where expires_at < now() - interval '7 days'$$
);
The cron syntax follows the standard five-field format:
 ┌───────────── minute (0-59)
 │ ┌───────────── hour (0-23)
 │ │ ┌───────────── day of month (1-31)
 │ │ │ ┌───────────── month (1-12)
 │ │ │ │ ┌───────────── day of week (0-6)
 │ │ │ │ │
 * * * * *
For more frequent execution, pg_cron 1.5 and later (bundled with Supabase Postgres images from 15.1.1.61 onward) support sub-minute scheduling using an interval expression instead of cron syntax:
select cron.schedule(
'frequent-check',
'30 seconds', -- every 30 seconds
$$select process_queue()$$
);
Calling Edge Functions from Cron Jobs
This is where self-hosted setups diverge significantly from the managed platform. On Supabase Cloud, there's a UI that lists available Edge Functions for cron integration. On self-hosted instances, you need to manually construct HTTP requests using pg_net.
The critical difference: you cannot use localhost or 127.0.0.1 in your cron job URLs. The database container will try to reach itself, not your Edge Functions runtime.
Correct Network Configuration
You need to use Docker's internal networking. If you're following the standard docker-compose setup, your services communicate through Docker's internal DNS:
select cron.schedule(
'call-edge-function',
'*/5 * * * *',
$$
select net.http_post(
url := 'http://kong:8000/functions/v1/my-function',
headers := jsonb_build_object(
'Content-Type', 'application/json',
'Authorization', 'Bearer YOUR_SERVICE_ROLE_KEY'
),
body := jsonb_build_object('action', 'scheduled-task')
);
$$
);
Notice we're using kong:8000 (the container name and internal port) rather than your public URL. This keeps traffic inside the Docker network and avoids certificate validation issues.
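Before digging into SQL-level debugging, it can help to confirm the database container can actually reach Kong. Assuming the default container names, and provided curl is available inside the image, a quick check looks like this:

```shell
# From inside the database container, hit Kong over Docker's internal DNS.
# A 404 or 401 still proves connectivity; only a connection error is a problem.
docker exec supabase-db curl -s -o /dev/null -w "%{http_code}\n" http://kong:8000
```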
If you're getting "Couldn't resolve host name" errors, try using the container's internal IP address instead:
# Find your kong container's internal IP
docker inspect supabase-kong | grep IPAddress
Then use that IP in your pg_net calls:
select net.http_post(
url := 'http://172.18.0.5:8000/functions/v1/my-function',
-- rest of config
);
Common Self-Hosted Cron Issues
Based on GitHub issues and community discussions, here are the problems developers encounter most frequently:
1. Jobs Not Executing
First, verify the pg_cron worker is actually running:
select * from pg_stat_activity where backend_type ilike '%pg_cron%';
If this returns no rows, the worker has crashed. On self-hosted deployments, you'll need to restart the PostgreSQL container:
docker restart supabase-db
On managed Supabase, you'd initiate a "fast reboot" from the dashboard. For self-hosted, restarting the container is your equivalent.
2. Logs Not Appearing in Studio
Many self-hosted users report that cron logs don't appear in the Studio interface even when jobs are running successfully. This is a known limitation with the Analytics service in self-hosted setups.
As a workaround, query the execution history directly:
select * from cron.job_run_details order by start_time desc limit 20;
This table shows you:
- Job execution times
- Return messages (success or error details)
- Whether the job completed or failed
3. HTTP Requests Timing Out
pg_net has a default timeout that may be too short for your Edge Functions. You can increase it:
select net.http_post(
url := 'http://kong:8000/functions/v1/slow-function',
headers := '{"Content-Type": "application/json"}'::jsonb,
timeout_milliseconds := 30000 -- 30 second timeout
);
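Note that net.http_post queues the request and returns immediately, so a timeout or HTTP failure won't show up in cron.job_run_details. pg_net collects results in its response table; a sketch for inspecting them (table and column names are per current pg_net releases, so verify against your installed version):

```sql
-- Recent pg_net responses: status codes and any error messages
select id, status_code, error_msg, created
from net._http_response
order by created desc
limit 10;
```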
4. Authentication Failures
When calling Edge Functions or authenticated endpoints, you need to include proper authentication headers. Use your service role key (not the anon key) for server-to-server communication:
select cron.schedule(
'authenticated-call',
'0 */6 * * *', -- every 6 hours
$$
select net.http_post(
url := 'http://kong:8000/functions/v1/admin-task',
headers := jsonb_build_object(
'Content-Type', 'application/json',
'Authorization', 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6...'
)
);
$$
);
Store sensitive keys in database settings rather than hardcoding them. A setting applied with alter database takes effect for new sessions, and each cron job runs in a fresh session, so jobs pick up the value automatically:
alter database postgres set app.service_role_key = 'your-key-here';
-- Then reference it in your job:
select cron.schedule(
'secure-call',
'0 0 * * *',
$$
select net.http_post(
url := 'http://kong:8000/functions/v1/daily-report',
headers := jsonb_build_object(
'Authorization', 'Bearer ' || current_setting('app.service_role_key')
)
);
$$
);
Managing Scheduled Jobs
Listing All Jobs
select jobid, jobname, schedule, command, nodename, active from cron.job;
Disabling a Job Temporarily
select cron.alter_job(
job_id := (select jobid from cron.job where jobname = 'cleanup-expired-sessions'),
active := false
);
Removing a Job
select cron.unschedule('cleanup-expired-sessions');
Viewing Recent Execution History
select
j.jobname,
d.status,
d.return_message,
d.start_time,
d.end_time,
d.end_time - d.start_time as duration
from cron.job_run_details d
join cron.job j on d.jobid = j.jobid
order by d.start_time desc
limit 50;
Practical Use Cases
Here are common scheduling patterns for self-hosted Supabase:
Cleanup Old Records
select cron.schedule(
'cleanup-old-logs',
'0 3 * * *', -- 3 AM daily
$$delete from app_logs where created_at < now() - interval '30 days'$$
);
Refresh Materialized Views
select cron.schedule(
'refresh-analytics',
'*/15 * * * *', -- every 15 minutes
$$refresh materialized view concurrently user_analytics_mv$$
);
Database Vacuum
select cron.schedule(
'weekly-vacuum',
'0 4 * * 0', -- Sunday at 4 AM
$$vacuum analyze$$
);
Send Scheduled Notifications
select cron.schedule(
'daily-digest',
'0 9 * * *', -- 9 AM daily
$$
select net.http_post(
url := 'http://kong:8000/functions/v1/send-daily-digest',
headers := '{"Content-Type": "application/json"}'::jsonb
);
$$
);
Limitations to Know
pg_cron has some constraints you should be aware of:
- Concurrent job limit: At most 32 jobs run concurrently by default (configurable via cron.max_running_jobs), each using a database connection
- Single database: Jobs run in one database only (configured via cron.database_name)
- No distributed locking: If you're running multiple Postgres replicas, pg_cron will execute on each one unless you implement your own locking
- Memory usage: Long-running jobs hold connections from your pool
For high-availability setups, consider using external orchestration (Kubernetes CronJobs, Temporal, etc.) or implementing advisory locks in your job logic to prevent duplicate execution across replicas.
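One way to implement that guard inside the job itself is a transaction-scoped advisory lock. A sketch, where the lock key 42 and process_queue() are placeholders (keep in mind advisory locks are local to a single Postgres instance, so cross-replica coordination requires taking the lock on a shared primary):

```sql
select cron.schedule(
  'locked-task',
  '*/5 * * * *',
  $$
  -- pg_try_advisory_xact_lock returns false immediately if another session
  -- holds the lock, and the lock releases automatically at transaction end,
  -- so overlapping runs skip the work instead of piling up.
  select process_queue()
  where pg_try_advisory_xact_lock(42);
  $$
);
```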
Simplifying Cron Management with Supascale
While pg_cron is powerful, managing it through SQL commands and debugging network issues can be tedious. Supascale provides a visual interface for self-hosted Supabase management, including monitoring capabilities that help you track scheduled job execution across multiple projects.
With Supascale's one-time purchase model, you get unlimited projects without per-seat licensing—making it practical to run separate Supabase instances for development, staging, and production environments.
Conclusion
Scheduling in self-hosted Supabase requires understanding Docker networking and pg_cron's behavior within containers. The main gotchas—using internal container names instead of localhost, querying execution history directly when Studio logs fail, and properly authenticating HTTP requests—are solvable once you know what to look for.
The pg_cron extension offers genuine value: in-database scheduling eliminates external dependencies and keeps your automation logic alongside your schema. For teams already managing self-hosted Supabase, it's a natural fit.
If you're still evaluating whether self-hosting is right for your project, check out our cost comparison to understand the full picture.
