Background Jobs and Queues for Self-Hosted Supabase with pgmq

Set up durable message queues and background job processing for self-hosted Supabase using pgmq, pg_cron, and Edge Functions.

If you've ever needed to send emails, process webhooks, generate reports, or handle any task that shouldn't block your user's request, you know the pain of setting up background job infrastructure. Redis, RabbitMQ, SQS—they all work, but they add operational complexity and another service to manage.

With self-hosted Supabase, you can now handle background jobs directly in Postgres using pgmq, a lightweight message queue extension. No additional infrastructure required. This guide walks you through setting up durable message queues with guaranteed delivery for your self-hosted instance.

Why pgmq Over External Queue Services?

Traditional background job setups require a separate queue service (Redis, RabbitMQ) plus a worker process to consume jobs. That means more containers to run, more services to monitor, and more failure points.

pgmq takes a different approach: it implements message queues as native Postgres tables and functions. Here's why this matters for self-hosted deployments:

Operational simplicity: No additional services. Your queue lives in the same database you're already backing up and monitoring.

Transactional guarantees: Queue operations participate in Postgres transactions. Insert a record and queue a notification in one atomic operation—both succeed or both fail.

Familiar tooling: Standard SQL queries, existing Postgres monitoring, your current backup strategy. The pgmq tables are just tables.

Cost efficiency: You're already running Postgres. Adding queue functionality costs nothing extra in infrastructure.

The trade-off? pgmq won't match the throughput of dedicated queue services for extremely high-volume scenarios. But for most applications—thousands of messages per minute—it's more than sufficient.

Prerequisites for Self-Hosted pgmq

Before setting up queues, verify your self-hosted instance meets these requirements:

PostgreSQL version: pgmq ships with Supabase's Postgres image from version 15.6.1.143 onward (note this is the Supabase image version string, not an upstream Postgres release number). Check your version:

SELECT version();

pg_cron extension: For scheduled job processing, you'll need pg_cron enabled. This should be included in the standard Supabase Postgres image.

Edge Functions (optional): If you want to use Supabase Edge Functions as workers, ensure you've deployed the Edge Functions runtime. See our Edge Functions setup guide for details.

If you're using Supascale to manage your self-hosted instance, these extensions are pre-configured in the standard deployment.

Enabling pgmq on Self-Hosted Supabase

First, enable the pgmq extension in your database:

CREATE EXTENSION IF NOT EXISTS pgmq;

Verify the installation:

SELECT extname, extversion 
FROM pg_extension 
WHERE extname = 'pgmq';

The extension creates a pgmq schema containing all the queue management functions.

Creating Your First Queue

Create a queue for processing background tasks:

SELECT pgmq.create('email_notifications');

This creates two tables in the pgmq schema:

  • pgmq.q_email_notifications — holds active messages
  • pgmq.a_email_notifications — stores archived messages (for auditing)

Queue Configuration Options

For high-throughput scenarios, create an unlogged queue (faster writes, but messages don't survive crashes):

SELECT pgmq.create_unlogged('analytics_events');

For partitioned queues that automatically manage old messages (this requires the pg_partman extension; the interval arguments are text, not interval values):

SELECT pgmq.create_partitioned(
  'user_activity',
  '1 day',  -- partition interval
  '7 days'  -- retention period
);

Working with Messages

Sending Messages

Add a message to the queue:

SELECT pgmq.send(
  'email_notifications',
  '{"to": "[email protected]", "template": "welcome", "data": {"name": "Alex"}}'::jsonb
);

Send multiple messages in one call:

SELECT pgmq.send_batch(
  'email_notifications',
  ARRAY[
    '{"to": "[email protected]", "template": "digest"}'::jsonb,
    '{"to": "[email protected]", "template": "digest"}'::jsonb
  ]
);

Schedule a message for later delivery:

SELECT pgmq.send(
  'email_notifications',
  '{"to": "[email protected]", "template": "reminder"}'::jsonb,
  3600  -- delay in seconds (1 hour) before the message becomes visible
);

Reading and Processing Messages

Fetch a message for processing with a visibility timeout:

SELECT * FROM pgmq.read(
  'email_notifications',
  30,  -- visibility timeout in seconds
  1    -- number of messages to read
);

The visibility timeout is critical: once read, the message becomes invisible to other consumers for that duration. If your worker crashes or fails to delete the message, it automatically becomes visible again after the timeout—this is your retry mechanism.
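
The timing rule is simple enough to state in code. A sketch of the semantics (isVisibleAgain is a hypothetical helper, not a pgmq API; pgmq enforces this server-side via the vt column):

```typescript
// A message read at `readAt` with visibility timeout `vtSeconds` becomes
// visible to other consumers again once `now` passes readAt + vtSeconds,
// unless it was deleted or archived in the meantime.
function isVisibleAgain(readAt: Date, vtSeconds: number, now: Date): boolean {
  return now.getTime() >= readAt.getTime() + vtSeconds * 1000
}
```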

Acknowledging Processed Messages

After successfully processing a message, delete it:

SELECT pgmq.delete('email_notifications', 12345);  -- message_id

Or archive it for audit purposes:

SELECT pgmq.archive('email_notifications', 12345);

Building a Worker with pg_cron

For self-hosted deployments, pg_cron provides a reliable way to process queue messages on a schedule. This approach doesn't require external workers—everything runs inside Postgres.

First, enable pg_cron:

CREATE EXTENSION IF NOT EXISTS pg_cron;

Create a function to process email notifications:

CREATE OR REPLACE FUNCTION process_email_queue()
RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
  msg RECORD;
  request_id BIGINT;
BEGIN
  -- Process up to 10 messages per run
  FOR msg IN 
    SELECT * FROM pgmq.read('email_notifications', 60, 10)
  LOOP
    BEGIN
      -- Call your email service via pg_net. Note that net.http_post is
      -- asynchronous: it returns a request id immediately; the response
      -- lands in net._http_response later.
      SELECT INTO request_id net.http_post(
        url := 'https://api.sendgrid.com/v3/mail/send',
        headers := jsonb_build_object(
          'Authorization', 'Bearer ' || current_setting('app.sendgrid_key'),
          'Content-Type', 'application/json'
        ),
        body := msg.message
      );
      
      -- Request enqueued: archive the message
      PERFORM pgmq.archive('email_notifications', msg.msg_id);
      
    EXCEPTION WHEN OTHERS THEN
      -- Log the error; the message becomes visible again after the
      -- visibility timeout and will be retried
      RAISE WARNING 'Failed to process message %: %', msg.msg_id, SQLERRM;
    END;
  END LOOP;
END;
$$;

Schedule the worker to run every minute:

SELECT cron.schedule(
  'process-email-queue',
  '* * * * *',  -- every minute
  'SELECT process_email_queue()'
);

Check your scheduled jobs:

SELECT * FROM cron.job;

For more details on cron configuration, see our comprehensive cron jobs guide.

Edge Function Workers for Complex Tasks

For tasks that need more compute time or external dependencies, use Supabase Edge Functions as workers. This is particularly useful for:

  • Long-running operations (the wall-clock limit depends on your plan or, when self-hosting, your runtime configuration)
  • Tasks requiring npm packages
  • Operations that need to call multiple external APIs

Create an Edge Function worker. One caveat: PostgREST can't call functions in the pgmq schema directly, so the pgmq_read and pgmq_archive RPCs below are assumed to be wrapper functions you've exposed in your public schema:

// supabase/functions/process-queue/index.ts
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  Deno.env.get('SUPABASE_URL')!,
  Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
)

Deno.serve(async () => {
  // Read messages from queue
  const { data: messages, error } = await supabase
    .rpc('pgmq_read', {
      queue_name: 'heavy_tasks',
      visibility_timeout: 120,
      batch_size: 5
    })
  
  if (error || !messages?.length) {
    return new Response('No messages', { status: 200 })
  }
  
  for (const msg of messages) {
    try {
      // Process your task
      await processHeavyTask(msg.message)
      
      // Archive on success
      await supabase.rpc('pgmq_archive', {
        queue_name: 'heavy_tasks',
        message_id: msg.msg_id
      })
    } catch (err) {
      console.error(`Failed: ${msg.msg_id}`, err)
      // Message will retry after visibility timeout
    }
  }
  
  return new Response('OK', { status: 200 })
})

Trigger this function with pg_cron via pg_net:

SELECT cron.schedule(
  'trigger-heavy-tasks-worker',
  '*/2 * * * *',  -- every 2 minutes
  $$
  SELECT net.http_post(
    url := 'https://your-supabase-url.com/functions/v1/process-queue',
    body := '{}'::jsonb,
    headers := '{"Authorization": "Bearer YOUR_ANON_KEY", "Content-Type": "application/json"}'::jsonb
  )
  $$
);

Queue Monitoring and Observability

Check Queue Depth

Monitor how many messages are waiting:

SELECT * FROM pgmq.metrics('email_notifications');

This returns queue length, oldest message age, and other useful metrics.

Create a Monitoring View

Build a dashboard-friendly view across all queues:

CREATE VIEW queue_status AS
SELECT 
  queue_name,
  queue_length,
  newest_msg_age_sec,
  oldest_msg_age_sec,
  total_messages,
  scrape_time
FROM pgmq.metrics_all();

Alert on Queue Backup

Create a function to alert when queues back up:

CREATE OR REPLACE FUNCTION check_queue_health()
RETURNS void AS $$
DECLARE
  metrics RECORD;
BEGIN
  FOR metrics IN SELECT * FROM pgmq.metrics_all() LOOP
    IF metrics.queue_length > 1000 THEN
      -- Send alert via your preferred method
      PERFORM pg_notify(
        'queue_alert',
        json_build_object(
          'queue', metrics.queue_name,
          'length', metrics.queue_length
        )::text
      );
    END IF;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

For comprehensive monitoring setup including Prometheus integration, check our observability guide.

Common Patterns and Best Practices

Dead Letter Queues

Create a dead letter queue for messages that repeatedly fail:

SELECT pgmq.create('email_notifications_dlq');

CREATE OR REPLACE FUNCTION move_to_dlq()
RETURNS TRIGGER AS $$
BEGIN
  -- If the message has been read more than 5 times, move it to the DLQ
  IF NEW.read_ct > 5 THEN
    PERFORM pgmq.send(
      'email_notifications_dlq',
      NEW.message || jsonb_build_object(
        'original_queue', 'email_notifications',
        'failure_count', NEW.read_ct
      )
    );
    PERFORM pgmq.delete('email_notifications', NEW.msg_id);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach the trigger: pgmq.read increments read_ct via UPDATE,
-- so this fires on every read of the queue table
CREATE TRIGGER email_notifications_dlq_trigger
AFTER UPDATE OF read_ct ON pgmq.q_email_notifications
FOR EACH ROW EXECUTE FUNCTION move_to_dlq();
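
An alternative is to make the same retry-count check in the worker rather than in a trigger. A sketch of the routing decision (the maxRetries default mirrors the read_ct > 5 threshold above; it's a policy choice, not a pgmq constant):

```typescript
// Decide what to do with a message based on how many times it has been
// read; pgmq increments read_ct on every pgmq.read, so a high count means
// the message keeps failing and coming back after its visibility timeout.
function routeMessage(readCt: number, maxRetries = 5): 'process' | 'dead_letter' {
  return readCt > maxRetries ? 'dead_letter' : 'process'
}
```

Worker-side routing keeps the queue tables free of triggers, at the cost of every consumer having to apply the policy consistently.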

Transactional Enqueueing

Ensure queue messages are only created when the parent transaction commits:

BEGIN;

-- Create the order and queue the confirmation email atomically:
-- both succeed or both roll back. (RETURNING ... INTO only works
-- inside plpgsql, so plain SQL uses a CTE instead.)
WITH new_order AS (
  INSERT INTO orders (user_id, total)
  VALUES (123, 99.99)
  RETURNING id
)
SELECT pgmq.send(
  'email_notifications',
  jsonb_build_object(
    'type', 'order_confirmation',
    'order_id', new_order.id,
    'user_id', 123
  )
)
FROM new_order;

COMMIT;

Priority Queues

Simulate priority by using multiple queues:

SELECT pgmq.create('tasks_high');
SELECT pgmq.create('tasks_normal');
SELECT pgmq.create('tasks_low');

-- Worker processes high priority first. Note that pgmq.read doesn't
-- return the queue name, so the worker tracks it in a variable.
CREATE OR REPLACE FUNCTION process_tasks()
RETURNS void AS $$
DECLARE
  msg RECORD;
  queue TEXT := 'tasks_high';
BEGIN
  -- Try high priority first
  SELECT * INTO msg FROM pgmq.read(queue, 30, 1);
  
  IF msg.msg_id IS NULL THEN
    queue := 'tasks_normal';
    SELECT * INTO msg FROM pgmq.read(queue, 30, 1);
  END IF;
  
  IF msg.msg_id IS NULL THEN
    queue := 'tasks_low';
    SELECT * INTO msg FROM pgmq.read(queue, 30, 1);
  END IF;
  
  IF msg.msg_id IS NOT NULL THEN
    -- Process the message (process_task is your application's handler)
    PERFORM process_task(msg.message);
    PERFORM pgmq.delete(queue, msg.msg_id);
  END IF;
END;
$$ LANGUAGE plpgsql;
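
The same high-before-low selection is easy to express in a TypeScript worker too. A sketch (queue names match the SQL above; the depth map would come from pgmq.metrics_all, and nextQueue is a hypothetical helper):

```typescript
// Pick the first queue, in priority order, that has pending messages;
// return null when every queue is empty so the worker can sleep.
function nextQueue(
  depths: Record<string, number>,
  order: string[]
): string | null {
  for (const q of order) {
    if ((depths[q] ?? 0) > 0) return q
  }
  return null
}
```

A worker loop would then read from the returned queue only, so low-priority tasks are processed exclusively when the higher tiers are drained.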

Backup Considerations

pgmq tables are regular Postgres tables, so they're included in your standard database backups. However, consider:

Active messages: A backup taken mid-processing may capture messages that were already processed but not yet deleted. Your workers should be idempotent to handle this.
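
The idempotency requirement boils down to a guard keyed by msg_id. A minimal sketch (in production you'd persist processed ids in a table rather than memory; the in-memory Set here is purely illustrative):

```typescript
// Guard that ensures a message's side effects run at most once, keyed by
// pgmq's msg_id. Returns true if the handler ran, false if it was skipped.
const processed = new Set<number>()

function processOnce(msgId: number, handler: () => void): boolean {
  if (processed.has(msgId)) return false  // duplicate delivery: skip
  handler()
  processed.add(msgId)
  return true
}
```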

Archive tables: If you're archiving messages for audit purposes, these tables can grow large. Consider partitioned queues with retention policies, or periodic cleanup:

-- Clean up archived messages older than 30 days
DELETE FROM pgmq.a_email_notifications 
WHERE archived_at < NOW() - INTERVAL '30 days';

With Supascale's automated backups, your queue data is protected alongside the rest of your database.

Performance Tuning

For high-throughput scenarios:

Batch operations: Always use send_batch and read multiple messages per call to reduce round trips.

Appropriate visibility timeouts: Set timeouts based on expected processing time plus a buffer. Too short and messages reappear while still being processed, causing duplicate work; too long and a genuinely failed message waits the full timeout before retrying.
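
One way to pick a value, sketched in TypeScript (the 1.5x buffer and 10-second floor are illustrative policy choices, not pgmq requirements):

```typescript
// Size the visibility timeout from observed processing time plus headroom:
// p95 processing time times a buffer factor, with a sensible floor.
function visibilityTimeout(
  p95Seconds: number,
  bufferFactor = 1.5,
  minSeconds = 10
): number {
  return Math.max(minSeconds, Math.ceil(p95Seconds * bufferFactor))
}
```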

Index tuning: The default indexes work well for most cases, but if you're doing custom queries on message content, consider adding indexes on message JSONB fields.

Connection pooling: Use PgBouncer for workers making many short-lived connections.

When to Use External Queue Services Instead

pgmq is excellent for most use cases, but consider external services when:

  • You need sustained throughput beyond roughly 10,000 messages per second
  • You require cross-region message replication
  • You need features like FIFO guarantees with deduplication
  • Your messages need to survive complete database failure

For these scenarios, AWS SQS, Google Cloud Pub/Sub, or managed RabbitMQ may be more appropriate.

Conclusion

pgmq brings production-grade message queuing to self-hosted Supabase without additional infrastructure. Combined with pg_cron and Edge Functions, you have a complete background job system running entirely within your existing stack.

The key advantages for self-hosted deployments:

  • No additional services to deploy or manage
  • Queues are backed up with your database automatically
  • Standard Postgres tools for monitoring and debugging
  • Transactional guarantees you can't get with external queues

Ready to simplify your self-hosted Supabase operations? Supascale provides automated backups, custom domains, and OAuth configuration—handling the operational complexity so you can focus on building your application.

Further Reading