Send Email from Supabase Database Triggers (pg_http)
How to send emails directly from Supabase database triggers using the pg_http and pg_net extensions without Edge Functions or backend servers.
Why Send Emails from Database Triggers?
Sometimes the simplest architecture wins. Instead of wiring up Edge Functions, webhook handlers, or a separate Node.js server, you can send emails directly from PostgreSQL, the database engine in Supabase.
Supabase exposes two extensions that make HTTP requests from SQL:
| Extension | Behavior | Best for |
|---|---|---|
| `http` (pg_http) | Synchronous — the session waits for the HTTP round-trip, so the database operation is delayed until the email API call completes, and any API latency or error affects it directly. | Simple scripts, one-off calls |
| `pg_net` | Asynchronous — the database transaction completes before the email API request is sent. | Triggers, high-throughput tables, production workloads |
Both can call any REST API, including email providers like Pingram. This guide covers both extensions, with full trigger examples using the Pingram API.
Common use cases that fit this pattern:
- Welcome emails when a user row is inserted
- Order confirmations when a purchase is recorded
- Alert emails when a status column changes (e.g., `payment_status = 'failed'`)
- Activity digests triggered by `pg_cron` on a schedule
If you’d rather use Edge Functions, see Send Email from Supabase Edge Functions. For a broader overview of all approaches, see 4 Ways to Send Emails with Supabase.
Prerequisites
- A Supabase project (free tier works)
- A Pingram account — the free tier includes a generous number of emails per month
- Your Pingram secret API key (`pingram_sk_...`, from Settings → Environments in the dashboard)
1. Enable the Extensions
Open the Supabase SQL Editor and run:
```sql
create extension if not exists http with schema extensions;
create extension if not exists pg_net;
```
You can also enable them from the Dashboard under Database > Extensions. Search for `http` and `pg_net` and toggle them on.
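To double-check that both extensions are actually installed, a quick query against the system catalog works from the SQL editor:

```sql
-- Confirm both extensions are installed and note their versions
select extname, extversion
from pg_extension
where extname in ('http', 'pg_net');
```

Two rows back means you're ready for the next step.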
2. Test a Simple HTTP Call
Before building triggers, verify the extension works by sending a test email from the SQL editor:
```sql
select
  "status",
  "content"::jsonb
from
  extensions.http((
    'POST',
    'https://api.pingram.io/send',
    ARRAY[
      extensions.http_header('Authorization', 'Bearer PINGRAM_API_KEY')
    ],
    'application/json',
    '{
      "type": "test_email",
      "to": { "id": "user_1", "email": "you@example.com" },
      "email": {
        "subject": "Test from Supabase SQL",
        "html": "<h1>It works!</h1><p>This email was sent from a Postgres function.</p>"
      }
    }'
  )::extensions.http_request);
```
api.pingram.io is the US host. For Canada or EU environments, call https://api.ca.pingram.io/send or https://api.eu.pingram.io/send instead—the request shape is the same, but the host must match where your API key was issued.
Replace `PINGRAM_API_KEY` with your full secret key (including the `pingram_sk_` prefix). If you get `status: 200`, the extension is working and you're ready to build triggers.
3. Store Your API Key Securely with Vault
Hard-coding API keys in SQL is a security risk. Supabase Vault lets you store secrets encrypted and retrieve them at runtime.
```sql
select vault.create_secret(
  'pingram_sk_your_actual_key',                        -- the secret value itself
  'pingram_api_key',                                   -- the name you look it up by
  'Pingram API key for sending email, SMS, and voice'  -- optional description
);
```
Retrieve the secret inside functions:
```sql
select decrypted_secret
from vault.decrypted_secrets
where name = 'pingram_api_key';
```
4. Build a Trigger with pg_net (Recommended)
`pg_net` is the better choice for production triggers because it's asynchronous: the email API call runs in the background and doesn't block the database transaction.
Create the trigger function
```sql
create or replace function public.send_welcome_email()
returns trigger
language plpgsql
security definer
set search_path = ''
as $$
declare
  api_key text;
begin
  select decrypted_secret into api_key
  from vault.decrypted_secrets
  where name = 'pingram_api_key';

  perform net.http_post(
    url := 'https://api.pingram.io/send',
    body := jsonb_build_object(
      'type', 'welcome_email',
      'to', jsonb_build_object(
        'id', new.id::text,
        'email', new.email
      ),
      'email', jsonb_build_object(
        'subject', 'Welcome to our app!',
        'html', '<h1>Welcome</h1>'
          || '<p>We''re glad you''re here. Here''s what to do next:</p>'
          || '<ul><li>Complete your profile</li><li>Explore the dashboard</li></ul>'
      )
    ),
    headers := jsonb_build_object(
      'Content-Type', 'application/json',
      'Authorization', 'Bearer ' || api_key
    )
  );

  return new;
end;
$$;
```
Attach it to a table
```sql
create trigger on_user_created
after insert on auth.users
for each row
execute function public.send_welcome_email();
```
Every new signup now fires a welcome email automatically — no server, no Edge Function, no webhook configuration.
Why set search_path = ''?
Supabase runs trigger functions under different roles depending on how the write was initiated (API, auth, dashboard). An empty search_path forces you to fully qualify every reference (`net.http_post`, `vault.decrypted_secrets`), so name resolution no longer depends on whichever role's search_path happens to be active. That role-dependent resolution is the usual cause of "function not found" errors in triggers that call `net` or `extensions` functions.
5. Adding custom variables (e.g. `first_name`)
Personalized email usually needs fields like the user’s first name. You can stitch them into inline HTML, hand them to a Pingram template, or read them from your own tables.
Signup metadata (`raw_user_meta_data`)
On `auth.users`, Supabase keeps extra signup fields in `raw_user_meta_data` (JSON). If you store `first_name` there (custom signup form, auth hook, etc.), read it with `new.raw_user_meta_data->>'first_name'`. Check Authentication → Users in the dashboard to confirm the key exists for your app.
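Before wiring it into the trigger, you can sanity-check what metadata your signups actually carry:

```sql
-- Inspect recent signups to confirm the first_name key exists
select id, email, raw_user_meta_data->>'first_name' as first_name
from auth.users
order by created_at desc
limit 5;
```

If the column comes back null for every row, your signup flow isn't populating the metadata yet.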
Replace the function from section 4 with this version (same name, so the existing trigger keeps working) if you want a personalized greeting from signup metadata:
```sql
create or replace function public.send_welcome_email()
returns trigger
language plpgsql
security definer
set search_path = ''
as $$
declare
  api_key text;
  first_name text;
begin
  select decrypted_secret into api_key
  from vault.decrypted_secrets
  where name = 'pingram_api_key';

  first_name := new.raw_user_meta_data->>'first_name';

  perform net.http_post(
    url := 'https://api.pingram.io/send',
    body := jsonb_build_object(
      'type', 'welcome_email',
      'to', jsonb_build_object(
        'id', new.id::text,
        'email', new.email
      ),
      'email', jsonb_build_object(
        'subject', 'Welcome to our app!',
        'html', '<h1>'
          || case
               when first_name is not null then 'Welcome, ' || first_name || '!'
               else 'Welcome!'
             end
          || '</h1>'
          || '<p>We''re glad you''re here.</p>'
      )
    ),
    headers := jsonb_build_object(
      'Content-Type', 'application/json',
      'Authorization', 'Bearer ' || api_key
    )
  );

  return new;
end;
$$;
```
If the value might contain HTML-sensitive characters and you are building HTML in SQL, sanitize or use a template instead of pasting raw strings into tags.
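As a minimal sketch of the first option (the helper name `html_escape` is our own, not part of Supabase), a small function can neutralize the characters that matter most in element content:

```sql
-- Hypothetical helper: escape &, <, > before interpolating user input into HTML
create or replace function public.html_escape(input text)
returns text
language sql
immutable
as $$
  select replace(replace(replace(coalesce(input, ''),
           '&', '&amp;'),
           '<', '&lt;'),
           '>', '&gt;');
$$;
```

Then build the greeting with `public.html_escape(first_name)` instead of the raw value.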
When the name lives in public.profiles
If first_name is only on a profile row, remember that an after insert on auth.users trigger often runs before that profile exists. Use metadata in that trigger, move the email to after insert (or after update) on public.profiles, or enqueue from the user trigger only after you know the profile row is written.
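As a sketch of the second option (assuming `public.profiles` has `id`, `email`, and `first_name` columns — adjust to your schema), the welcome email can move to a profiles trigger:

```sql
-- Hypothetical variant: send the welcome email once the profile row exists
create or replace function public.send_profile_welcome_email()
returns trigger
language plpgsql
security definer
set search_path = ''
as $$
declare
  api_key text;
begin
  select decrypted_secret into api_key
  from vault.decrypted_secrets
  where name = 'pingram_api_key';

  perform net.http_post(
    url := 'https://api.pingram.io/send',
    body := jsonb_build_object(
      'type', 'welcome_email',
      'to', jsonb_build_object('id', new.id::text, 'email', new.email),
      'email', jsonb_build_object(
        'subject', 'Welcome to our app!',
        -- coalesce drops the ", name" part entirely when first_name is null
        'html', '<h1>Welcome' || coalesce(', ' || new.first_name, '') || '!</h1>'
      )
    ),
    headers := jsonb_build_object(
      'Content-Type', 'application/json',
      'Authorization', 'Bearer ' || api_key
    )
  );

  return new;
end;
$$;

create trigger on_profile_created
after insert on public.profiles
for each row
execute function public.send_profile_welcome_email();
```

If you go this route, drop the trigger on `auth.users` so each signup gets only one welcome email.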
Advanced: Synchronous Triggers with http
The http extension blocks until the API responds, meaning your INSERT statement won't return until the email API call completes. That's fine for low-traffic tables or admin operations, but avoid it on hot paths. Use it only when a successful email send is a hard requirement of inserting or updating the row.
```sql
create or replace function public.send_order_confirmation()
returns trigger
language plpgsql
security definer
set search_path = ''
as $$
declare
  api_key text;
  customer record;
  response extensions.http_response;
begin
  select decrypted_secret into api_key
  from vault.decrypted_secrets
  where name = 'pingram_api_key';

  select email, first_name into customer
  from public.profiles
  where id = new.user_id;

  if customer.email is null then
    raise warning 'No email found for user %', new.user_id;
    return new;
  end if;

  select * into response from extensions.http((
    'POST',
    'https://api.pingram.io/send',
    ARRAY[
      extensions.http_header('Authorization', 'Bearer ' || api_key)
    ],
    'application/json',
    jsonb_build_object(
      'type', 'order_confirmation',
      'to', jsonb_build_object(
        'id', new.user_id::text,
        'email', customer.email
      ),
      'email', jsonb_build_object(
        'subject', 'Order #' || new.id || ' confirmed',
        'html', '<h1>Thanks, ' || customer.first_name || '!</h1>'
          || '<p>Your order total: $' || new.total::text || '</p>'
      )
    )::text
  )::extensions.http_request);

  if response.status != 200 then
    raise warning 'Email API returned status %: %', response.status, response.content;
  end if;

  return new;
end;
$$;

create trigger on_order_created
after insert on public.orders
for each row
execute function public.send_order_confirmation();
```
The `http` extension gives you the response inline, which is useful for logging failures. With `pg_net`, you'd check `net._http_response` after the fact.
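For example, after a batch of inserts you can pull the most recent pg_net responses to see how the calls fared:

```sql
-- Most recent pg_net responses, newest first
select id, status_code, content::jsonb, error_msg
from net._http_response
order by created desc
limit 10;
```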
Advanced: Trigger on Column Changes
You can scope triggers to fire only when specific columns change. This is useful for status-change alerts:
```sql
create or replace function public.notify_payment_failed()
returns trigger
language plpgsql
security definer
set search_path = ''
as $$
declare
  api_key text;
  customer record;
begin
  if new.payment_status != 'failed' or old.payment_status = 'failed' then
    return new;
  end if;

  select decrypted_secret into api_key
  from vault.decrypted_secrets
  where name = 'pingram_api_key';

  select email, first_name into customer
  from public.profiles
  where id = new.user_id;

  perform net.http_post(
    url := 'https://api.pingram.io/send',
    body := jsonb_build_object(
      'type', 'payment_failed',
      'to', jsonb_build_object(
        'id', new.user_id::text,
        'email', customer.email
      ),
      'email', jsonb_build_object(
        'subject', 'Payment failed for order #' || new.id,
        'html', '<p>Hi ' || customer.first_name || ',</p>'
          || '<p>Your payment for order #' || new.id || ' could not be processed.</p>'
          || '<p><a href="https://yourapp.com/orders/' || new.id || '/retry">Retry payment</a></p>'
      )
    ),
    headers := jsonb_build_object(
      'Content-Type', 'application/json',
      'Authorization', 'Bearer ' || api_key
    )
  );

  return new;
end;
$$;

create trigger on_payment_failed
after update of payment_status on public.orders
for each row
execute function public.notify_payment_failed();
```
The `after update of payment_status` clause ensures the trigger only fires when that specific column changes, not on every update to the row.
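If you prefer, the status check can also move out of the function body and into the trigger's WHEN clause, so the function never runs for irrelevant updates. A sketch, reusing the same function (drop the existing trigger first, since the name is reused):

```sql
-- Alternative: filter in the trigger definition itself
drop trigger if exists on_payment_failed on public.orders;

create trigger on_payment_failed
after update of payment_status on public.orders
for each row
when (new.payment_status = 'failed'
      and old.payment_status is distinct from 'failed')
execute function public.notify_payment_failed();
```

`is distinct from` treats a null previous status as "not failed", so a first transition into `failed` still fires.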
Advanced: Scheduled Emails with pg_cron
Combine pg_net with the pg_cron extension to send emails on a schedule — daily digests, weekly reports, or reminder emails:
```sql
select cron.schedule(
  'daily-activity-digest',
  '0 9 * * *',
  $$
  select net.http_post(
    url := 'https://api.pingram.io/send',
    body := jsonb_build_object(
      'type', 'daily_digest',
      'to', jsonb_build_object(
        'id', 'admin_1',
        'email', 'admin@yourapp.com'
      ),
      'email', jsonb_build_object(
        'subject', 'Daily Activity Digest — ' || current_date::text,
        'html', '<h1>Daily Digest</h1><p>New signups: '
          || (select count(*) from auth.users where created_at > now() - interval '1 day')::text
          || '</p>'
      )
    ),
    headers := jsonb_build_object(
      'Content-Type', 'application/json',
      'Authorization', 'Bearer '
        || (select decrypted_secret from vault.decrypted_secrets where name = 'pingram_api_key')
    )
  );
  $$
);
```
Enable `pg_cron` from Database > Extensions first. Cron schedules run in UTC.
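To inspect or remove scheduled jobs later, pg_cron exposes a couple of tables and an unschedule function:

```sql
-- List scheduled jobs
select jobid, jobname, schedule, active from cron.job;

-- Check recent runs for errors
select jobid, status, return_message, start_time
from cron.job_run_details
order by start_time desc
limit 10;

-- Remove the digest job when it is no longer needed
select cron.unschedule('daily-activity-digest');
```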
Troubleshooting
"function http_post does not exist"
Postgres is resolving an unqualified `http_post` against the wrong schema (or not finding it at all). pg_net and the http extension are different: the async helpers live under `net`, the synchronous client under `extensions`.
Add set search_path = '' on the trigger function and qualify the call:
- pg_net: use `net.http_post(...)` (named arguments, as in this guide).
- http (pg_http): use `extensions.http_post(url, body, content_type)` or `extensions.http(...)` with an `extensions.http_request`, matching the examples above.
```sql
-- pg_net (this guide)
perform net.http_post(url := 'https://...', body := '{}'::jsonb, headers := '{}'::jsonb);

-- pg_http (simple wrapper style from Supabase docs)
select * from extensions.http_post('https://...', '{}', 'application/json');
```
pg_net requests silently failing
pg_net requests don’t fire until the transaction commits. If your transaction rolls back (e.g., due to a constraint violation), the HTTP request is never sent. Check net._http_response for responses:
```sql
select * from net._http_response
where status_code >= 400 or error_msg is not null
order by created desc
limit 10;
```
Trigger not firing
Verify the trigger exists and is enabled:
```sql
select tgname, tgenabled, tgrelid::regclass
from pg_trigger
where tgname = 'on_user_created';
```
A value of `O` in `tgenabled` means the trigger fires in normal operation; `D` means disabled.
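If it shows up as disabled, you can re-enable it on the parent table (assuming your role has sufficient privileges on that table):

```sql
-- Re-enable a disabled trigger (tgenabled = 'D')
alter table auth.users enable trigger on_user_created;
```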
Vault secret not found
Make sure you created the secret with the exact name you’re referencing. List all secrets:
```sql
select name from vault.decrypted_secrets;
```
When Database Triggers Aren’t the Right Fit
Database triggers work well for simple, event-driven emails tied to row changes. They’re not ideal for:
- Complex email templates — building HTML in SQL is painful. Send through Pingram from a backend or an Edge Function when you need complex HTML.
- Multi-step workflows — if you need to fetch data from external APIs, apply business logic, and then send, an Edge Function or server route gives you more flexibility.
- High-volume batch sends — triggers fire per row. For bulk email campaigns, use a server-side job queue instead.
For these cases, see our blog post: 4 Ways to Send Emails with Supabase.