
There is a kind of database pain that does not arrive dramatically. It arrives quietly.
A query runs longer than expected. A session stays occupied. Someone opens another connection just to keep moving. Then another task shows up behind it. Soon, a perfectly normal day starts to feel like too many people trying to get through one narrow doorway.
That is where pg_background becomes useful.
It lets PostgreSQL run SQL in background workers while the original session stays free. The work still happens inside the database, close to the data, but the caller no longer has to sit there waiting for every long-running step to finish. At its heart, pg_background is about giving PostgreSQL a cleaner way to handle asynchronous work without forcing teams to leave the database just to keep a session responsive.
TL;DR
pg_background lets PostgreSQL execute SQL asynchronously in background worker processes so the calling session does not stay blocked. It supports result retrieval through shared memory queues, autonomous transactions that commit independently of the caller, explicit lifecycle control such as launch, wait, cancel, detach, and list operations, and a hardened security model with a NOLOGIN role, privilege helpers, and no PUBLIC access.
Version 1.9 adds worker labels, structured error returns, result metadata, batch operations, and compatibility across PostgreSQL 14, 15, 16, 17, and 18.
What changes in day-to-day life with v1.9
Version 1.9 improves the operator experience in ways that matter during real work.
Worker labels reduce guesswork when several tasks are running at once. Structured error returns make it easier for scripts and applications to react intelligently when background work fails. Result metadata makes it possible to inspect completion state without consuming the result stream. Batch operations simplify cleanup when a session launches several workers.
Taken together, these additions make pg_background easier to live with. That is the real value of this release. It makes a useful extension more observable, more manageable, and more practical in day-to-day operations.
A real-world story: when one session becomes the bottleneck
Imagine a finance platform during quarter-end processing.
A reconciliation task needs to run. It is not unusual, but it touches a large amount of data and may take time. At the same moment, the platform is still serving users, operators are still investigating support tickets, and engineers still need room to work. If that reconciliation query runs directly in the same session that triggered it, the session stays occupied. The caller is forced to wait. The system becomes less flexible. The operator loses options.
This is where pg_background feels less like a feature and more like common sense.
Instead of forcing the original session to wait on the entire operation, the session can hand the SQL off to a PostgreSQL background worker. The work keeps running inside the server, but the caller is now free to move on, inspect progress, launch another step, or simply continue serving the rest of the workflow.
A simple analogy helps here. Think of it like placing an order in a restaurant. The waiter does not go into the kitchen and stand beside the stove until the meal is cooked. The order goes to the kitchen, the cooking happens where the equipment is, and the waiter stays free to take care of the rest of the room. pg_background works in much the same way. It lets the SQL run where the data already lives, while the original session remains useful.
That is the appeal. It is not about adding complexity. It is about creating better flow.
What pg_background is, and why teams actually use it
pg_background is a PostgreSQL extension that executes arbitrary SQL commands asynchronously in dedicated background worker processes inside the database server. Unlike client-side async patterns or workarounds that depend on separate connections, these workers run inside PostgreSQL itself and operate in independent transactions with access to local resources.
That matters for a few reasons.
First, the work stays close to the data. Second, the background worker can commit or roll back independently of the session that launched it. Third, the worker lifecycle can be managed explicitly through launch, wait, cancel, detach, and list operations.
The core capabilities are straightforward and useful:
- async SQL execution
- result retrieval through shared memory queues
- autonomous transactions
- explicit lifecycle control
- production-hardened security
Those capabilities explain why teams use pg_background in real systems rather than treating it as a novelty.
Just as importantly, pg_background is not only about sending work away. It also supports retrieving results back through shared memory queues, which means the caller can still inspect output without inventing a separate return path outside PostgreSQL’s execution model.
Autonomous transactions are one of the biggest reasons teams reach for this extension in the first place. The background worker runs in its own transaction and can commit or roll back independently of the session that launched it. That gives architects and DBAs a useful design option. An audit write, notification, or maintenance action does not always have to live and die with the caller’s transaction.
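As a sketch of that pattern, a caller can hand an audit write to a background worker and then roll back its own transaction; because the worker commits independently, the audit row survives. This assumes an `audit_log` table like the one used in the examples later in this post:

```sql
BEGIN;

-- Hand the audit write to a background worker. The worker runs in its
-- own transaction, so it commits independently of this session.
SELECT *
FROM pg_background_submit_v2(
  'INSERT INTO audit_log(event_type, created_at) VALUES (''attempted_update'', now())',
  65536,
  'prod/audit/attempted-update') AS h;

-- Even if the caller's own work is rolled back...
ROLLBACK;

-- ...the audit row written by the worker is not tied to this transaction.
```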
The project also maps cleanly to real production use cases:
- background maintenance such as VACUUM, ANALYZE, and REINDEX
- asynchronous audit logging
- long-running ETL pipelines
- independent notification delivery
- parallel query pattern implementation
None of that feels theoretical. It sounds like the sort of work teams actually need to get done.
Why the security model matters
Background execution is powerful, which is exactly why it needs guardrails.
No team wants “run SQL in the background” to quietly become “run anything from anywhere.” One of the reassuring aspects of pg_background is that the security model is clearly designed for production use: a NOLOGIN role-based model, SECURITY DEFINER privilege helpers, and no PUBLIC grants.
That tells a technology leader or DBA something important. This feature was not built as a shortcut around operational discipline. It was built with the assumption that background execution should be useful without becoming reckless.
When pg_background is a good fit, and when it is not
pg_background is a good fit when the work is fundamentally SQL, the data already lives in PostgreSQL, and the calling session should stay free instead of waiting on a long-running task.
It is also a strong fit when autonomous transaction behavior is useful, such as audit logging, maintenance work, or independent side effects that should commit separately from the caller. The common use cases line up neatly with that pattern: maintenance, audit logging, ETL, notifications, and parallel work patterns.
It is not the right tool if what you really need is a scheduler, a full workflow orchestration engine, or guaranteed external delivery semantics. pg_background gives you background SQL execution and lifecycle control. It does not replace calendar-driven job scheduling, and it should not be treated as a full substitute for broader workflow systems.
That distinction matters because it keeps expectations healthy. A focused tool is often more useful than a bloated one, but only if teams use it for the problem it was built to solve.
What’s new in v1.9
Version 1.9 adds four operator-friendly capabilities that make the extension easier to observe and easier to manage in daily work:
- worker labels
- structured error returns
- result metadata
- batch operations
It also formalizes compatibility across PostgreSQL 14 through 18.
Worker Labels
One of the quiet annoyances in background execution systems is that tasks can become anonymous.
A worker is running. You know it exists. You may even know its PID. But what is it actually doing? Which part of the application launched it? Is it a backfill, an audit operation, or some maintenance task someone forgot to document?
Version 1.9 adds an optional label parameter to pg_background_launch_v2() and pg_background_submit_v2(). These labels appear in pg_background_list_v2() output and can be up to 64 bytes long.
In plain language, this means you can finally give a worker a name that humans can recognize.
For a non-technical reader, it is the difference between seeing “a process is running” and seeing “the customer backfill job is running.” For engineers and DBAs, it improves observability without changing the execution model. It gives the worker intent at launch time, and that intent remains visible later when you inspect it.
A practical use case would be a production application that launches several background tasks tied to different requests or features. A label such as prod/audit/login or stage/backfill/customers immediately turns worker inspection into something understandable.
```sql
-- Launch a labeled worker
SELECT *
FROM pg_background_launch_v2(
  'SELECT pg_sleep(10); SELECT now();',
  65536,
  'prod/api-42/request-8f3a') AS h;
```
```sql
-- Fire-and-forget with a label
SELECT *
FROM pg_background_submit_v2(
  'INSERT INTO audit_log(event_type, created_at) VALUES (''login'', now())',
  65536,
  'prod/audit/login') AS h;
```
```sql
-- Observe labeled workers
SELECT *
FROM pg_background_list_v2() AS (
  pid int4,
  cookie int8,
  launched_at timestamptz,
  user_id oid,
  queue_size int4,
  state text,
  sql_preview text,
  last_error text,
  consumed bool)
ORDER BY launched_at DESC;
```
Output:

```
┌─────────┬──────────────────────┬───────────────────────────────┬─────────┬────────────┬─────────┬──────────────────────────────────────────────────────────────┬────────────┬──────────┐
│   pid   │        cookie        │          launched_at          │ user_id │ queue_size │  state  │                         sql_preview                          │ last_error │ consumed │
├─────────┼──────────────────────┼───────────────────────────────┼─────────┼────────────┼─────────┼──────────────────────────────────────────────────────────────┼────────────┼──────────┤
│ 2069536 │ -2650105714474788328 │ 2026-04-01 22:58:09.505279+00 │   16402 │      65536 │ stopped │ INSERT INTO audit_log(event_type, created_at) VALUES ('login…│ NULL       │ f        │
│         │                      │                               │         │            │         │…', now())                                                    │            │          │
│ 2069534 │ -2772832329059085234 │ 2026-04-01 22:58:04.452824+00 │   16402 │      65536 │ stopped │ SELECT pg_sleep(10); SELECT now();                           │ NULL       │ f        │
└─────────┴──────────────────────┴───────────────────────────────┴─────────┴────────────┴─────────┴──────────────────────────────────────────────────────────────┴────────────┴──────────┘
(2 rows)
```
Structured Error Returns
A good asynchronous system is not measured only by how it behaves when everything succeeds. It is also measured by how clearly it behaves when something fails.
Version 1.9 adds pg_background_error_info_v2(), which returns structured error details including:
- sqlstate
- message
- detail
- hint
- context
That is a meaningful improvement because it turns background failures into something applications and operators can inspect cleanly instead of treating every failure as the same vague event.
For engineers, this is especially useful because the failure information becomes programmatic. A workflow can look at the sqlstate, decide whether an error is retryable, log the detail, or surface the hint in a targeted way.
A good real-world example would be an asynchronous data correction or batch update. If the task fails because of a uniqueness violation, the application can inspect the structured error and decide how to proceed rather than flattening everything into “worker failed.”
```sql
-- Launch a worker that will fail
SELECT *
FROM pg_background_launch_v2(
  'SELECT 1/0;',
  65536,
  'dev/demo/divide-by-zero') AS h \gset
```
```sql
-- Inspect the error non-destructively
SELECT *
FROM pg_background_error_info_v2(:pid, :cookie);
```
Output:

```
┌──────────┬──────────────────┬────────┬──────┬────────────────────────────────┐
│ sqlstate │     message      │ detail │ hint │            context             │
├──────────┼──────────────────┼────────┼──────┼────────────────────────────────┤
│ 22012    │ division by zero │ NULL   │ NULL │ background worker, pid 2069753 │
└──────────┴──────────────────┴────────┴──────┴────────────────────────────────┘
```
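Building on the example above, a caller can classify the failure by SQLSTATE before deciding what to do next. The retryable class below (serialization failures and deadlocks) is illustrative, not exhaustive:

```sql
-- Classify the failure: serialization failures (40001) and deadlocks (40P01)
-- are usually safe to retry; something like unique_violation (23505) is a
-- data problem that a retry will not fix.
SELECT sqlstate,
       message,
       sqlstate IN ('40001', '40P01') AS retryable
FROM pg_background_error_info_v2(:pid, :cookie);
```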
Result Metadata Without Consuming Results
Often the first question is not “what are the full results?” The first question is much simpler.
Did the task finish? Did it fail? How many rows were affected? What kind of statement ran?
Version 1.9 adds pg_background_result_info_v2(), which returns:
- row_count
- command_tag
- completed
- has_error
The important detail is that this is non-destructive inspection. You can check the state without consuming the actual result stream.
That is extremely useful in applications, dashboards, operational scripts, and polling loops. It lets a system ask whether work is done before deciding what to do next.
A practical example might be a batch process that loads data in the background. The caller may not need every returned row. It may only need to know that the work completed and that a given number of rows were processed.
```sql
-- Launch a worker that affects rows
SELECT *
FROM pg_background_launch_v2(
  'CREATE TEMP TABLE t AS SELECT generate_series(1,1000) AS id; SELECT * FROM t;',
  65536,
  'dev/demo/result-info') AS h \gset
```
```sql
-- Check metadata without consuming results
SELECT *
FROM pg_background_result_info_v2(:pid, :cookie);
```
Output:

```
┌───────────┬─────────────┬───────────┬───────────┐
│ row_count │ command_tag │ completed │ has_error │
├───────────┼─────────────┼───────────┼───────────┤
│      1000 │ SELECT      │ t         │ f         │
└───────────┴─────────────┴───────────┴───────────┘
(1 row)
```
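Because the check is non-destructive, it can be called repeatedly. As a minimal PL/pgSQL sketch, a caller could poll the metadata until the worker reports completion; `wait_for_worker` is a hypothetical helper, and the extension's own wait operation may be the better tool in practice:

```sql
-- Hypothetical polling helper: blocks until the worker identified by
-- (p_pid, p_cookie) reports completed = true, without touching results.
CREATE OR REPLACE FUNCTION wait_for_worker(p_pid int4, p_cookie int8)
RETURNS void
LANGUAGE plpgsql AS $$
DECLARE
  done bool;
BEGIN
  LOOP
    SELECT completed INTO done
    FROM pg_background_result_info_v2(p_pid, p_cookie);
    EXIT WHEN done;
    PERFORM pg_sleep(0.5);  -- back off between checks
  END LOOP;
END;
$$;
```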
Batch Operations
Sometimes the problem is not one worker. Sometimes the problem is ten.
Version 1.9 adds:
- pg_background_detach_all_v2()
- pg_background_cancel_all_v2()
These functions return counts and are designed to help manage multiple workers more cleanly.
This is also the right place to emphasize an important distinction: detaching is not canceling.
When you detach, you stop tracking the worker, but the worker may continue running.
When you cancel, you ask the worker to stop.
That distinction matters because it reflects two very different operational intentions. Sometimes the work is still valid, and you simply no longer want to track it. Other times, the work itself should stop.
```sql
-- Stop tracking all workers launched in this session
SELECT pg_background_detach_all_v2();
```
```sql
-- Cancel all tracked workers launched in this session
SELECT pg_background_cancel_all_v2();
```
Compatibility that makes adoption easier
Version 1.9 is tested on PostgreSQL 14, 15, 16, 17, and 18. It also includes CI coverage on Ubuntu 22.04 and 24.04.
That matters because a useful extension is much easier to adopt when it travels with the versions real teams are already using. For architects, that means fewer upgrade roadblocks. For DBAs, it means fewer unpleasant surprises. For leaders, it means the extension is keeping pace with the platform instead of becoming a version-locked side road.
Installation and upgrade
Fresh installation is simple:
```sql
-- Fresh install
CREATE EXTENSION pg_background;
```
Upgrading from 1.8 to 1.9 is straightforward:
```sql
-- Upgrade from 1.8
ALTER EXTENSION pg_background UPDATE TO '1.9';
```
And it is always worth verifying what is actually installed:
```sql
SELECT extversion
FROM pg_extension
WHERE extname = 'pg_background';
```
That small verification step prevents a surprising amount of confusion later.
A practical walkthrough in psql
The most useful way to understand pg_background is to try a simple workflow in psql.
First, launch a worker and capture its handle:
```sql
SELECT *
FROM pg_background_launch_v2(
  'SELECT pg_sleep(5); SELECT count(*) FROM pg_class;',
  65536,
  'dev/demo/count-catalog') AS h \gset
```
Next, inspect what is currently tracked:
```sql
SELECT *
FROM pg_background_list_v2() AS (
  pid int4,
  cookie int8,
  launched_at timestamptz,
  user_id oid,
  queue_size int4,
  state text,
  sql_preview text,
  last_error text,
  consumed bool)
ORDER BY launched_at DESC;
```
Then check whether the worker finished and what kind of work it performed:
```sql
SELECT *
FROM pg_background_result_info_v2(:pid, :cookie);
```
If an error occurred, inspect it:
```sql
SELECT *
FROM pg_background_error_info_v2(:pid, :cookie);
```
If you need a fire-and-forget pattern for side-effect work, use submit_v2() and then detach:
```sql
SELECT *
FROM pg_background_submit_v2(
  'INSERT INTO audit_log(event_type, created_at) VALUES (''sync_complete'', now())',
  65536,
  'prod/audit/sync-complete') AS h \gset

SELECT pg_background_detach_v2(:pid, :cookie);
```
The v2 API is the right place to start for new work. It gives you cookie-based identity, clearer control semantics, and better observability.
Patterns and best practices
The release becomes far more useful when teams turn features into habits.
Use a naming convention for labels
Worker labels are most valuable when they are consistent. A format such as feature/env/request-id works well because it captures intent, environment, and traceability in a compact way.
Examples include:
- audit/prod/req-9f8d
- backfill/stage/run-20260401
- etl/prod/customer-sync
This helps operators, application developers, and dashboards speak the same language.
Use result_info and error_info before consuming results
The new inspection functions are powerful because they do not consume the result stream. That means you can ask whether work completed or whether an error occurred before deciding what to retrieve next. For application workflows and admin tools, that is a much cleaner model than forcing every inspection step to double as final consumption.
Be explicit about detach versus cancel
This distinction deserves to live in every operational runbook.
Detach means you stop tracking the worker, but it may continue running.
Cancel means you want it to stop.
Use detach when the work is still valid and can continue without active supervision. Use cancel when the work itself should not continue.
Do not exhaust worker capacity
Background workers are useful, but they are not free. The database still has physics.
In practice, that means you should avoid launching workers in uncontrolled loops, set sensible limits, and test behavior under load. Asynchronous execution is helpful, but only when it stays within the boundaries of what the system can handle safely.
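One simple guardrail, sketched here, is to count what the session is already tracking before launching more work. The threshold is arbitrary and the drain policy is up to the application:

```sql
-- Count workers this session is still tracking before launching another.
SELECT count(*) AS tracked
FROM pg_background_list_v2() AS (
  pid int4,
  cookie int8,
  launched_at timestamptz,
  user_id oid,
  queue_size int4,
  state text,
  sql_preview text,
  last_error text,
  consumed bool);

-- If tracked is already high (say, 8 or more), consume or detach
-- finished workers before submitting new ones.
```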
Make observability part of the workflow
Use pg_background_list_v2() regularly, not only during incidents. Correlate worker labels with request IDs, logs, and PostgreSQL activity views.
A background execution tool becomes much more trustworthy when it is easy to see what it is doing.
Closing thoughts
pg_background solves a very real PostgreSQL problem without trying to become something it is not. It gives teams a clean way to run SQL in the background, keep the calling session free, retrieve results when needed, and manage the worker lifecycle explicitly.
The broader value is easy to see: asynchronous execution, autonomous transactions, result retrieval, explicit control, hardened security, and production use cases that show up in real environments.
Version 1.9 builds on that foundation with the kinds of improvements that make daily operations smoother: labels, structured errors, result metadata, batch controls, and broader tested compatibility.
Five practical benefits for a mixed audience
- pg_background gives PostgreSQL a clean way to offload SQL into background workers without keeping the calling session blocked.
- It supports result retrieval, autonomous transactions, and explicit lifecycle control, which makes it useful for more than just “run this later.”
- Its security model is designed for production use, with NOLOGIN role-based access, privilege helpers, and no PUBLIC grants.
- v1.9 improves observability and operator experience with worker labels, structured errors, result metadata, and batch operations.
- The documented use cases make it relevant to maintenance, audit logging, ETL, notifications, and parallel work patterns, not just toy examples.
Upgrade recommendation
If you are on version 1.8, moving to version 1.9 brings meaningful operational improvements. If you are evaluating pg_background for new work, start with the v2 API and think in terms of clear labels, explicit lifecycle control, and deliberate use cases where the database really is the right place for the work to happen.
If you use it in a real workflow, share feedback, open issues, and send pull requests. Good infrastructure does not only help when things are fast. It helps when work is long, messy, and still needs to be done without drama.