Execution library
Curated hub for C++ execution: parallel algorithm execution policies today, sender/scheduler execution in C++26, and the boundaries with the algorithms, thread-support, and futures facilities.
The C++ execution area currently spans two distinct layers. The established C++17 layer is the execution-policy model used by parallel algorithms. The newer C++26 layer is the sender/scheduler execution library, which introduces composable execution objects such as `just`, `then`, `on`, and `when_all`. This hub makes that split explicit so you can choose the right surface first.
# Start Here
Use parallel algorithm execution policies
Start here when you already have a standard algorithm and need to choose between `seq`, `par`, `par_unseq`, or `unseq` execution behavior.
execution policy tags · `is_execution_policy` · algorithms hub
Create sender-based execution pipelines
Use the sender route when the problem is composing asynchronous work through values, completion channels, and schedulers rather than choosing an algorithm policy.
Work with schedulers and run loops
Choose this route when execution resources and scheduler hand-off are the main abstraction boundary.
Transform or branch completion flows
Use the sender-adaptor route when you need value transformation, error handling, stop handling, or environment-aware composition.
`then` · `let_value` · `upon_error` · `read_env`
Combine multiple asynchronous operations
Start here when your execution problem involves fan-out, joining results, eager start, or mapping stop into optional/error channels.
`when_all` · `ensure_started` · `stopped_as_optional` · `stopped_as_error`
Use older async/future-based execution
If your codebase still models async work around futures and `std::async`, start from the thread support side and treat execution senders as a newer adjacent model.
# Execution Model Map
| If you need to... | Start with | Why | Common adjacent route |
|---|---|---|---|
| Run an existing standard algorithm under a policy | execution policies | The policy model modifies how algorithms execute; it does not define a general execution graph abstraction. | algorithms |
| Construct an execution pipeline from values and continuations | `just`, `then`, `let_value` | The sender model is about composable execution objects and completion channels. | future |
| Bind work to a scheduler or execution resource | `scheduler` and `on` | This is the route for scheduler-aware composition rather than raw thread ownership. | thread support |
| Combine multiple asynchronous operations into one result | `when_all` | `when_all` is the main joining/composition surface in the sender-based model. | async |
| Map stop or error completion into a different output model | `stopped_as_error`, `stopped_as_optional`, `upon_error` | These adaptors shape completion behavior rather than doing the underlying work themselves. | diagnostics |
| Use the older result-carrying async model | `async` and `future` | That model remains important, but it belongs primarily to thread support rather than the newer execution library. | thread support |
# Execution Families
| Family | Core destinations | Use it for |
|---|---|---|
| Execution policies | policy tags, `is_execution_policy`, `<execution>` | Choosing sequential, parallel, vectorized, or combined execution modes for standard algorithms. |
| Sender factories and completion seeds | `just`, `just_error`, `just_stopped` | Creating sender-based execution values that represent normal, error, or stopped completion. |
| Sender adaptors and transformation | `then`, `let_value`, `let_error`, `let_stopped`, `upon_error`, `upon_stopped` | Transforming values, errors, and stopped completion into the next stage of a pipeline. |
| Scheduling and resource hand-off | `scheduler`, `schedule`, `on`, `read_env` | Binding work to a scheduler and reading the execution environment. |
| Joining and lifecycle control | `when_all`, `ensure_started`, `into_variant` | Combining multiple operations, eager start, and adapting result shapes for composition. |
| Stop/error adaptation | `stopped_as_error`, `stopped_as_optional` | Converting stopped completion into an error or optional-style result model. |
# Today vs. Emerging Surface
| Use case | Established route today | Emerging / broader route |
|---|---|---|
| Parallelizing a standard algorithm call | execution policies on algorithm overloads | Sender/scheduler composition does not replace the algorithm-policy entry point directly. |
| Representing asynchronous work as a composable object | `future`, `async` | senders, adaptors, schedulers |
| Expressing scheduler-aware sequencing and joining | Ad hoc thread/future orchestration | `on`, `when_all`, `ensure_started` |
| Controlling low-level memory ordering or lock-free behavior | atomic operations | Execution does not replace atomics; it composes work above that layer. |
# Scope Notes
| Boundary | Go here | Why |
|---|---|---|
| You are choosing an algorithm, not an execution abstraction | Algorithms | The algorithms hub owns search/sort/transform intent; execution only changes how that work is run or composed. |
| You need threads, locks, latches, futures, or stop tokens directly | Thread support | The thread library remains the canonical home for concrete thread and synchronization primitives. |
| You need memory ordering, atomic wait/notify, or lock-free coordination | Atomic operations | The atomic hub is the low-level synchronization route below execution composition. |
| You are reasoning about errors and exception categories rather than execution topology | Diagnostics | Execution adaptors can reshape error flow, but the error taxonomy lives elsewhere. |
# Practical Routes
I need a parallel algorithm policy
Start here when you already know the algorithm and only need to choose whether it runs sequentially, in parallel, or vectorized.
I need a sender pipeline
Use the sender route when work is represented as a composable execution object with value, error, and stopped completion channels.
I need the older future-based async model
Start from the thread/future route when your codebase is organized around `async`, `promise`, and `future` rather than sender composition.