Execution library

Curated hub for C++ execution: parallel algorithm policies today, sender/scheduler execution in C++26, and its boundaries with algorithms, threads, and futures.

The C++ execution area currently spans two distinct layers. The established C++17 layer is the execution-policy model used by parallel algorithms. The newer C++26 layer is the sender/scheduler execution library, which introduces composable execution objects such as `just`, `then`, `on`, and `when_all`. This hub makes that split explicit so you can choose the right surface first.

Use this page when your question is about how work is scheduled or composed, not about the algorithm operation itself. Keep algorithms as the destination for the actual algorithm families, thread support for thread/latch/future primitives, and atomics for low-level synchronization.

# Start Here

## Use parallel algorithm execution policies

Start here when you already have a standard algorithm and need to choose between `seq`, `par`, `par_unseq`, or `unseq` execution behavior.

## Create sender-based execution pipelines

Use the sender route when the problem is composing asynchronous work through values, completion channels, and schedulers rather than choosing an algorithm policy.

## Work with schedulers and run loops

Choose this route when execution resources and scheduler hand-off are the main abstraction boundary.

## Transform or branch completion flows

Use the sender-adaptor route when you need value transformation, error handling, stop handling, or environment-aware composition.

## Combine multiple asynchronous operations

Start here when your execution problem involves fan-out, joining results, eager start, or mapping stop into optional/error channels.

## Use older async/future-based execution

If your codebase still models async work around futures and `std::async`, start from the thread support side and treat execution senders as a newer adjacent model.

# Execution Model Map

| If you need to... | Start with | Why | Common adjacent route |
|---|---|---|---|
| Run an existing standard algorithm under a policy | execution policies | The policy model modifies how algorithms execute; it does not define a general execution graph abstraction. | algorithms |
| Construct an execution pipeline from values and continuations | `just`, `then`, `let_value` | The sender model is about composable execution objects and completion channels. | future |
| Bind work to a scheduler or execution resource | `scheduler` and `on` | This is the route for scheduler-aware composition rather than raw thread ownership. | thread support |
| Combine multiple asynchronous operations into one result | `when_all` | `when_all` is the main joining/composition surface in the sender-based model. | async |
| Map stop or error completion into a different output model | `stopped_as_error`, `stopped_as_optional`, `upon_error` | These adaptors shape completion behavior rather than doing the underlying work themselves. | diagnostics |
| Use the older result-carrying async model | `async` and `future` | That model remains important, but it belongs primarily to thread support rather than the newer execution library. | thread support |

# Execution Families

| Family | Core destinations | Use it for |
|---|---|---|
| Execution policies | policy tags, `is_execution_policy`, `<execution>` | Choosing sequential, parallel, vectorized, or combined execution modes for standard algorithms. |
| Sender factories and completion seeds | `just`, `just_error`, `just_stopped` | Creating sender-based execution values that represent normal, error, or stopped completion. |
| Sender adaptors and transformation | `then`, `let_value`, `let_error`, `let_stopped`, `upon_error`, `upon_stopped` | Transforming values, errors, and stopped completion into the next stage of a pipeline. |
| Scheduling and resource hand-off | `scheduler`, `schedule`, `on`, `read_env` | Binding work to a scheduler and reading the execution environment. |
| Joining and lifecycle control | `when_all`, `ensure_started`, `into_variant` | Combining multiple operations, eager start, and adapting result shapes for composition. |
| Stop/error adaptation | `stopped_as_error`, `stopped_as_optional` | Converting stopped completion into an error or optional-style result model. |

# Today vs. Emerging Surface

| Use case | Established route today | Emerging / broader route |
|---|---|---|
| Parallelizing a standard algorithm call | execution policies on algorithm overloads | Sender/scheduler composition does not replace the algorithm-policy entry point directly. |
| Representing asynchronous work as a composable object | `future`, `async` | senders, adaptors, schedulers |
| Expressing scheduler-aware sequencing and joining | Ad hoc thread/future orchestration | `on`, `when_all`, `ensure_started` |
| Controlling low-level memory ordering or lock-free behavior | atomic operations | Execution does not replace atomics; it composes work above that layer. |

# Scope Notes

| Boundary | Go here | Why |
|---|---|---|
| You are choosing an algorithm, not an execution abstraction | Algorithms | The algorithms hub owns search/sort/transform intent; execution only changes how that work is run or composed. |
| You need threads, locks, latches, futures, or stop tokens directly | Thread support | The thread library remains the canonical home for concrete thread and synchronization primitives. |
| You need memory ordering, atomic wait/notify, or lock-free coordination | Atomic operations | The atomic hub is the low-level synchronization route below execution composition. |
| You are reasoning about errors and exception categories rather than execution topology | Diagnostics | Execution adaptors can reshape error flow, but the error taxonomy lives elsewhere. |

# Practical Routes

## I need a parallel algorithm policy

Start here when you already know the algorithm and only need to choose whether it runs sequentially, in parallel, or vectorized.

## I need a sender pipeline

Use the sender route when work is represented as a composable execution object with value, error, and stopped completion channels.

## I need the older future-based async model

Start from the thread/future route when your codebase is organized around `async`, `promise`, and `future` rather than sender composition.