The current specification of let_value requires implementations to keep the operation-state from the predecessor input sender alive until the let_value operation-state is destroyed.
This means that implementations cannot reuse the storage from the predecessor operation-state for the operation-state of the successor operation, increasing the overall storage size required for the let_value operation-state. This is related to #239 and #70.
Note that the current design is a deviation from what was implemented in libunifex, which does reuse the storage and does destroy the predecessor operation-state and construct the successor operation-state at the same location. The reference implementation in stdexec follows what has been specified.
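For concreteness, here is a minimal sketch of the two storage strategies, using hypothetical names: `PredOp` and `SuccOp` stand in for the operation-state types obtained from connect()-ing the predecessor and successor senders, and `Result` for the predecessor's result type.

```c++
#include <optional>
#include <variant>

// As currently specified: the predecessor operation-state is a subobject
// that lives until the whole let_value operation-state is destroyed, so the
// footprint is at least sizeof(PredOp) + sizeof(SuccOp).
template <class PredOp, class SuccOp, class Result>
struct let_value_op_as_specified {
  PredOp pred;                   // kept alive for the full lifetime of *this
  std::optional<Result> result;  // referenced by the successor's arguments
  std::optional<SuccOp> succ;    // constructed when the predecessor completes
};

// libunifex-style reuse: once the predecessor's result has been saved, the
// predecessor operation-state is destroyed and the successor is constructed
// in the same storage, so the footprint is roughly
// max(sizeof(PredOp), sizeof(SuccOp)) rather than the sum.
template <class PredOp, class SuccOp, class Result>
struct let_value_op_with_reuse {
  std::optional<Result> result;  // must outlive the successor operation
  std::variant<std::monostate, PredOp, SuccOp> child;  // one child at a time
};
```

Under the current wording only the first layout is conforming; the difference between the two is the storage cost this issue is about.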
Is this design/behaviour intended? Or is it just a side-effect of implementing it in terms of basic-sender, which does not support manual management of the lifetimes of child operation-states?
The semantics of when the child operation-state is destroyed are observable by the user, so we should be clear about what is intended here. The P2300R10 paper does not discuss this aspect of the design at all: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p2300r10.html#design-sender-adaptor-let
One option to consider for addressing this question would be to keep the existing semantics for let_value and introduce a new algorithm statement() (see #239 for details) that ensures destruction of the child operation-state before the operation completes (copying the result to the stack if necessary).
This would let you write the equivalent of the coroutine expression:

```c++
co_await f(co_await src)
```

as

```c++
let_value(src, f)
```

and write the equivalent of:

```c++
auto x = co_await src;
co_return co_await f(x);
```

as

```c++
statement(let_value(statement(src), f))
```
This second option would give the observable semantics that resources held by the operation-state of src are guaranteed to be destroyed before f is invoked and the successor operation is started.
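To make that ordering concrete, here is a rough sketch of the completion path a statement() implementation might use. All of the names here (statement_op, statement_receiver) are invented for illustration; the actual design is whatever #239 ends up specifying, and the error/stopped channels are omitted.

```c++
#include <execution>  // assuming a C++26-style std::execution
#include <optional>
#include <tuple>
#include <utility>

// Hypothetical operation-state for statement(): holds the child
// operation-state in storage that can be destroyed early.
template <class ChildOp, class Receiver>
struct statement_op {
  std::optional<ChildOp> child;  // reset() before completing downstream
  Receiver rcvr;
};

// Hypothetical receiver connected to the child sender.
template <class ChildOp, class Receiver>
struct statement_receiver {
  using receiver_concept = std::execution::receiver_t;

  statement_op<ChildOp, Receiver>* op;

  template <class... Values>
  void set_value(Values&&... vs) && noexcept {
    // 1. Move the child's results out onto the stack...
    std::tuple<std::decay_t<Values>...> tmp(std::forward<Values>(vs)...);
    // 2. ...destroy the child operation-state, releasing the resources it
    //    holds, before the operation is considered complete...
    op->child.reset();
    // 3. ...and only then propagate the results downstream.
    // (set_error/set_stopped would do the same dance.)
    std::apply(
        [&](auto&... unpacked) {
          std::execution::set_value(std::move(op->rcvr),
                                    std::move(unpacked)...);
        },
        tmp);
  }
};
```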
The default behaviour of let_value used in this way with the statement algorithm would not necessarily be optimised to reuse the storage of the first operation-state for the second operation-state. It would also, by default, require moving the src result twice: the statement algorithm would move the result onto the stack, and then let_value would move it into its operation-state.
However, we could possibly specify the semantics in such a way as to allow a QoI optimisation where the implementer specialises the let_value algorithm for the case where the src sender is a sender returned from the statement() algorithm. In that case, the let_value algorithm could implement the statement() behaviour of copying the result before destroying the child operation-state itself, moving the result directly into the let_value operation-state rather than first onto the stack. Then, since let_value knows it can destroy the first child operation-state after copying the value, it can reuse that storage for the second operation-state.
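For example, the compile-time dispatch could look something like the sketch below. This is purely illustrative: `is_statement_sender_v` and the two placeholder sender types are invented here, not proposed API.

```c++
#include <type_traits>
#include <utility>

// Hypothetical opt-in trait: statement() would mark its returned sender,
// e.g. by specialising this variable template.
template <class Sender>
inline constexpr bool is_statement_sender_v = false;

// Placeholder types standing in for the two implementation strategies; a
// real implementation would build the corresponding operation-states.
struct reusing_let_value_sender { /* ... */ };
struct default_let_value_sender { /* ... */ };

template <class Sender, class Fn>
auto let_value(Sender&& src, Fn&& f) {
  if constexpr (is_statement_sender_v<std::decay_t<Sender>>) {
    // src is known to save its result and destroy its child
    // operation-state early, so the let_value operation-state can move the
    // result in directly and reuse the freed storage for the successor.
    return reusing_let_value_sender{};
  } else {
    // As currently specified: keep the predecessor operation-state alive
    // until the let_value operation-state is destroyed.
    return default_let_value_sender{};
  }
}
```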
So, rather than having both a let_value() algorithm and a new let_value_statement() algorithm with early-destruction-of-operation-state semantics, we can just make let_value(statement(x), f) the way to express this.