
Execution-Aware Policy Optimization

Updated 24 January 2026
  • Execution-Aware Policy Optimization is a framework that explicitly models execution delays, costs, and stochasticity to improve robust decision-making in real-world systems.
  • It integrates delay-aware MDPs, cost-augmented rewards, and policy-dependent downstream evaluation to address non-ideal actuation, transaction costs, and latency.
  • Empirical studies in robotics, trading, and combinatorial tasks demonstrate that these methods markedly outperform traditional approaches in environments with execution friction.

Execution-aware policy optimization refers to a family of techniques that explicitly model and optimize the effects of delayed, stochastic, or costly execution in sequential decision problems, with the goal of training policies that achieve high real-world performance under non-ideal actuation, delayed control, transaction costs, or other forms of execution friction. Unlike classical settings that assume instantaneous, cost-free action, execution-aware approaches integrate execution dynamics, cost structures, or delays directly into the policy learning and evaluation pipeline, yielding policies that are robust and applicable under operational constraints as found in robotics, algorithmic trading, and combinatorial optimization.

1. Formal Definitions and Problem Settings

Execution-aware policy optimization methods are motivated by the divergence between simulated (idealized) environments and real systems, where decisions may be delayed, incur nontrivial costs, or be only partially realized. These approaches extend standard MDP or control frameworks along several axes:

  • Delay-aware MDPs: The canonical state–action–reward tuple is augmented with a latency or delay model, creating a stochastic or deterministic mapping from action decision to action execution. For example, in robotics, an inference lag δ > 0 implies that an action chosen at state s_t is executed at state s_{t+δ}, which must be accounted for in both behavior cloning and reinforcement learning (Liao et al., 8 Dec 2025, Valensi et al., 2024).
  • Execution cost-augmented reward: In quantitative finance and portfolio management, policies must optimize after-cost returns, where the reward at step t explicitly penalizes transaction costs that may be proportional, impact-based, or regime-dependent. Execution-aware RL frameworks such as FR-LUX encode these execution costs directly in the reward function and policy optimization objectives (Zhang, 3 Oct 2025, Zhou et al., 2024, Fang et al., 2021).
  • Policy-dependent downstream optimization: In combinatorial settings, the effect of a policy is measured not as the average of individual causal interactions, but as the optimal value of a subsequent problem (e.g., linear assignment, network flow) induced by the policy’s intervention, leading to the concept of policy-dependent optimization response (Guo et al., 2022).
  • Autonomous system view: Treating the policy as part of the system kernel, where the resulting system is itself an autonomous Markov chain, enables unified optimization over policy and execution parameters (2506.08340).

These formalizations all share the principle that the operational environment and execution process are part of the optimization problem, not just passive output channels.
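As a toy illustration of the delay-aware MDP idea, the sketch below wraps one-dimensional integrator dynamics so that each action takes effect δ steps after it is chosen; augmenting the observation with the queue of pending actions restores the Markov property. The class, dynamics, and reward are illustrative, not taken from any of the cited papers.

```python
from collections import deque

class DelayedExecutionEnv:
    """Toy delay-aware MDP: an action chosen at step t is executed
    delta steps later. The augmented observation (state, pending queue)
    is Markov even though raw state-action pairs are not."""

    def __init__(self, delta, default_action=0.0):
        self.delta = delta
        # The execution pipeline starts filled with a default action
        # (e.g. zero torque) until the first chosen action arrives.
        self.queue = deque([default_action] * delta)
        self.state = 0.0

    def step(self, action):
        self.queue.append(action)        # enqueue the new decision
        executed = self.queue.popleft()  # action chosen delta steps ago
        self.state += executed           # trivial integrator dynamics
        reward = -abs(self.state)        # drive the state toward zero
        # Delay-aware observation: current state plus pending actions
        return (self.state, tuple(self.queue)), reward
```

With `delta=2`, the first two chosen actions have no effect until the pipeline drains, which is exactly the mismatch a delay-unaware learner fails to model.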

2. Methodological Frameworks

Execution-aware policy optimization encompasses a diverse suite of algorithms tailored to specific operational challenges:

A. Delay-Aware Learning (Deterministic and Stochastic Delays):

  • Trajectory compression: For known delays, expert demonstration trajectories are compressed to account for lag, reducing horizon length such that an imitation policy anticipates execution (Liao et al., 8 Dec 2025).
  • Policy conditioning: Inclusion of the measured or estimated delay δ as a policy input, achieved by state-delay concatenation or embedding, enables adaptation to varying latency at inference (Liao et al., 8 Dec 2025).
  • Model-based tree search with action queues: For stochastic delays, the agent maintains both action and delay queues, and effective actions are dynamically mapped to their execution time. DEZ leverages a learned world model to “hallucinate” the future state past the delay horizon, integrating with Monte Carlo tree search (Valensi et al., 2024).
  • Markovian sufficiency: For delays observable at decision time, it is provably sufficient to optimize over Markov policies conditioned on the state and delay queue rather than the full history (Valensi et al., 2024).
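The first two ideas above can be sketched in a few lines. This is a hedged, simplified reading: the exact state–action alignment used by DA-DP may differ, and both helper functions here are hypothetical.

```python
def compress_trajectory(states, actions, delay_steps):
    """Trajectory compression sketch: pair state s_t with the action the
    expert took delay_steps later, so a cloned policy anticipates the
    execution lag (simplified alignment, for illustration only)."""
    if delay_steps == 0:
        return list(zip(states, actions))
    return list(zip(states[:-delay_steps], actions[delay_steps:]))

def delay_conditioned_input(state, delta):
    """Policy conditioning sketch: state-delay concatenation appends the
    measured latency as an extra feature, letting a single policy adapt
    to varying delays at inference time."""
    return list(state) + [delta]
```

A trajectory of four states with a one-step delay yields three training pairs, each state matched with the action executed one step after it was observed.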

B. Execution-Cost-Aware RL and Control:

  • Microstructure-consistent reward: Directly model per-step execution costs as convex functions (a sum of proportional ||Δw||_1 and quadratic impact terms) embedded in the RL reward (Zhang, 3 Oct 2025). Cost parameters are calibrated from market proxies for regime dependence.
  • Trust region in trade/actuation space: Instead of policy logits, constrain the norm of action changes (trade or control adjustments) to limit high-cost swings, leading to stable and realistic policy updates (Zhang, 3 Oct 2025).
  • Regime-conditioned policy specialization: Policies are conditioned on exogenous states (e.g., volatility–liquidity regimes) captured by discrete labels, enabling specialization and robustness across operational regimes without data fragmentation (Zhang, 3 Oct 2025).
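A minimal sketch of the first two mechanisms, assuming portfolio weights as the action space; the cost coefficients and the L1 turnover cap are illustrative stand-ins for the calibrated quantities described above.

```python
import numpy as np

def after_cost_reward(gross_return, w_new, w_old, c_prop=0.001, c_impact=0.01):
    """Cost-augmented reward sketch: gross PnL minus a proportional
    (L1) cost and a quadratic impact cost on the trade vector.
    Coefficients here are illustrative, not calibrated."""
    dw = w_new - w_old
    cost = c_prop * np.abs(dw).sum() + c_impact * float(dw @ dw)
    return gross_return - cost

def project_trade(w_new, w_old, max_turnover=0.1):
    """Trade-space trust region sketch: cap the L1 norm of the weight
    change, scaling an oversized proposed trade back toward the old
    weights so a single update cannot incur an extreme cost."""
    dw = w_new - w_old
    l1 = np.abs(dw).sum()
    if l1 <= max_turnover:
        return w_new
    return w_old + dw * (max_turnover / l1)
```

Constraining in trade space rather than on policy logits bounds the realized turnover directly, which is the quantity the cost model actually penalizes.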

C. Policy Distillation with Oracle Teachers:

  • Privileged-policy distillation: An optimal execution strategy is first learned under perfect information by an oracle teacher. A deployable student policy then minimizes a distillation loss (cross-entropy to the oracle's action) alongside standard RL or actor-critic objectives (Fang et al., 2021).
  • Partial observability bridging: The distillation paradigm facilitates learning deployment-robust policies when direct access to privileged simulator state is not possible at runtime (Fang et al., 2021).
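The combined student objective can be sketched as follows; the weighting scheme and function names are illustrative, not the exact formulation of the cited work.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over action logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def student_loss(student_logits, teacher_action, rl_loss, lam=0.5):
    """Distillation sketch: standard RL loss plus cross-entropy between
    the student's action distribution and the privileged teacher's
    chosen action. lam balances imitation vs. reward (illustrative)."""
    probs = softmax(student_logits)
    ce = -np.log(probs[teacher_action] + 1e-12)
    return rl_loss + lam * ce
```

At deployment only the student runs, on realistic observations; the teacher (and its privileged simulator state) is needed only during training.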

D. Continuous-Time and Analytical Approaches:

  • Martingale-based policy evaluation: For continuous-time execution (e.g., volume-weighted average price tracking under impact), RL value functions are characterized by martingale conditions; actor-critic updates are derived via Feynman-Kac representations (Zhou et al., 2024).
  • Adaptive dynamic programming (ADP): When the environment is well-modelled, HJB-based actor-critic with analytic gradients allows rapid convergence (Zhou et al., 2024).

E. Off-Policy Evaluation with Downstream Optimization:

  • Bias-corrected downstream evaluation: When execution-aware utility is a (random) optimal value of a downstream LP/flow problem, plug-in policy evaluation is biased due to minimization inside expectation. Analytical bias corrections (perturbation-based estimators) are required to enable unbiased policy search (Guo et al., 2022).
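The source of the plug-in bias is a Jensen-type gap: a minimum inside the expectation pulls the estimate below the value of the downstream problem at the expected inputs. A toy Monte Carlo check, with a two-route "cheapest path" standing in for the induced LP/flow problem (the setup is illustrative):

```python
import random

def downstream_value(costs):
    """Downstream optimization stand-in: pick the cheaper of two routes
    (a toy surrogate for an LP/flow solve induced by the policy)."""
    return min(costs)

random.seed(0)
# Two noisy arc costs, each with mean 1.0
samples = [(1.0 + random.gauss(0, 0.5), 1.0 + random.gauss(0, 0.5))
           for _ in range(100_000)]

# Naive plug-in estimate: average of per-sample optimal values
plug_in = sum(downstream_value(s) for s in samples) / len(samples)

# Downstream value evaluated at the estimated mean costs
mean_a = sum(a for a, _ in samples) / len(samples)
mean_b = sum(b for _, b in samples) / len(samples)
value_at_mean = downstream_value((mean_a, mean_b))

# The gap (here roughly -0.28) is the bias that perturbation-based
# corrections are designed to remove.
bias = plug_in - value_at_mean
```

Because minimization systematically favors downward noise realizations, the naive estimator is optimistic about every candidate policy, distorting policy search unless corrected.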

F. Dynamical System Optimization (DSO):

  • Unified cost-functional optimization: Views all execution parameters, policies, and system components as arguments of a global DSO cost. Provides algorithms for first- and second-order policy gradients, Hessians, Fisher metrics, and off-policy Z-learning (2506.08340).

3. Architectural and Algorithmic Details

Execution-aware frameworks employ architectures and algorithms with explicit execution modeling:

| Method | Key Modeling Element | Core Algorithmic Feature |
| --- | --- | --- |
| DA-DP (Liao et al., 8 Dec 2025) | State-delay concatenation, compressed BC data | DDPM-style diffusion; δ-conditioned U-Net |
| DEZ (Valensi et al., 2024) | Delayed action/effective queue, latent state rollout | MCTS with delay-aware rollouts, latent world model |
| FR-LUX (Zhang, 3 Oct 2025) | Cost-aware reward, regime conditioning | PPO with trade-space trust region, regime MoE |
| Oracle distillation (Fang et al., 2021) | Teacher with privileged info; student on realistic observations | PPO with distillation loss |
| VWAP continuous RL (Zhou et al., 2024) | SDEs with impact/delay, entropy regularization | Martingale actor-critic; HJB-ADP |
| Off-policy evaluation (Guo et al., 2022) | Policy-dependent LP value, plug-in bias | Perturbation estimator, subgradient descent |
| DSO (2506.08340) | Global cost on (policy, execution, system) parameters | DSO gradient, Hessian, proximal methods |

Architectural innovations include δ- or regime-embeddings in state representations, both shared-trunk and mixture-of-experts parameterizations for regime-aware specialization, and latent-space planning for tree-search algorithms under execution uncertainty.

4. Theoretical Guarantees and Convergence Properties

Execution-aware policy optimization methods are supported by a series of theoretical results:

  • Delay-optimal Markov property: For stochastic delay processes, any history-dependent policy can be replaced by an execution-aware Markov policy without loss of optimality (Valensi et al., 2024).
  • Policy improvement under trust region: Monotonic improvement can be guaranteed if the updated policy remains close to the previous one in KL-divergence and trade/action-change norm (Zhang, 3 Oct 2025).
  • Turnover and inaction-band bounds: Explicit turnover bounds as a function of transaction cost penalty and reward span, and the existence of "dead zones" (inaction bands) where optimal behavior is to do nothing if the expected improvement is below the transaction threshold (Zhang, 3 Oct 2025).
  • Convergence of RL and ADP: In continuous-time execution with regularity conditions on drift and reward, martingale-orthogonality actor-critic iterates converge to optimal controls; ADP achieves rapid convergence under accurate parameterization (Zhou et al., 2024).
  • Bias-corrected estimators: Unbiased evaluation (and consequently, optimization) of policy-dependent downstream objectives is achieved through a perturbation estimator; convergence rates are O(n^{-1/2} + K^{-1/2}) for n samples and K optimization steps (Guo et al., 2022).
  • Unified DSO stationarity: DSO recovers all traditional RL, cloning, identification gradients under a single cost and recursion, with natural gradient, Hessian, and surrogate-based policy iteration framed uniformly (2506.08340).
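The inaction-band result above has a simple operational reading: under a proportional transaction cost, the optimal action is to do nothing whenever the expected per-unit gain of trading falls below the per-unit cost. A minimal sketch (function and thresholding are illustrative, with trade magnitude left abstract):

```python
def optimal_trade_direction(expected_gain_per_unit, cost_per_unit):
    """Inaction-band sketch: inside the 'dead zone' where the expected
    per-unit gain does not exceed the proportional cost, the optimal
    trade is zero; outside it, trade in the direction of the gain."""
    if abs(expected_gain_per_unit) <= cost_per_unit:
        return 0.0   # hold: trading would cost more than it earns
    return 1.0 if expected_gain_per_unit > 0 else -1.0
```

The width of the band grows with the cost penalty, which is the mechanism behind the explicit turnover bounds cited above.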

5. Empirical Findings and Benchmarks

Extensive experiments across robotic manipulation, Atari reinforcement learning, algorithmic trading, and portfolio management validate execution-aware methods:

  • Robotics/DA-DP: Substantial robustness to delay across varied tasks (e.g., rolling-ball, ping-pong, humanoid pick-and-place). Success rates for DA-DP remain 0.96 at δ = 0.05 s (vs. 0.20 for a delay-unaware policy) and 0.72 at δ = 0.10 s (vs. 0.01), consistently outperforming delay-unaware baselines (Liao et al., 8 Dec 2025).
  • Tree-search RL/DEZ: On 15 Atari domains, DEZ achieves top scores in 39/45 constant-delay and 42/45 stochastic-delay regimes, showing marked robustness over delay-oblivious and even constant-delay-specialized baselines (Valensi et al., 2024).
  • Finance/FR-LUX: In a 4 × 5 regime–cost grid, FR-LUX achieves the highest after-cost Sharpe ratio (with tight confidence intervals), the flattest cost–performance slope, and significant per-scenario improvements after multiple-testing correction (Zhang, 3 Oct 2025).
  • Order execution/Oracle distillation: Oracle Policy Distillation improves price advantage by nearly +1 bp over ablated and RL baselines, with superior learning speed (Fang et al., 2021).
  • VWAP continuous RL: Martingale-orthogonality actor-critics converge to the analytic optimum, with measured final MSE tending to zero; all model-free variants outperform a naive TWAP strategy in high-impact environments (Zhou et al., 2024).
  • Off-policy optimization: Perturbation-based estimators deliver unbiased policy search for downstream LPs; overall regret matches theoretical convergence guarantees (Guo et al., 2022).

6. Broader Insights, Design Guidelines, and Applicability

A number of robust design heuristics and generalizations emerge:

  • Explicitly measure and model operational delays (δ) or execution variables.
  • Condition policy inputs on these variables, whether they be delay scalars, cost regimes, or exogenous diagnostics.
  • Compress or shift data (trajectories, reward accounting) in accordance with the real-world delay/lag or cost model.
  • Use bias-corrected estimators for any scenario where policy-dependent downstream optimizations are present.
  • Embed domain structure—whether regime, delay, or cost geometry—directly into architecture and objective, eschewing naive averaging or unstructured learning.
  • Extend patterns beyond specific models (e.g., DA-DP’s δ-augmentation applies to transformers/autoregressive policies, not only diffusion).
  • Evaluate policies as a function of operational variables (delay, cost), plotting robustness curves instead of relying on point metrics.
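The last guideline can be sketched as a simple evaluation sweep: score the policy across a grid of operational values and report the whole curve rather than one number. The evaluator below is a hypothetical delay-sensitive success rate, purely for illustration.

```python
def robustness_curve(evaluate_policy, delays):
    """Evaluate a policy across a sweep of operational delays and
    return (delay, score) pairs instead of a single point metric.
    evaluate_policy is any callable mapping delay -> success rate."""
    return [(d, evaluate_policy(d)) for d in delays]

# Hypothetical success-rate model that degrades linearly with delay
toy_eval = lambda d: max(0.0, 1.0 - 5.0 * d)
curve = robustness_curve(toy_eval, [0.0, 0.05, 0.10])
```

Comparing such curves across methods makes the degradation profile (not just the zero-delay score) the object of evaluation, which is what distinguishes execution-aware from conventional benchmarking.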

Execution-aware policy optimization is broadly applicable to any domain in which the realized policy utility is mediated by delayed, costly, or constrained execution, and where adapting to this reality is essential for real-world deployability and reliability.

7. Unifying Conceptual and Algorithmic Perspectives

The execution-aware paradigm can be abstracted within the Dynamical System Optimization (DSO) framework (2506.08340). Here, all parameters—of policy, system, sensors, or execution kernel—are treated as a single argument of a cost functional L(x, θ), and learning consists of optimizing the resulting autonomous dynamical system. This viewpoint allows transfer of all gradient-based, natural-gradient, and surrogate-based optimization techniques developed for RL to a superset of applications including behavioral cloning, control with system identification, state estimator learning, and mechanism design, further reinforcing execution-awareness as a pillar of modern sequential decision optimization.
