Autocorrelated Optimize-via-Estimate (A-OVE) Model
- A-OVE is a stochastic optimization framework that directly integrates autocorrelated uncertainties using sufficient statistics to minimize out-of-sample cost.
- It employs recursive estimation methods, such as the innovation algorithm in VARMA models, to derive optimal decision rules for applications like portfolio optimization.
- The model outperforms traditional plug-in methods by incorporating parameter uncertainty directly, with demonstrated benefits in forecasting, reinforcement learning, and market impact modeling.
The Autocorrelated Optimize-via-Estimate (A-OVE) model is a data-driven stochastic optimization framework designed to directly incorporate and optimally handle autocorrelated uncertainties. Unlike "estimate-then-optimize" policies that first fit predictive models and subsequently optimize using point or distributional forecasts, A-OVE links the autocorrelation structure of observed data to the decision rule at the optimization stage. The framework is broadly applicable, with rigorous developments in time-series forecasting (Sun et al., 2021), reinforcement learning with temporally correlated controls (Szulc et al., 2020), market microstructure (Donier, 2012), and, most recently, finite-sample optimal stochastic programming under vector autoregressive and moving average (VARMA) processes (Wang et al., 2 Feb 2026). The central principle is out-of-sample optimality: A-OVE seeks a decision rule dependent on sufficient statistics such that, averaged over both the true uncertain parameters and new sample realizations, the resulting cost is minimized.
1. Foundations: Autocorrelated Uncertainty and Optimization
A-OVE is formulated for settings in which the exogenous uncertainties (usually multivariate time series) exhibit autocorrelation. In the canonical stochastic program (Wang et al., 2 Feb 2026), observations $y_t \in \mathbb{R}^d$ follow a VARMA($p$,$q$) model,

$$y_t = \sum_{i=1}^{p} \Phi_i\, y_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \Theta_j\, \varepsilon_{t-j}, \qquad \varepsilon_t \sim \mathcal{N}(0, \Sigma),$$

with unknown parameters $\theta = (\Phi_1,\dots,\Phi_p,\Theta_1,\dots,\Theta_q,\Sigma)$. The optimization goal is to select a decision $x$ (e.g., a portfolio allocation) from a feasible set $\mathcal{X}$ to minimize the expected cost $c(x, y_{T+1})$ under the unknown dynamics. The A-OVE principle dictates not simply plugging in parameter estimates, but finding a decision rule $x(S)$, functionally dependent on sufficient statistics $S$ of the observed sample, so as to minimize the expected out-of-sample cost integrated over both parameter uncertainty and the future realization:

$$x^{\star}(\cdot) \in \arg\min_{x(\cdot)} \int \mathbb{E}\big[c(x(S),\, y_{T+1}) \,\big|\, \theta\big]\, \pi(d\theta),$$

where $\pi$ is a prior (often a delta at the MLE or a regularized posterior).
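This setting can be made concrete with a minimal simulation. The sketch below assumes a standard VARMA(1,1) recursion and a toy mean-variance-style cost for a fixed allocation; both the parameter values and the cost form are illustrative assumptions, not the cited paper's exact specification.

```python
import numpy as np

def simulate_varma11(Phi, Theta, Sigma, T, rng):
    """Simulate y_t = Phi y_{t-1} + e_t + Theta e_{t-1}, e_t ~ N(0, Sigma)."""
    d = Sigma.shape[0]
    chol = np.linalg.cholesky(Sigma)
    y = np.zeros((T, d))
    y_prev = np.zeros(d)
    e_prev = np.zeros(d)
    for t in range(T):
        e = chol @ rng.standard_normal(d)
        y[t] = Phi @ y_prev + e + Theta @ e_prev
        y_prev, e_prev = y[t], e
    return y

rng = np.random.default_rng(0)
Phi = np.array([[0.5, 0.1], [0.0, 0.3]])    # stable AR part (illustrative)
Theta = np.array([[0.2, 0.0], [0.1, 0.2]])  # MA part (illustrative)
Sigma = 0.01 * np.eye(2)
returns = simulate_varma11(Phi, Theta, Sigma, T=500, rng=rng)

# Realized cost of a fixed portfolio x under a toy mean-variance objective
# (an assumption for illustration, not the paper's cost function)
x = np.array([0.6, 0.4])
gamma = 5.0
cost = -returns @ x + 0.5 * gamma * (returns @ x) ** 2
print(cost.mean())
```

A-OVE would choose `x` not as a fixed vector but as a function of sufficient statistics of the observed sample, evaluated against this kind of out-of-sample cost.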
2. Sufficient Statistics and Recursive Estimation
The essential technical step in A-OVE is representing the likelihood of the autocorrelated process in terms of minimal sufficient statistics, enabling tractable expectation and optimization. For VARMA, the joint sample likelihood factorizes into one-step conditional densities,

$$L(\theta;\, y_{1:T}) = \prod_{t=1}^{T} f\big(y_t \,\big|\, y_{1:t-1};\, \theta\big),$$

which depends on the data only through a set of sufficient statistics $S$; for VARMA(1,1), these are assembled from the one-step innovations and their conditional variances. Computation of the innovations and their conditional variances proceeds via a recursive "innovation algorithm," ensuring that $S$ (and thus the optimal decision) can be updated as new data arrive. These sufficient statistics enable the integral over parameter uncertainty to be performed efficiently.
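A minimal univariate sketch of the classical innovations algorithm (in the Brockwell-Davis form, computed from autocovariances) illustrates the recursion; the MA(1) parameter values below are illustrative assumptions.

```python
import numpy as np

def innovations(acvf, N):
    """Innovations algorithm: one-step prediction weights theta[n][j]
    and prediction MSEs v[n] from autocovariances acvf[h] = gamma(h)."""
    v = [acvf[0]]
    theta = [[0.0] * (N + 1) for _ in range(N + 1)]
    for n in range(1, N + 1):
        for k in range(n):
            s = sum(theta[k][k - j] * theta[n][n - j] * v[j]
                    for j in range(k))
            theta[n][n - k] = (acvf[n - k] - s) / v[k]
        v.append(acvf[0] - sum(theta[n][n - j] ** 2 * v[j]
                               for j in range(n)))
    return theta, v

# MA(1) check: y_t = e_t + 0.5 e_{t-1}, sigma^2 = 1, so
# gamma(0) = 1.25, gamma(1) = 0.5, gamma(h) = 0 for h >= 2
N = 50
acvf = np.zeros(N + 1)
acvf[0], acvf[1] = 1.25, 0.5
theta, v = innovations(acvf, N)
print(v[-1], theta[N][1])  # v_n -> sigma^2 = 1 and theta_{n,1} -> 0.5
```

The recursion is exactly the "update as new data arrive" mechanism: each new observation extends `theta` and `v` by one step without refitting from scratch.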
3. A-OVE Decision Rule: Functional Form and Portfolio Example
The A-OVE approach yields a decision rule that is a function of the computed sufficient statistics. In portfolio optimization with trading cost, the per-period objective depends non-linearly on the anticipated return, and the optimal A-OVE decision is expressed through coefficients that are integrals over posterior- or pseudo-posterior-weighted parameter values. Notably, the A-OVE rule differs from plug-in ("predict-then-optimize") and MLE-then-expectation approaches by directly integrating the non-linear effect of parameter uncertainty shaped by autocorrelations (Wang et al., 2 Feb 2026).
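A toy single-asset AR(1) example shows how parameter uncertainty enters the rule non-linearly. The quadratic cost, the Gaussian posterior over the AR coefficient, and all numeric values below are illustrative assumptions, not the paper's trading-cost model; the point is only that integrating over the posterior before optimizing yields a different (here, more conservative) decision than plugging in the posterior mean.

```python
import numpy as np

rng = np.random.default_rng(1)
# Assumed Gaussian posterior over the AR(1) coefficient phi
phi_samples = rng.normal(0.4, 0.15, size=20_000)
r_t, sigma2, gamma = 1.0, 0.04, 5.0

# Toy cost: E[-x r_{t+1} + (gamma/2) x^2 r_{t+1}^2], with
# r_{t+1} = phi r_t + eps, eps ~ N(0, sigma2).
# Minimizer: x* = E[r_{t+1}] / (gamma E[r_{t+1}^2]).

# Plug-in rule: treat the posterior mean as the true parameter
phi_hat = phi_samples.mean()
x_plugin = phi_hat * r_t / (gamma * (phi_hat**2 * r_t**2 + sigma2))

# A-OVE-style rule: average the cost over parameter uncertainty first,
# then optimize; the optimum now involves the posterior second moment
x_ove = (phi_samples.mean() * r_t
         / (gamma * ((phi_samples**2).mean() * r_t**2 + sigma2)))

print(x_plugin, x_ove)  # differ because E[phi^2] > (E[phi])^2
```

Because `E[phi^2]` exceeds `(E[phi])^2` by the posterior variance, the integrated rule shrinks the position relative to the plug-in rule, which is the kind of non-linear correction the A-OVE decision encodes.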
4. Connections to Other Domains: Forecasting, RL, and Market Microstructure
A-OVE or its core idea—using autocorrelation-aware sufficient statistics for downstream optimization—recurs in several advanced data-driven disciplines:
- Neural Network Forecasting: Jointly estimating the AR(1) residual coefficient $\rho$ and the network parameters $\theta$, A-OVE reformulates standard MSE minimization into an "innovation MSE" on whitened residuals,

  $$\mathcal{L}(\theta, \rho) = \sum_t \big[(y_t - \rho\, y_{t-1}) - (f(x_t;\theta) - \rho\, f(x_{t-1};\theta))\big]^2,$$

  and SGD is run on both the network parameters and the autocorrelation coefficient (Sun et al., 2021). Empirical critical values for residual AR(1) correlation establish when this joint approach significantly outperforms standard MLE.
- Reinforcement Learning: In continuous control, policies with autocorrelated (AR(1)) exploration noise are optimized via a trajectory-level estimator that accounts for the full temporal correlation in actions, resulting in smoother and more stable learning than with i.i.d. noise. Practical pseudocode and gradient estimators explicitly propagate autocorrelation parameters through the actor-critic updates (Szulc et al., 2020).
- Market Impact Models: The A-OVE paradigm is implicit in optimal trade execution models, where the autocorrelation of order flow directly determines the shape and decay of price impact functions, which in turn determine the solution to risk-adjusted execution schedules (Donier, 2012).
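The innovation-MSE idea from the forecasting bullet can be sketched numerically. The "model" below is a fixed stand-in prediction and the errors are synthetic AR(1) noise, so the whitened loss can be evaluated on a grid of candidate coefficients; all values are illustrative assumptions.

```python
import numpy as np

def innovation_mse(y, y_hat, rho):
    """MSE on AR(1)-whitened residual pairs, as in joint fitting of a
    forecaster and its residual autocorrelation coefficient rho."""
    resid = (y[1:] - rho * y[:-1]) - (y_hat[1:] - rho * y_hat[:-1])
    return np.mean(resid ** 2)

rng = np.random.default_rng(2)
T, true_rho = 20_000, 0.6
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = true_rho * eps[t - 1] + rng.normal(0, 0.1)
y_hat = rng.normal(size=T)   # stand-in for network predictions
y = y_hat + eps              # targets with AR(1)-correlated errors

losses = {rho: innovation_mse(y, y_hat, rho) for rho in (0.0, 0.3, 0.6, 0.9)}
print(min(losses, key=losses.get))  # the whitened loss favors rho near 0.6
```

In the joint estimation scheme, `rho` would be a trainable scalar updated by SGD alongside the network weights rather than grid-searched.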
5. Comparative Performance and Optimality
Empirically, A-OVE achieves superior out-of-sample performance (low regret with respect to a perfect-information oracle) in portfolio optimization with autocorrelated returns (Wang et al., 2 Feb 2026). Table 1 from the cited work illustrates that, across multiple dimensions, A-OVE's regret is consistently lower—by as much as an order of magnitude—than methods that use point forecasts or predicted distributions:
| Method | Time (s) | Predictive MSE | Rel. Regret (%) |
|---|---|---|---|
| A-OVE | 3.18 | -- | 0.18 |
| ETO | 4.45 | 21.21 | 0.31 |
| PTO-RNN | 0.27 | 21.32 | 27.00 |
| PTO-LSTM | 0.31 | 22.36 | 18.23 |
| FPtP-RF | 0.44 | 27.90 | 14.58 |
A critical insight is that lower predictive losses (MSE) do not necessarily translate to better decisions; A-OVE typically attains lower regret even when outperformed in forecasting accuracy, due to its direct optimization of the downstream objective (Wang et al., 2 Feb 2026).
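The decoupling of predictive accuracy from decision quality can be seen in a toy newsvendor-style example (illustrative only, not from the cited work): under an asymmetric cost, a biased forecast can beat the minimum-MSE forecast as a decision.

```python
import numpy as np

demands = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
cu, co = 4.0, 1.0  # underage cost exceeds overage cost

def avg_cost(order):
    """Average asymmetric shortage/excess cost of ordering `order`."""
    return np.mean(cu * np.maximum(demands - order, 0)
                   + co * np.maximum(order - demands, 0))

def mse(pred):
    return np.mean((demands - pred) ** 2)

pred_a = 10.0  # minimum-MSE forecast (the mean demand)
pred_b = 13.0  # biased high, toward the cost-optimal quantile

print(mse(pred_a), avg_cost(pred_a))  # 8.0 6.0 : lower MSE, higher cost
print(mse(pred_b), avg_cost(pred_b))  # 17.0 4.0 : higher MSE, lower cost
```

Forecast B more than doubles the MSE yet cuts the decision cost by a third, which is the regret-vs-MSE pattern the table above exhibits at scale.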
6. Robustness, Model Misspecification, and Extensions
A-OVE exhibits robust performance even under model misspecification. Experiments with misaligned VARMA orders in the generating process and fitted model show that A-OVE maintains near-best regret, while traditional methods degrade more sharply (Wang et al., 2 Feb 2026). This suggests that finite-sample decision optimality, as realized by A-OVE, can outweigh parametric misspecification.
A-OVE’s general structure admits natural extensions to more complex autocorrelation structures (e.g., higher-order AR, vector-valued moving averages, non-Gaussian innovations) and to domains including market microstructure (Donier, 2012), neural forecasting (Sun et al., 2021), and RL with autocorrelated exploration noise (Szulc et al., 2020). The central methodological anchor is always the functional dependence of the decision rule on sufficient statistics that encode the empirical autocorrelation structure, fused with direct minimization of out-of-sample expected loss or regret.
7. Practical Implementation and Limitations
Implementation of A-OVE typically requires recursive computation of sufficient statistics (innovations), numerical integration over parameter posteriors, and custom optimization routines aware of the target problem’s cost curvature and non-linearity. Computational overhead may arise, particularly in high-dimensional or non-linear settings, but tractable closed-form or recursively computable statistics are available for many VARMA and AR(1) cases. Empirical guidelines are established in the context of neural forecasting for when autocorrelation is severe enough to warrant explicit correction (e.g., residual AR(1) correlation significant at the 95% level) (Sun et al., 2021).
A-OVE does not guarantee superior predictive MSE, nor does it supplant strong domain priors; its advantage lies in finite-sample decision optimality with respect to the true, autocorrelated uncertainty structure. In settings with high model uncertainty or severe specification errors, performance may be sensitive to the quality of the sufficient statistics and the adequacy of the inferred autocorrelation model.
Key references: Donier (2012); Sun et al. (2021); Szulc et al. (2020); Wang et al. (2 Feb 2026).