Coefficient of Prescriptiveness
- Coefficient of prescriptiveness is defined as the normalized improvement in cost reduction achieved by a contextual policy over a static reference relative to an optimal clairvoyant benchmark.
- It decomposes the expected cost into the minimal achievable cost plus a regret term, linking the PCR to classical measures such as the R² metric of predictive models.
- Its robust formulation via convex reformulations and LP-oracle methods enables effective optimization under distribution shifts while ensuring policy stability.
The coefficient of prescriptiveness, also known as the prescriptiveness competitive ratio (PCR), is a universal, unitless performance measure that quantifies the value of contextual (data-driven) decision-making relative to a static reference and an anticipative (clairvoyant) benchmark. Introduced to evaluate the practical prescriptive power of side information in stochastic optimization, the PCR expresses the proportion of the maximum attainable reduction in expected cost (regret gap) achieved by a given contextual policy. This framework has theoretical guarantees on admissible values, direct ties to classical statistical metrics, and supports robust optimization formulations that address model uncertainty and data distribution shifts (Poursoltani et al., 2023).
1. Formal Definition and Properties
Let $c(x, \xi)$ be a cost function, where $x \in \mathcal{X}$ is a feasible decision and $\xi$ is a random vector with distribution $\mathbb{P}$. Decisions are selected according to a policy $\pi$, with $z$ representing observed side information. The clairvoyant (fully anticipative) solution is

$$x^*(\xi) \in \arg\min_{x \in \mathcal{X}} c(x, \xi),$$

and a static reference decision is $\bar{x}$ (e.g., the solution minimizing $\mathbb{E}_{\mathbb{P}}[c(x, \xi)]$ while ignoring $z$).

The prescriptiveness competitive ratio of policy $\pi$ relative to $\bar{x}$ is

$$\mathrm{PCR}(\pi) \;=\; \frac{\mathbb{E}_{\mathbb{P}}[c(\bar{x}, \xi)] - \mathbb{E}_{\mathbb{P}}[c(\pi(z), \xi)]}{\mathbb{E}_{\mathbb{P}}[c(\bar{x}, \xi)] - \mathbb{E}_{\mathbb{P}}[c(x^*(\xi), \xi)]}.$$
By construction, the PCR is unitless and gives the fraction of the improvement available over $\bar{x}$ (relative to clairvoyance) that is delivered by $\pi$. When the denominator is positive, it holds (Lemma 1) that $\mathrm{PCR}(\pi) \le 1$ for any $\pi$: a value of 1 corresponds to optimality (the clairvoyant policy), 0 to parity with the static rule, and negative values to underperforming it.
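Concretely, the empirical PCR can be computed from realized costs alone. The sketch below uses a toy contextual newsvendor setup (all names, parameters, and data are illustrative, not from the paper): the contextual policy orders the conditional 0.8-quantile of demand, the static reference orders the marginal 0.8-quantile, and the clairvoyant matches demand exactly.

```python
import numpy as np

def empirical_pcr(policy_costs, static_costs, clairvoyant_costs):
    """Empirical PCR: fraction of the static-to-clairvoyant cost gap closed."""
    num = np.mean(static_costs) - np.mean(policy_costs)
    den = np.mean(static_costs) - np.mean(clairvoyant_costs)
    return num / den

rng = np.random.default_rng(0)
n = 20_000
z = rng.normal(0.0, 1.0, n)                      # observed side information
xi = 10.0 + 2.0 * z + rng.normal(0.0, 1.0, n)    # uncertain demand, correlated with z

def cost(q, d, b=2.0, h=0.5):
    # Newsvendor cost: underage penalty b, overage penalty h.
    return b * np.maximum(d - q, 0.0) + h * np.maximum(q - d, 0.0)

q_static = np.quantile(xi, 0.8)        # context-free order at critical ratio b/(b+h) = 0.8
q_policy = 10.0 + 2.0 * z + 0.8416     # conditional 0.8-quantile (N(0,1) quantile ~ 0.8416)

pcr = empirical_pcr(cost(q_policy, xi), cost(q_static, xi), cost(xi, xi))
assert 0.0 < pcr < 1.0                 # side information closes part of the gap
```

Because the contextual policy removes the variance explained by $z$, its PCR is strictly between 0 and 1 here.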
2. Interpretation and Special Cases
The expected cost decomposition for a contextual policy is

$$\mathbb{E}[c(\pi(z), \xi)] \;=\; \mathbb{E}[c(x^*(\xi), \xi)] + R(\pi),$$

where $R(\pi) := \mathbb{E}[c(\pi(z), \xi)] - \mathbb{E}[c(x^*(\xi), \xi)] \ge 0$ denotes the “regret” of $\pi$ versus clairvoyance, so that $\mathrm{PCR}(\pi) = 1 - R(\pi)/R(\bar{x})$. The PCR thus compares the regret of the contextual rule to the regret incurred by $\bar{x}$, acting as a normalized regret measure: it is monotonically decreasing in the regret of $\pi$ and increasing in the regret of $\bar{x}$.
When the expected cost is replaced by empirical averages (empirical distribution $\hat{\mathbb{P}}$), the PCR reduces to its sample-based form used in practice. In the special case $c(x, \xi) = (x - \xi)^2$ with a one-dimensional decision, the clairvoyant cost is zero and the static decision is the sample mean, so the PCR coincides with the R² metric for predictive models.
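This identity can be verified numerically (toy data, illustrative only): under squared loss, the sample PCR of a predictor equals the usual R² computation exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.normal(5.0, 3.0, size=5_000)        # quantity to predict
pred = xi + rng.normal(0.0, 1.5, xi.size)    # an imperfect predictor of xi

msq = lambda x: np.mean((x - xi) ** 2)       # mean squared cost c(x, xi) = (x - xi)^2

# PCR under squared loss: static decision = sample mean, clairvoyant cost = 0.
static = msq(np.full_like(xi, xi.mean()))
pcr = (static - msq(pred)) / (static - 0.0)

# Classical R^2 of the predictor.
r2 = 1.0 - np.sum((pred - xi) ** 2) / np.sum((xi - xi.mean()) ** 2)

assert abs(pcr - r2) < 1e-9
```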
3. Distributionally Robust Prescriptiveness Optimization
In data-driven settings, the true distribution $\mathbb{P}$ may not be known and could shift away from an empirical estimate $\hat{\mathbb{P}}$. Consequently, policies seek to maximize the worst-case PCR over an ambiguity set $\mathcal{D}$ of distributions:

$$\max_{\pi \in \Pi} \;\inf_{\mathbb{Q} \in \mathcal{D}} \; \mathrm{PCR}_{\mathbb{Q}}(\pi).$$

The distributionally robust prescriptiveness optimization problem (DR–PCR) seeks this maximum over some class of policies $\Pi$. Lemma 1 shows the optimal DR–PCR objective lies in $[0, 1]$ when the static decision $\bar{x}$ is admissible as a constant policy in $\Pi$. Lemma 2 states that when $\mathcal{D} = \{\hat{\mathbb{P}}\}$ and $\Pi$ is unrestricted, the contextual stochastic optimization (CSO) solution maximizes the worst-case PCR.
4. Tractable Reformulation and Algorithmic Solution
By introducing a variable $\rho$ (interpreted as the worst-case PCR), Proposition 1 yields an equivalent convex reformulation:

$$\max_{\pi \in \Pi,\; \rho}\; \rho \quad \text{s.t.} \quad \mathbb{E}_{\mathbb{Q}}[c(\pi(z), \xi)] \;\le\; \rho\, \mathbb{E}_{\mathbb{Q}}[c(x^*(\xi), \xi)] + (1 - \rho)\, \mathbb{E}_{\mathbb{Q}}[c(\bar{x}, \xi)] \quad \forall\, \mathbb{Q} \in \mathcal{D}.$$

For convex $\mathcal{D}$ (e.g., polyhedral or CVaR-type ambiguity sets) and fixed $\rho$, the resulting feasibility problem is a convex program.
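The rearrangement behind this constraint, $\mathrm{PCR}(\pi) \ge \rho$ if and only if the policy's expected cost is below the $\rho$-weighted mix of clairvoyant and static costs (valid whenever the static cost strictly exceeds the clairvoyant cost), can be sanity-checked numerically; the expected-cost values below are purely illustrative.

```python
# Illustrative expected costs: contextual policy, static reference, clairvoyant.
c_pi, c_bar, c_star = 3.0, 5.0, 1.0
pcr = (c_bar - c_pi) / (c_bar - c_star)  # = 0.5

# PCR(pi) >= rho  <=>  c_pi <= rho * c_star + (1 - rho) * c_bar
for rho in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert (c_pi <= rho * c_star + (1 - rho) * c_bar) == (pcr >= rho)
```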
For a popular choice of $\mathcal{D}$, the “nested CVaR” ambiguity set, the robust objective decomposes for each $z$-scenario into linear programs parameterized by $\rho$, and overall feasibility is certified by a convex, non-decreasing function $f(\rho)$:
- For a candidate $\rho$, one LP is solved per scenario and the results are aggregated into $f(\rho)$.
- Bisection is used to find the maximal $\rho$ such that $f(\rho) \le 0$.
- The algorithm requires $O(\log(1/\epsilon))$ bisection iterations to reach accuracy $\epsilon$, each involving one LP solve per scenario; the procedure is polynomial-time for polyhedral feasible sets and linear costs.
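The mechanics of bisection over $\rho$ with a per-scenario oracle can be sketched in a deliberately simplified setting: a singleton ambiguity set and a finite decision set, so the inner subproblem is solved by enumeration rather than the LPs used in the paper. All data below is an illustrative toy instance.

```python
# scenario s -> (p_s, outcomes [(p_xi, {decision: cost}), ...])
C = {
    0: (0.5, [(0.5, {"a": 4.0, "b": 1.0}), (0.5, {"a": 2.0, "b": 3.0})]),
    1: (0.5, [(0.5, {"a": 1.0, "b": 5.0}), (0.5, {"a": 3.0, "b": 2.0})]),
}

# Benchmark expected costs: clairvoyant (best decision per outcome), static
# (one decision for all scenarios), best contextual (best decision per scenario).
clair = sum(p_s * sum(p_x * min(c.values()) for p_x, c in outs)
            for p_s, outs in C.values())
static = min(sum(p_s * sum(p_x * c[d] for p_x, c in outs)
                 for p_s, outs in C.values()) for d in ("a", "b"))
contextual = sum(p_s * min(sum(p_x * c[d] for p_x, c in outs) for d in ("a", "b"))
                 for p_s, outs in C.values())

def f(rho):
    """Certificate: f(rho) <= 0 iff some policy attains PCR >= rho."""
    return contextual - rho * clair - (1.0 - rho) * static

# Bisection for the maximal rho with f(rho) <= 0.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) <= 0.0 else (lo, mid)
# lo now approximates the best achievable PCR on this instance.
```

With a nested-CVaR ambiguity set, each `min(...)` over decisions would be replaced by an LP solve, but the outer bisection logic is unchanged.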
5. Illustrative Application: Contextual Shortest Path
As a case study, the contextual shortest path problem involves a network with nodes and arcs whose arc costs $\xi$ are uncertain. Decisions $x$ route a unit of flow from origin to destination to minimize $\xi^\top x$. Side information $z$ is correlated with $\xi$ via a multivariate normal model. A random-forest model trained on historical samples generates empirical conditional distributions $\hat{\mathbb{P}}_{\xi \mid z}$, yielding tree-leaf–weighted scenarios for any new observation $z$.
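Because the objective $\xi^\top x$ is linear, the CSO step for a given $z$ reduces to a deterministic shortest path under the leaf-weighted expected arc costs. A minimal sketch on a hypothetical 4-node network (all node names, weights, and costs are illustrative):

```python
import heapq

def dijkstra(adj, src, dst):
    """adj: {node: [(neighbor, cost), ...]}; returns min cost from src to dst."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Two leaf-weighted cost scenarios for the arcs of a small network.
arcs = [("o", "a"), ("o", "b"), ("a", "d"), ("b", "d")]
scenarios = [  # (leaf weight, cost per arc)
    (0.7, [1.0, 4.0, 1.0, 1.0]),
    (0.3, [6.0, 1.0, 1.0, 1.0]),
]
exp_cost = [sum(w * c[i] for w, c in scenarios) for i in range(len(arcs))]

adj = {}
for (u, v), c in zip(arcs, exp_cost):
    adj.setdefault(u, []).append((v, c))

best = dijkstra(adj, "o", "d")  # CSO route cost under expected arc costs
```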
Four methods are compared, each with its robustness parameter validated on held-out data:
- CSO (classical contextual sample average approximation)
- DRCSO (nested-CVaR robustification of the expected cost)
- DRCRO (nested-CVaR robustification of the “ex-post regret”)
- DR–PCR (distributionally robust maximization of the worst-case PCR)
A distribution shift test constructs out-of-sample test sets with an increasing perturbation $\Delta$ in the mean of $z$. The metric evaluated is the out-of-sample PCR. The results (cf. Figure 1) show that for small $\Delta$ all methods yield similar PCR, but as $\Delta$ grows, the PCR of CSO, DRCSO, and DRCRO quickly falls below zero, while DR–PCR maintains a nonnegative PCR for shifts up to roughly 60%, indicating notable robustness to distribution shift.
| Method | PCR for small $\Delta$ | PCR as $\Delta \to 60\%$ |
|---|---|---|
| CSO | Comparable to others | Negative |
| DRCSO | Comparable to others | Negative |
| DRCRO | Comparable to others | Negative |
| DR–PCR | Comparable to others | Nonnegative |
6. Theoretical and Practical Insights
The worst-case PCR objective serves as a regularizer, biasing contextual policies toward the static reference when the side information is unreliable and penalizing over-reaction under distribution shift. The trade-off between conservatism and robustness is controlled by the choice of ambiguity set (e.g., nested CVaR, Wasserstein, moment-based). Calibration of the ambiguity set's size parameter should be performed on a hold-out set mimicking anticipated distribution shifts.
The bisection-plus-LP-oracle framework generalizes to any setting where the fixed-$\rho$ subproblem is a tractable convex program. Restricting the policy class $\Pi$, for example to affine policies, reduces the dimensionality of the underlying optimization. The PCR is a comparable, interpretable metric taking values in $(-\infty, 1]$; a realized worst-case value $\rho$ certifies that at least a $\rho$ fraction of the maximal gain over the static policy is achieved even under the worst-case distributional stress within the ambiguity set.
7. Significance and Interpretations
The coefficient of prescriptiveness provides a rigorous, application-invariant measure for quantifying the utility of contextualization in decision-making under uncertainty. Its robust formulation directly addresses the challenges of estimation error and distributional shift, which are central to the deployment of data-driven optimization policies. The ability to guarantee a quantifiable proportion of clairvoyance-improvable cost under worst-case conditions supports both theoretical analysis and practical decision support (Poursoltani et al., 2023).