
Active Sampling Framework for ACOPF

Updated 16 November 2025
  • The paper demonstrates that active sampling frameworks dramatically improve the diversity and representativeness of ACOPF feasible set explorations.
  • It employs convex relaxations, bucketized learning, and gradient-based acquisition functions to target critical operating conditions efficiently.
  • Empirical results reveal order-of-magnitude improvements in constraint-set discovery and scenario design compared to conventional sampling methods.

An active sampling framework for AC Optimal Power Flow (ACOPF) refers to data generation and selection strategies that guide sampling toward the most informative, rare, or structurally diverse ACOPF problem instances, dramatically improving the efficiency and generalizability of models trained on such data. These frameworks are essential for both data-driven surrogate modeling (“optimization proxies”) and stochastic OPF under uncertainty, where indiscriminate or uniform sampling fails to adequately explore the complex ACOPF feasible set or operational regime transitions. Recent advances systematically integrate convex relaxations, active constraint set analysis, bucketization schemes, gradient-based acquisition functions, and sparse regression–based adversarial design, resulting in frameworks that efficiently generate datasets or scenario sets with superior representativeness and downstream model performance.

1. Foundational Problem Structure and Motivation

The AC Optimal Power Flow (OPF) problem seeks dispatch variables (complex bus voltages $\{V_i\}$, generator setpoints $\{S^g_i = p^g_i + j q^g_i\}$) that minimize operational cost subject to nonlinear AC power flow equations and engineering/security constraints:

$$\min_{S^g, V, S^f, S^t} \sum_{i\in\mathcal{G}} c_i p^g_i$$

subject to:

(a) Kirchhoff's laws (power balance):

$$\sum_{j \in \mathcal{G}_i} S^g_j - \sum_{j \in \mathcal{L}_i} S^d_j = \sum_{e \in \mathcal{E}_i^f} S^f_e + \sum_{e \in \mathcal{E}_i^t} S^t_e, \quad \forall i \in \mathcal{N}$$

(b) Ohm's law, branch flow limits, angle limits, generator dispatch limits, and voltage bounds.

The feasible set is nonconvex, high-dimensional, and sharply partitioned by transitions in active constraint sets.
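As a concrete check of the nodal balance constraint (a), the sketch below evaluates the per-bus complex power mismatch on a hypothetical two-bus example; all numerical values and sign conventions are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical 2-bus example: complex generation, demand, and branch flows.
# Bus 0 hosts the only generator; one branch connects bus 0 to bus 1.
S_gen = np.array([1.0 + 0.3j, 0.0 + 0.0j])   # S^g per bus (p.u.)
S_dem = np.array([0.2 + 0.05j, 0.7 + 0.2j])  # S^d per bus (p.u.)
S_from = np.array([0.8 + 0.25j])             # S^f at the "from" end (bus 0)
S_to = np.array([-0.7 - 0.2j])               # S^t at the "to" end (bus 1)

# Kirchhoff balance residual per bus: sum S^g - sum S^d - incident branch flows.
residual = np.array([
    S_gen[0] - S_dem[0] - S_from[0],
    S_gen[1] - S_dem[1] - S_to[0],
])
print(np.abs(residual))  # near-zero residuals mean constraint (a) holds
```

In a full ACOPF the flows $S^f, S^t$ are themselves nonlinear functions of the bus voltages via Ohm's law, which is where the nonconvexity enters.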

Operational realities such as uncertainty, changing system topologies, or market regime shifts require ML-based optimization proxies and robust algorithms to generalize over this complexity, necessitating representative datasets. Active sampling frameworks address the inefficiency of uniform, random, or naive “box” sampling by preferentially allocating computation toward difficult, rare, or structurally novel regions in the input space, and by using optimization-specific feedback or relaxations to iteratively refine the sampling domain (Joswig-Jones et al., 2021, Li et al., 9 Nov 2025, Mezghani et al., 2019, Klamkin et al., 2022).

2. Convex-Superset and Infeasibility Aware Sampling

A class of frameworks typified by OPF-Learn (Joswig-Jones et al., 2021) begins with the construction of a convex superset $S_0$ that rigorously contains the unknown ACOPF feasible region. For a system with $|L|$ load buses and real/reactive loads $x_\ell \in \mathbb{R}^{2|L|}$, $S_0$ is defined by linear constraints derived via convex relaxation (e.g., Quadratic Convex (QC), Second-Order Cone (SOC)):

$$S_0 = \{x_\ell \mid A_0 x_\ell \leq b_0\}$$

where the bounds include per-bus and global generation limits and power factor constraints. An initial interior point (e.g., the Chebyshev center) seeds Hit-and-Run Monte Carlo uniform sampling from $S_0$.

At each sampled load point $d_k$:

  • If ACOPF($d_k$) is feasible, label and record the instance.
  • If infeasible, compute the nearest feasible point $x^*_\ell$ in the relaxed OPF feasible set via

$$x^*_\ell = \arg\min_{x, V, S_g} \|x - \hat{x}_\ell\|_2 \quad \text{s.t.} \quad (V, S_g) \in \widetilde{\Omega}(x)$$

The separating hyperplane $(\hat{x}_\ell - x^*_\ell)^T x_\ell \leq (\hat{x}_\ell - x^*_\ell)^T x^*_\ell$, which the infeasible sample $\hat{x}_\ell$ violates (by the projection inequality for the convex set $\widetilde{\Omega}$), is appended to the constraints, shrinking $S_k$ to $S_{k+1}$.

This process continues iteratively, actively "carving" the convex superset: each infeasible realization produces a certificate that discards an unviable region of the domain. The approach yields near-uniform exploration of the feasible region while avoiding wasted samples in infeasible subregions. Table 1 in (Joswig-Jones et al., 2021) demonstrates orders-of-magnitude improvements in the diversity of discovered active constraint sets compared to traditional ±20% box sampling.
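The carving loop above can be sketched as follows, with the ACOPF feasibility check and the relaxation projection treated as black-box oracles. In the demo they are stubbed by a unit disc standing in for the ACOPF feasible set inside a box superset; everything here is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_and_run(x, A, b, n_steps=10):
    """Hit-and-Run random walk inside the polytope {x : A x <= b}."""
    for _ in range(n_steps):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        Ad, slack = A @ d, b - A @ x
        # Step-size interval [t_min, t_max] keeping x + t d inside the polytope.
        t_max = np.min(slack[Ad > 0] / Ad[Ad > 0]) if np.any(Ad > 0) else 1.0
        t_min = np.max(slack[Ad < 0] / Ad[Ad < 0]) if np.any(Ad < 0) else -1.0
        x = x + rng.uniform(t_min, t_max) * d
    return x

def carve(x0, A, b, solve_acopf, project_relaxation, n_samples=20, max_iter=5000):
    """OPF-Learn-style loop: sample, test feasibility, add separating cuts."""
    feasible, x = [], x0
    for _ in range(max_iter):
        if len(feasible) >= n_samples:
            break
        x_hat = hit_and_run(x, A, b)
        if solve_acopf(x_hat):                    # feasibility oracle (stub)
            feasible.append(x_hat)
            x = x_hat
        else:
            x_star = project_relaxation(x_hat)    # nearest relaxation-feasible load
            g = x_hat - x_star
            A = np.vstack([A, g])                 # cut g^T x <= g^T x*, excludes x_hat
            b = np.append(b, g @ x_star)
    return feasible, A, b

# Toy demo: "ACOPF-feasible" region is the unit disc inside a [-2, 2]^2 box.
A0 = np.vstack([np.eye(2), -np.eye(2)])
b0 = np.full(4, 2.0)
samples, A_fin, b_fin = carve(
    x0=np.zeros(2), A=A0, b=b0,
    solve_acopf=lambda x: np.linalg.norm(x) <= 1.0,
    project_relaxation=lambda x: x / np.linalg.norm(x),
)
```

Every accepted sample lies in the disc, and each rejection leaves a permanent cut, so the polytope monotonically tightens around the feasible region.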

3. Bucketized and Constraint-Informed Active Learning

Alternative frameworks deploy clustering or "bucketization" of the input domain, assigning region-level priority based on informative acquisition functions (Klamkin et al., 2022, Li et al., 9 Nov 2025). In these methods, the input space is partitioned into $k$ buckets (e.g., via k-means++ on load vectors). Each bucket is scored with an acquisition function such as the Input-Loss-Gradient (IG):

$$s_b = \frac{1}{|b(\mathcal{D}_v)|} \sum_{(x_j, y_j) \in b(\mathcal{D}_v)} \|\nabla_x\,\ell(\hat{h}(x_j), y_j)\|_2$$

where $\mathcal{D}_v$ is a bucketized validation set; the score quantifies model sensitivity to small input perturbations, and hence local informativeness. The sampling budget is distributed in proportion to $s_b$, with candidate samples drawn from buckets with the largest IG scores and highest validation losses, targeting regions where the model is least robust.
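The scoring-and-allocation step can be sketched as below. The linear surrogate (so the input gradient has a closed form), the toy data, and the bucket assignments are all illustrative stand-ins, not details from the cited papers.

```python
import numpy as np

def bucket_ig_scores(X_val, Y_val, bucket_ids, grad_input_loss):
    """Per-bucket IG score s_b: mean ||grad_x loss|| over validation points in b."""
    scores = {}
    for b in np.unique(bucket_ids):
        mask = bucket_ids == b
        norms = [np.linalg.norm(grad_input_loss(x, y))
                 for x, y in zip(X_val[mask], Y_val[mask])]
        scores[int(b)] = float(np.mean(norms))
    return scores

def allocate_budget(scores, total_budget):
    """Distribute the labeling budget across buckets in proportion to s_b."""
    total = sum(scores.values())
    return {b: int(round(total_budget * s / total)) for b, s in scores.items()}

# Toy demo: surrogate h(x) = W x with squared loss, so grad_x = 2 W^T (W x - y).
W = np.array([[1.0, 0.5], [0.0, 2.0]])
grad_fn = lambda x, y: 2.0 * W.T @ (W @ x - y)
X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [3.5, 3.0]])
Y = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
buckets = np.array([0, 0, 1, 1])      # e.g. from k-means++ on load vectors
plan = allocate_budget(bucket_ig_scores(X, Y, buckets, grad_fn), total_budget=100)
```

In the demo, bucket 1 contains the points where the surrogate is most sensitive, so it receives the bulk of the labeling budget.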

Recent advances (Li et al., 9 Nov 2025) supplement this scheme with an active-set predictor network, which outputs Bernoulli logits for each inequality constraint, enabling selection of new labeled samples whose predicted active-constraint patterns increase the combinatorial coverage of the proxy dataset. This constraint-informed filtering mitigates over-sampling of redundant or structurally similar instances and accelerates proxy convergence, especially for rare active set transitions.
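A minimal sketch of that constraint-informed filtering step follows: given Bernoulli logits from a (here hypothetical) active-set predictor network, candidates are kept only if their thresholded active-constraint pattern is not yet represented in the labeled dataset.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def select_by_active_set_coverage(cand_logits, covered_patterns, budget):
    """Greedy filter (sketch): keep candidates whose predicted active-set
    pattern (thresholded Bernoulli logits, one per inequality constraint)
    adds a new combinatorial pattern to the dataset."""
    covered = {tuple(p) for p in covered_patterns}
    chosen = []
    for i, logits in enumerate(cand_logits):
        pattern = tuple((sigmoid(logits) > 0.5).astype(int))
        if pattern not in covered:
            chosen.append(i)
            covered.add(pattern)      # avoid duplicate patterns within the batch
        if len(chosen) == budget:
            break
    return chosen

# Toy demo: three candidates, three constraints; pattern (1, 0, 1) already covered.
cand = np.array([[2.0, -2.0, 0.1], [2.0, -2.0, 0.2], [-1.0, 3.0, -3.0]])
chosen = select_by_active_set_coverage(cand, covered_patterns=[[1, 0, 1]], budget=2)
# Only the third candidate contributes a new active-set pattern.
```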

4. Scenario Design and Violation-Driven Sparse Regression

In the context of stochastic ACOPF, data-driven scenario design leverages constraint-violation statistics to direct sampling (Mezghani et al., 2019). Rather than enforcing constraints over a massive set of i.i.d. scenarios (which is computationally prohibitive), the framework operates as follows:

  1. Iteratively solve the scenario-based ACOPF over a small evolving scenario set.
  2. Validate the solution over a large Monte Carlo pool; select scenarios with the largest constraint violations, maximal violation magnitude (MV), or number of violated constraints (NC).
  3. For each selected scenario and violated constraint, fit a sparse linear (Lasso) model

$$d_c = \arg\min_{d} \sum_{t \in S_c} (u^c_t - d^T \Delta x_t)^2 + \lambda \|d\|_1$$

The direction $d_c$ identifies "critical buses" responsible for the violations; new scenarios are created by perturbing the uncertainty along these sparse directions to their extremal bounds, adversarially approaching the feasibility boundary. The scenario set is then updated and the process iterates.
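Step 3 can be sketched with a small numpy-only coordinate-descent Lasso (an illustrative stand-in for an off-the-shelf solver; the scenario data below is synthetic, with two planted "critical buses").

```python
import numpy as np

def lasso_direction(dX, u, lam=0.1, n_iter=200):
    """Fit d_c = argmin_d sum_t (u_t - d^T dX_t)^2 + lam * ||d||_1 via cyclic
    coordinate descent with soft-thresholding (numpy-only sketch)."""
    T, n = dX.shape
    d = np.zeros(n)
    col_sq = (dX ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(n):
            r = u - dX @ d + dX[:, j] * d[j]     # residual excluding feature j
            rho = dX[:, j] @ r
            d[j] = np.sign(rho) * max(abs(rho) - lam / 2.0, 0.0) / (col_sq[j] + 1e-12)
    return d

# Synthetic demo: 50 validation scenarios, 5 uncertain buses, 2 truly critical.
rng = np.random.default_rng(0)
dX = rng.normal(size=(50, 5))                   # uncertainty deviations Delta x_t
d_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])   # planted sparse direction
u = dX @ d_true                                 # violation magnitudes u_t (noise-free)
d_c = lasso_direction(dX, u)
critical = np.where(np.abs(d_c) > 0.1)[0]       # recovers buses 0 and 3
```

New adversarial scenarios would then push the uncertain loads at the buses in `critical` toward their extremal bounds along the sign of `d_c`.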

This approach delivers order-of-magnitude savings in the number of required scenarios to achieve probabilistic constraint satisfaction, as statistically validated by Hoeffding bounds and empirical results.
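The Hoeffding validation argument reduces to a one-line sample-size rule: with $N \geq \ln(1/\delta)/(2\varepsilon^2)$ i.i.d. Monte Carlo scenarios, the empirical violation rate is within $\varepsilon$ of the true rate with probability at least $1-\delta$. A minimal sketch (the numbers are generic, not from the paper):

```python
import math

def hoeffding_sample_size(eps, delta):
    """Monte Carlo scenarios needed so the empirical violation rate is within
    eps of the true rate with probability at least 1 - delta (Hoeffding bound)."""
    return math.ceil(math.log(1.0 / delta) / (2.0 * eps ** 2))

# e.g. certify a violation-rate estimate to within 1% at 99% confidence
N = hoeffding_sample_size(eps=0.01, delta=0.01)
```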

5. Empirical Results and Effectiveness

Empirical validation demonstrates the statistical and structural superiority of active sampling frameworks over uniform or random methods, for both proxy learning and stochastic OPF.

  • OPF-Learn (Joswig-Jones et al., 2021): In 10,000 feasible samples on PGLib-OPF cases, OPF-Learn discovers nearly all possible unique active-constraint sets (e.g., 9931/10,000 for case30), compared to only a handful under box sampling (7 for case30). ML models trained on OPF-Learn data generalize over broad operating regimes, whereas those trained on typical box-sampled data exhibit orders-of-magnitude higher test error when evaluated over diverse conditions.
  • Bucketized and constraint-informed active learning (Klamkin et al., 2022, Li et al., 9 Nov 2025): On large PGLib test cases, Bucketized Active Sampling (BAS-IG) and constraint-informed extensions reach target proxy accuracy (mean and tail $L_1$ loss) with up to $2\times$ fewer ACOPF solves, efficiently directing sampling to mid-range and regime-changing load patterns while avoiding infeasible/high-variance regions. Inclusion of an active-set predictor accelerates convergence to robust tail performance with a smaller query budget.
  • Scenario design for stochastic OPF (Mezghani et al., 2019): Data-driven scenario design achieves $<1\%$ violation rates with 5–31 iteratively constructed scenarios, compared to 50–200+ for random scenario-based OPF, on four benchmarks, with at most a $2.5\%$ nominal cost overhead and dramatically reduced runtime (e.g., ten minutes for a 1354-bus test case vs. 1+ hour for naive random sampling).

6. Implementation Considerations and Computational Trade-Offs

These frameworks are designed for computational efficiency in both data and model generation:

  • Convex-projection-based frameworks require repeated ACOPF solves and convex relaxations; bottlenecks are mitigated by parallel Hit-and-Run chains, warm-starts, and, where available, GPU-accelerated solvers.
  • Bucketization and gradient scoring incur negligible overhead compared to ACOPF solves, though bucketization quality depends on the representativeness of the bucket-validation set. The choice of bucket count $k$ balances exploration against per-bucket diversity ($k \approx 30$ reported as effective).
  • Constraint-prediction accelerators (auxiliary DNNs) obviate the need to repeatedly solve for active sets at every candidate sample.
  • Parallelization is straightforward for Monte-Carlo validation, per-constraint sparse regression fits, and data labeling ACOPF solves.
  • Scenario design allows the use of historical data and non-parametric uncertainty distributions while preserving domain realism and scenario diversity.

7. Extensions, Limitations, and Generalizations

The major frameworks generalize readily to a range of practical OPF and data-driven modeling regimes. Extensions include:

  • Replacing QC with semidefinite or higher-moment relaxations to obtain tighter infeasibility certificates.
  • Adapting variable bounds to handle distributed renewables, curtailment, or negative net load situations.
  • Security-constrained OPF: Including post-contingency constraints in the convex projection or scenario pool.
  • Adaptive and nonuniform sampling: Steering sampling toward under-explored or highly complex subregions, e.g., via adaptive direction choices in Hit-and-Run.
  • Bucketization by domain knowledge (e.g., regional geography, topology).
  • Dynamic re-partitioning of buckets (e.g., k-medoids) as the distribution of high-loss samples evolves.
  • Multiscenario and online extensions for streaming or real-time adaptation.

Identified limitations include the need for representative validation sets in bucketized methods and potentially degraded performance if the initial convex superset is loose or unrepresentative. A plausible implication is that hybridizing multiple active sampling criteria (gradient, active-constraint, violation, adversarial direction) yields superior robustness when system topology, uncertainty, or operational rules are nonstationary or not well sampled by historical data.


The active sampling framework for ACOPF constitutes a rigorously validated suite of methodologies for efficient, informative, and structurally diverse data generation in both surrogate modeling and scenario-based OPF. These techniques enable reliable generalization, tight constraint satisfaction guarantees, and principled performance benchmarking for the next generation of power system optimization and ML surrogates (Joswig-Jones et al., 2021, Li et al., 9 Nov 2025, Mezghani et al., 2019, Klamkin et al., 2022).
