
Scale-adaptive Multi-task Power Flow Analysis

Updated 11 January 2026
  • The paper introduces SaMPFA, which decouples voltage and branch flow predictions to overcome distributional shifts and slack-bus indeterminacy.
  • It employs Local Topology Slicing to generate diverse subgraph samples, ensuring robust model generalization across varying grid scales.
  • Physics-informed loss functions and multi-task graph learning enhance prediction accuracy, significantly reducing errors in branch flow and angle recovery.

Scale-adaptive Multi-task Power Flow Analysis (SaMPFA) is a framework developed to address the distributional shift and topological adaptability challenges in deep-learning-based power flow analysis, specifically for electrical grids with variable topological scales and changing bus/branch configurations. SaMPFA introduces Local Topology Slicing (LTS) for scale-diverse sampling, and employs Reference-free Multi-task Graph Learning (RMGL) to output bus and branch states directly, circumventing error-amplification from phase angle recovery and slack-bus indeterminacy. The framework is physics-informed, incorporating domain constraints into the loss, and demonstrates consistently superior generalization under both known and unseen network conditions (Li et al., 4 Jan 2026).

1. Problem Formulation and Motivating Challenges

Deep-learning-based power flow analysis (DL-PFA) aims to bypass iterative numerical solvers by learning mappings from nodal injections (bus active/reactive power, generator voltage setpoints) directly to bus voltages and branch flows. However, real-world power networks pose two principal challenges:

  • Distributional Shift: Graph neural networks (GNNs) experience significant drops in predictive accuracy when the scale or topology changes, due to shifts in graph statistics such as average degree and algebraic connectivity. This undermines generalization to larger/smaller or structurally altered grids.
  • Slack-bus Indeterminacy and Error Amplification: The phase angle reference (slack bus) is arbitrarily chosen; shifting its index uniformly alters all phase angles, rendering direct prediction of angles across scales infeasible. Additionally, branch power computed from predicted voltages is highly error-sensitive, especially for low-impedance elements. Even minor voltage discrepancies are amplified by factors of 10³–10⁴, inducing substantial errors in branch flow estimates.
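
To gauge this amplification, consider a back-of-envelope example with illustrative numbers (not taken from the paper): for a near-lossless branch, $P_{ij} \approx (V_i V_j / x_{ij})\sin\theta_{ij}$, so

$$
\frac{\partial P_{ij}}{\partial \theta_{ij}} \approx \frac{V_i V_j}{x_{ij}}\cos\theta_{ij} \approx 10^{3}~\text{pu/rad} \qquad (V_i \approx V_j \approx 1~\text{pu},\; x_{ij} = 10^{-3}~\text{pu}),
$$

meaning an angle error of only $10^{-3}$ rad already translates into roughly 1 pu of branch-flow error.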

SaMPFA mitigates these issues through a dual approach: a reference-free, multi-task predictor of both bus voltages (magnitudes) and branch flows, and a data augmentation strategy (LTS) designed for cross-scale feature generalization.

2. Local Topology Slicing (LTS)

LTS is a data generation and augmentation technique that systematically extracts subgraphs of varying scale from the complete power network, supporting robust cross-scale model training. Given a grid represented as $\mathcal{G} = (\mathcal{B}, \mathcal{E})$, LTS proceeds as follows:

  • Subgraph Extraction: Randomly select a seed bus, typically a generator. Grow the subgraph via breadth-first search (BFS) until the desired number of buses $N_{sub}$ is reached. Only include internal edges.
  • Boundary Treatment: For edges crossing outside the current subgraph, calculate the original net power transfer and replace it with equivalent $P, Q$ loads at the boundary nodes, ensuring local power balance is preserved.
  • Diversity Induction: Random perturbations to bus powers and simulated branch outages are injected to diversify operational conditions.

The procedure expands the effective distribution of training samples, directly exposing the learning architecture to a breadth of subnetwork configurations encountered in scaling studies or contingency analysis.
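
A minimal sketch of the slicing step, assuming a networkx grid whose nodes carry net injections 'P'/'Q' and a hypothetical helper flow_fn(u, v) returning the base-case power leaving bus u toward neighbor v (the paper's exact data structures are not specified here):

```python
import random
from collections import deque

import networkx as nx

def lts_sample(G: nx.Graph, n_sub: int, flow_fn, seed_bus=None) -> nx.Graph:
    """Slice an n_sub-bus subgraph out of grid G by BFS from a seed bus."""
    seed = seed_bus if seed_bus is not None else random.choice(list(G.nodes))
    visited, queue = {seed}, deque([seed])
    while queue and len(visited) < n_sub:            # BFS growth
        u = queue.popleft()
        for v in G.neighbors(u):
            if v not in visited and len(visited) < n_sub:
                visited.add(v)
                queue.append(v)

    sub = G.subgraph(visited).copy()                 # keeps internal edges only

    # Boundary treatment: each cut edge becomes an equivalent P/Q load at
    # its internal endpoint, preserving local power balance.
    for u in list(sub.nodes):
        for v in G.neighbors(u):
            if v not in visited:
                p, q = flow_fn(u, v)
                sub.nodes[u]["P"] -= p               # exported power -> added load
                sub.nodes[u]["Q"] -= q

    # Diversity induction (random load perturbations, simulated branch
    # outages) would be applied to `sub` at this point.
    return sub
```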

3. Reference-free Multi-task Graph Learning (RMGL)

RMGL is a neural architecture that produces joint predictions of bus states and branch flows for each sampled graph, eschewing explicit phase angle output. Its structure comprises:

3.1 Input Encoding

  • Bus Features: Each node receives a feature vector with elements including $(P_i, Q_i, V_i, Q_i^{min}, Q_i^{max}, g_{m,ii})$.
  • One-hot Bus Types: Encodes the node as PQ, PV, slack, or virtual (padding) type.
  • Weighted Adjacency: Encodes line admittance ($y_{L,ij}$) and tap ratios.

All inputs are projected to a common embedding dimension via linear transformations.
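
A concrete illustration of the encoding (dimensions and placeholder tensors are assumptions):

```python
import torch
import torch.nn as nn

N_MAX, D = 64, 128            # padded bus count; shared embedding width
BUS_FEATS = 6                 # (P_i, Q_i, V_i, Q_i^min, Q_i^max, g_ii)
N_TYPES = 4                   # PQ, PV, slack, virtual (padding)

bus_proj = nn.Linear(BUS_FEATS + N_TYPES, D)   # bus features + one-hot type
edge_proj = nn.Linear(3, D)                    # branch attrs (g, b, tap), analogous

x_feat = torch.randn(N_MAX, BUS_FEATS)           # placeholder bus features
bus_type = torch.randint(0, N_TYPES, (N_MAX,))   # placeholder type indices
x_type = nn.functional.one_hot(bus_type, N_TYPES).float()
X0 = bus_proj(torch.cat([x_feat, x_type], dim=-1))   # (N_MAX, D) embeddings
```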

3.2 Masked Graph Transformer (MGT) Layers

  • Self-attention: Multi-head attention operates on bus embeddings, masking out padded (virtual) buses.
  • Graph Attention: GAT layers process the embedding in conjunction with physical grid connectivity.
  • Residuals and Feed-forward: Outputs are updated via residual addition and feed-forward networks, repeated for $M$ blocks, yielding latent embeddings $X_D$ (one such block is sketched below).
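
A PyTorch sketch of one MGT block; as a simplification, the GAT stage is emulated here by attention masked to the grid adjacency, and normalization placement is an assumption:

```python
import torch
import torch.nn as nn

class MGTBlock(nn.Module):
    """One masked-self-attention + graph-attention + feed-forward block.
    Stacking M such blocks yields the latent embeddings X_D."""

    def __init__(self, d=128, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.graph_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
        self.n1, self.n2, self.n3 = (nn.LayerNorm(d) for _ in range(3))

    def forward(self, x, pad_mask, not_adj):
        # x: (B, N, d) bus embeddings; pad_mask: (B, N), True at virtual buses;
        # not_adj: (N, N) bool, True where buses are NOT connected (keep the
        # diagonal False so every bus can still attend to itself).
        h, _ = self.self_attn(x, x, x, key_padding_mask=pad_mask)
        x = self.n1(x + h)                   # residual, padding-masked attention
        h, _ = self.graph_attn(x, x, x, attn_mask=not_adj)
        x = self.n2(x + h)                   # residual, adjacency-masked attention
        return self.n3(x + self.ffn(x))      # feed-forward + residual
```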

3.3 Multi-task Output Heads

  • Bus Output: A fully-connected layer maps $X_D$ to $\{\widetilde{P}_i, \widetilde{Q}_i, \widetilde{V}_i\}$.
  • Branch Output: For each directed pair $(i \to j)$, features from both endpoints and their difference, concatenated with branch parameters, are mapped to $\{\widetilde{P}_{L,ij}, \widetilde{Q}_{L,ij}\}$ (see the sketch below).
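
A sketch of the two heads (layer sizes and names are illustrative, not the paper's exact shapes):

```python
import torch
import torch.nn as nn

d, d_br = 128, 3                        # latent width; branch-parameter count
bus_head = nn.Linear(d, 3)              # X_D -> (P~_i, Q~_i, V~_i) per bus
branch_head = nn.Sequential(            # endpoint features -> (P~_L, Q~_L)
    nn.Linear(3 * d + d_br, d), nn.ReLU(), nn.Linear(d, 2))

def predict(X_D, src, dst, branch_params):
    # X_D: (N, d) latent bus embeddings; src, dst: (E,) endpoint indices;
    # branch_params: (E, d_br) per-branch parameters (e.g. g, b, tap ratio).
    bus_out = bus_head(X_D)                                   # (N, 3)
    xi, xj = X_D[src], X_D[dst]
    feat = torch.cat([xi, xj, xi - xj, branch_params], -1)    # (E, 3d + d_br)
    return bus_out, branch_head(feat)                         # (E, 2)
```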

3.4 Angle Recovery

Phase angles are not directly predicted. Instead, given predicted voltages and branch flows, the angle difference across a branch is computed using

$$
\theta_{ij} = \arctan\frac{b_{L,ij} P_{L,ij} + g_{L,ij} Q_{L,ij}}{g_{L,ij} P_{L,ij} - b_{L,ij} Q_{L,ij} - V_i^2 \left(g_{L,ij}^2 + b_{L,ij}^2\right)}
$$

A BFS-based propagation algorithm (BFS-PAR) reconstructs all bus angles, relative to any slack reference.
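
A sketch of the recovery step, assuming series admittance $g + jb$ per branch and the convention $\theta_{ij} = \theta_i - \theta_j$ (the propagation mirrors the described BFS-PAR; implementation details are assumptions):

```python
import math
from collections import deque

def branch_angle(g, b, P, Q, V_i):
    """theta_ij from the predicted flow (P, Q) measured at bus i, voltage
    magnitude V_i, and series admittance g + jb, per the formula above."""
    num = b * P + g * Q
    den = g * P - b * Q - V_i**2 * (g**2 + b**2)
    return math.atan(num / den)

def propagate_angles(n_bus, theta_ij, slack=0):
    """BFS propagation of branch angle differences to bus angles.

    theta_ij maps (i, j) -> theta_i - theta_j. The slack angle is pinned to
    zero; picking another reference merely shifts all angles uniformly.
    """
    adj = {k: [] for k in range(n_bus)}
    for (i, j), t in theta_ij.items():
        adj[i].append((j, -t))        # theta_j = theta_i - theta_ij
        adj[j].append((i, +t))        # theta_i = theta_j + theta_ij
    theta = [None] * n_bus
    theta[slack] = 0.0
    queue = deque([slack])
    while queue:
        u = queue.popleft()
        for v, dt in adj[u]:
            if theta[v] is None:      # first visit fixes the angle
                theta[v] = theta[u] + dt
                queue.append(v)
    return theta
```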

4. Physics-informed Loss Function

The total training objective is a weighted sum $\mathcal{L} = \varepsilon_{data} \mathcal{L}_{data} + \varepsilon_{phy} \mathcal{L}_{phy}$, with:

  • Data-driven Loss $\mathcal{L}_{data}$: Penalizes squared deviations between predicted and reference bus states and branch flows:

$$
\mathcal{L}_{data} = \frac{\varepsilon_N}{B}\sum_{b=1}^{B} \left\|\widetilde{X}_{out}^{(b)} - X_{out}^{(b)}\right\|^2 + \frac{\varepsilon_E}{B}\sum_{b=1}^{B} \left\|\widetilde{H}_{out}^{(b)} - H_{out}^{(b)}\right\|^2
$$

  • Physics-driven Loss $\mathcal{L}_{phy}$: Enforces physical consistency through:
    • KCL constraint $\mathcal{L}_{KCL}$: Penalizes bus-level power imbalance.
    • Branch loss consistency $\mathcal{L}_{loss}$: Penalizes inconsistency between predicted and physically computed line losses.
    • Angle difference constraint $\mathcal{L}_{angle}$: Penalizes deviation in reconstructed angle differences.

This composite loss ensures predictions not only match reference data but also satisfy core power system physical laws.
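
A minimal sketch of this composite loss (shapes, weights, and the KCL helper are illustrative, not the paper's exact residual definitions):

```python
import torch

def kcl_residual(P_inj, P_branch, src, n_bus):
    """Active-power imbalance per bus: predicted injection minus the summed
    predicted flows leaving that bus (reactive power handled analogously)."""
    outflow = torch.zeros(n_bus).index_add_(0, src, P_branch)
    return P_inj - outflow

def sampfa_loss(pred_bus, true_bus, pred_br, true_br,
                kcl_res, loss_res, angle_res,
                eps_N=1.0, eps_E=1.0, eps_data=1.0, eps_phy=0.1):
    # Data-driven term: squared deviation of bus states and branch flows.
    l_data = (eps_N * ((pred_bus - true_bus) ** 2).mean()
              + eps_E * ((pred_br - true_br) ** 2).mean())
    # Physics-driven term: KCL, branch-loss, and angle-difference residuals.
    l_phy = ((kcl_res ** 2).mean() + (loss_res ** 2).mean()
             + (angle_res ** 2).mean())
    return eps_data * l_data + eps_phy * l_phy
```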

5. Experimental Evaluation and Benchmarking

Two principal cases are investigated:

  • IEEE 39-bus system: 8,760 base scenarios with varying topology (e.g., generator outages, added buses) are augmented to 1 million LTS samples for training; multiple test sets (including those with unseen scale changes) are used for validation.
  • Provincial grid (300–690 buses): Over 500,000 training and 100,000 test samples generated via LTS; generalization to cases with never-before-seen buses is explicitly tested.

Evaluation Metrics:

  • $E_V$: Maximum voltage magnitude error
  • $E_\theta$: Maximum angle error (reconstructed)
  • $E_{SL}$: Maximum branch power error
  • $E_{\Delta S}$: Maximum bus power imbalance
  • Accuracy (Acc): Fraction of samples meeting the thresholds $E_V \le 0.01$, $E_{SL} \le 10$ MVA, and $E_{\Delta S} \le 10$ MVA (a sketch of this criterion follows below)
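
A minimal sketch of the Acc computation (array names are illustrative; each array holds one per-sample maximum error):

```python
import numpy as np

def accuracy(E_V, E_SL, E_dS, v_tol=0.01, s_tol=10.0):
    """Fraction of test samples whose max errors meet all three thresholds
    from the Acc definition above (E_V <= 0.01, E_SL/E_dS <= 10 MVA)."""
    ok = ((np.asarray(E_V) <= v_tol)
          & (np.asarray(E_SL) <= s_tol)
          & (np.asarray(E_dS) <= s_tol))
    return float(ok.mean())
```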

Results:

  • RMGL achieves Acc = 82.4% on the Gen-U (unseen topology) test set of the IEEE 39-bus case, outperforming baselines (GLP 63.98%, GLR 66.87%, MGL 66.18%).
  • For the real grid, RMGL test accuracy is 99.28% (vs. GLP 50.72%, GLR 47.09%, MGL 96.07%).
  • Branch error is reduced by up to 36.8% on the real grid.
  • Ablation: Omitting LTS leads to up to 5× higher errors in generalization; adding physics losses improves branch-loss and angle-difference errors by roughly 60% and 50%, respectively.
  • Using RMGL predictions to initialize Newton–Raphson solvers improves convergence rates (54.2% → 94.3%) and reduces iterations by roughly 80%.

6. Significance, Limitations, and Future Directions

SaMPFA establishes a scalable, reference-agnostic approach for power flow deep learning, achieving physically consistent predictions under variable-scale topologies. Decoupling bus-voltage and branch-flow prediction removes error amplification linked to angle sensitivity and slack-bus alignment. LTS enables the model to learn scale-invariant power flow patterns and enhances out-of-distribution robustness. Physics-guided loss terms further align network outputs with physical system laws.

Limitations include the requirement for a predefined $N_{max}$ (with padding for smaller instances), and the current inability to handle truly arbitrary system sizes or dynamics without retraining or further adaptation. Future extensions are anticipated towards multi-system (heterogeneous-size) learning, time-series (dynamic) flows, and rapid domain adaptation to unseen topologies with minimal fine-tuning (Li et al., 4 Jan 2026).
