False Data Injection Attacks in CPS
- FDIA is a class of cyber attacks that covertly manipulate sensor measurements in cyber-physical systems by exploiting structural properties to evade standard fault detectors.
- The research spans critical applications from power grids to mobile robotics, leveraging methods like linear column-space attacks, affine transformations, and sophisticated state estimation.
- Advanced detection and mitigation strategies integrate physics-informed models, hybrid estimators, and deep learning to enhance resilience against evolving FDIA methodologies.
False Data Injection Attacks (FDIAs) are a class of cyber attacks targeting cyber-physical systems (CPSs) in which adversaries surreptitiously alter sensor measurements or control commands to manipulate system operation while remaining undetected by standard residual-based fault detectors. FDIA techniques are notable for their capacity to engineer arbitrary biases in state estimates, trajectory tracking, contingency analysis, and system protection schemes—often by exploiting structural properties of the physical models (partial linearity, symmetry) and the availability of measurement or topology information. FDIA research spans critical applications in power systems, mobile robotics, distributed energy resources, and integrated electricity–gas infrastructures, with detection and mitigation strategies evolving toward physics-informed, data-driven, and hybrid frameworks.
1. Mathematical Foundations and Core Models
FDIAs commonly operate under static state estimation models, such as the linearized DC approximation in power grids: $z = Hx + e$, where $z$ is the measurement vector, $x$ is the system state, $H$ is the measurement Jacobian, and $e$ is zero-mean Gaussian noise. The adversary injects an attack vector $a$, forming corrupted measurements $z_a = z + a$. Standard bad-data detection (BDD) raises alarms if the residual $r = \|z - H\hat{x}\|$ exceeds a threshold $\tau$, with the estimate $\hat{x}$ derived from weighted least squares or other estimators.
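For concreteness, a minimal numerical sketch of this setup follows; the Jacobian, noise level, and alarm threshold are illustrative placeholders rather than values from any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative DC model z = H x + e (H, x_true, and sigma are hypothetical).
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0],
              [2.0, 1.0]])          # measurement Jacobian (4 meters, 2 states)
x_true = np.array([1.2, 0.8])       # true system state
sigma = 0.01                        # measurement noise standard deviation
z = H @ x_true + rng.normal(0.0, sigma, size=H.shape[0])

# Weighted least-squares state estimate (identical meter variances here).
W = np.eye(H.shape[0]) / sigma**2
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

# Residual-based bad-data detection: alarm if ||z - H x_hat|| exceeds a threshold.
r = np.linalg.norm(z - H @ x_hat)
tau = 3 * sigma * np.sqrt(H.shape[0])   # crude illustrative threshold
print(f"residual = {r:.4f}, alarm = {r > tau}")
```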
The hallmark of FDIA stealth lies in the construction $a = Hc$ for arbitrary $c$, yielding the biased estimate $\hat{x}_a = \hat{x} + c$. Under this condition, the estimation residual remains unchanged: $\|z_a - H\hat{x}_a\| = \|z - H\hat{x}\|$. For nonlinear dynamics (e.g., AC state estimation, mobile robot kinematics), a stealthy attack employs $a = h(\hat{x} + c) - h(\hat{x})$, with $h(\cdot)$ embodying the physical measurement mapping. In CPSs with dynamic state trajectories or non-holonomic mobility, attackers exploit partial linearity and symmetry in system Jacobians to engineer undetectable deviations, as in the affine transformation-based FDIAs on mobile robot tracking (Ueda et al., 2024).
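Continuing the same illustrative setup, the following sketch constructs a column-space attack $a = Hc$ and verifies numerically that the residual is unchanged while the estimate is biased by exactly $c$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same hypothetical DC setup as in the previous sketch.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
x_true = np.array([1.2, 0.8])
sigma = 0.01
z = H @ x_true + rng.normal(0.0, sigma, size=H.shape[0])

def wls_estimate(H, z):
    """WLS estimate; with identical meter variances this reduces to ordinary LS."""
    return np.linalg.solve(H.T @ H, H.T @ z)

def residual(H, z):
    return np.linalg.norm(z - H @ wls_estimate(H, z))

# Stealthy FDIA: pick an arbitrary bias c and inject a = H c.
c = np.array([0.5, -0.3])
a = H @ c
z_attacked = z + a

print("residual (clean)   :", residual(H, z))
print("residual (attacked):", residual(H, z_attacked))   # identical up to float error
print("estimate bias      :", wls_estimate(H, z_attacked) - wls_estimate(H, z))  # ~ c
```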
2. Attack Taxonomies and Structural Vulnerabilities
FDIA threats are prolific across CPS domains due to core vulnerabilities:
- Linearity and Column-Space Attacks: In power grids under the DC approximation, any attack vector of the form $a = Hc$ is perfectly stealthy to residual-based detection. The same logic extends to local information: attacks in well-understood network subregions can remain undetectable if boundary states are held fixed and the attack vector respects local measurement relations (Liu et al., 2023, Husnoo et al., 2021).
- Affine Transformations & Symmetry: Nonlinear plants with partial linearity and trigonometric symmetries, e.g., mobile robots, allow for static affine-map attacks, such as reflection and scaling, that perfectly match the controller's expectation, with residuals identically zero (Ueda et al., 2024). Scaling attacks compress or expand trajectories, while reflection attacks mirror paths about specific geometric axes; a toy sketch of such maps follows this list.
- Optimized FDIA in AC Networks: In AC state estimation, an attacker must satisfy the full nonlinear power-flow constraints, constructing $a = h(\hat{x} + c) - h(\hat{x})$ inside an attack zone, ensuring that the corrupted state appears consistent throughout the zone and on boundary buses (Iranpour et al., 2024).
- Knowledge Regimes: Attackers range from full-network knowledge (white-box), partial (gray-box), to subspace learning or black-box (with only data-driven identification). Recent results demonstrate the feasibility of FDIA without a priori line parameters or topology information, using PMU-based Ornstein–Uhlenbeck regression to reconstruct necessary susceptances from observational data (Du et al., 2021, Du et al., 2021).
- Multi-Temporal and Evolutionary FDIAs: Attackers can stage multi-step, time-varying or Markovian evolution of the attack vector to induce slow state drift, evade detection, and exploit batch estimation delay (Bo, 13 Jan 2025).
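As a hedged illustration of the affine scaling/reflection attacks above, the sketch below applies a static affine map to a planar trajectory so that the spoofed feedback matches the controller's expectation while the physical path is transformed. The trajectory, scale factor, and reflection axis are hypothetical and are not the robot model or attack parameters of Ueda et al. (2024).

```python
import numpy as np

# Hypothetical planar reference trajectory (a unit circle sampled over time).
t = np.linspace(0.0, 2 * np.pi, 200)
reference = np.stack([np.cos(t), np.sin(t)], axis=1)   # shape (200, 2)

# Static affine attack maps: scaling compresses/expands the path,
# reflection mirrors it about the x-axis (both choices are illustrative).
scale = 0.5 * np.eye(2)
reflect = np.array([[1.0, 0.0],
                    [0.0, -1.0]])

def affine_attack(true_positions, M, b=np.zeros(2)):
    """Physical path driven by the attack: p' = M p + b.
    The attacker reports M^{-1}(p' - b) back to the controller, which
    therefore sees the original reference and a zero tracking residual."""
    physical = true_positions @ M.T + b
    reported = (physical - b) @ np.linalg.inv(M).T
    return physical, reported

for name, M in [("scaling", scale), ("reflection", reflect)]:
    physical, reported = affine_attack(reference, M)
    print(name,
          "| controller-side deviation:", np.abs(reported - reference).max(),   # ~0 (stealthy)
          "| physical deviation:", np.abs(physical - reference).max())          # large (real drift)
```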
3. Detection and Mitigation Paradigms
Classical detection relies on model-driven BDD, flagging measurements when the residual norm crosses a threshold (e.g., via a $\chi^2$ test). This approach is provably insufficient against stealthy FDIAs, necessitating a spectrum of advanced countermeasures:
- Physics-Informed Functions & SMSF: Nonlinear, non-homogeneous signature functions integrated into the plant break affine undetectability by embedding state-based hashes that cannot be matched by simple scaling or reflection. A discrepancy in SMSF observables between plant and controller signals tampering immediately (Ueda et al., 2024).
- Hybrid Estimators: The CHIMERA framework fuses weighted static losses with LSTM-derived dynamic losses, enforcing both spatial and temporal consistency across measurement windows. This approach mitigates >90% of N–2 contingency-altering FDIAs in simulation, outperforming purely physics-based or purely ML methods (Liu et al., 2021).
- Trajectory and State Prediction: LSTM and GNN+LSTM predictors trained on normal system dynamics can detect FDIAs by identifying discrepancies between forecasted and measured state trajectories, even under measurement noise; a simplified sketch of this prediction-residual check follows this list. Adaptive cyclic deployment enhances computational tractability while preserving detection accuracy (Sahu et al., 2024).
- Machine Learning and Graph-Based Techniques: ML classifiers—including SVMs, random forests, CNNs, RNNs, autoencoders—are widely deployed for FDIA binary and multi-label detection, often achieving 95–99% accuracy. GNNs that encode grid topology and spatial correlation further improve detection robustness by capturing physics-induced locality in meter data (Boyaci et al., 2021, Wu et al., 2023). State-of-the-art GAT-augmented causal-inference methods enhance interpretability and drift resilience, achieving near-perfect F1 localization (Wu et al., 2023).
- Adversarial Robustness in Deep Learning: Recent works document high vulnerability of deep-learning-based detection systems (DLSs; MLP, CNN, LSTM, ResNet) to adversarial FDIAs (e.g., FGSM), with misclassification rates >90% in some architectures. Adversarial training substantially hardens these models, reducing fooling rates below 5% without degrading genuine fault detection (Saber et al., 24 Jun 2025, Li et al., 2021); an illustrative FGSM-based training step is sketched after this list.
- Random Input Padding: In DNN-based FDIA detection, random input padding demonstrates resilience to transferability of adversarial examples, restoring detection rates to 90–95% with minimal impact on clean-data accuracy (Li et al., 2021).
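A hedged sketch of the prediction-based detection idea follows. The one-step predictor here is a persistence forecast standing in for a trained LSTM/GNN forecaster, and the threshold, state dimension, and injected bias are illustrative.

```python
import numpy as np

def detect_by_prediction(measured, predict_next, threshold):
    """Flag time steps where the forecast deviates from the measured state.
    `predict_next` stands in for a trained LSTM/GNN one-step predictor."""
    alarms = []
    for k in range(1, len(measured)):
        forecast = predict_next(measured[:k])
        err = np.linalg.norm(measured[k] - forecast)
        alarms.append(err > threshold)
    return np.array(alarms)

# Toy example: a slowly drifting state with a hypothetical bias injected at step 60.
rng = np.random.default_rng(1)
states = np.cumsum(rng.normal(0.0, 0.01, size=(100, 2)), axis=0)
states[60:] += np.array([0.5, -0.5])

# Placeholder predictor: persistence forecast (repeat the last observed state).
persistence = lambda history: history[-1]

alarms = detect_by_prediction(states, persistence, threshold=0.1)
print("first alarm at step:", int(np.argmax(alarms)) + 1)   # flags the injected jump
```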
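The adversarial-training hardening referenced above can be sketched as follows, assuming a generic PyTorch classifier, random toy data, and an FGSM budget `eps`; the architecture, features, and hyperparameters are hypothetical rather than those of the cited works.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps, loss_fn=nn.CrossEntropyLoss()):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps,
                              loss_fn=nn.CrossEntropyLoss()):
    """One training step on a 50/50 mix of clean and FGSM-perturbed samples."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Minimal usage with a toy MLP detector on random measurement-window features.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 32)                 # hypothetical measurement features
y = torch.randint(0, 2, (16,))          # 0 = normal, 1 = FDIA
print("loss:", adversarial_training_step(model, opt, x, y, eps=0.05))
```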
4. Systemic Impact and Defense Strategies
FDIAs have demonstrated capacity for high-impact manipulations:
- Trajectory Manipulation: In mobile robot platforms, affine FDIAs can entirely redirect robot paths while controller error metrics converge to zero, leaving traditional residual-based monitors blinded (Ueda et al., 2024).
- Contingency Screening and Power Flow: In power grids, FDIAs can induce mis-estimation of contingency counts (number of overloaded lines), manipulate real and reactive flows, and disrupt optimal power flow decisions. Optimally designed AC FDIAs can raise line flows by 50% while keeping residual changes below the milli-per-unit level, defeating classic BDD thresholds (Iranpour et al., 2024, Liu et al., 2021).
- Voltage Regulation in EV-Integrated Grids: Sophisticated FDIAs, aware of stochastic EV mobility and communication packet losses, can stealthily compromise voltage regulation capacity estimation, maximizing attack impact subject to BDD constraints via convex SOCP optimization (Liu et al., 2022).
- Integrated Energy Systems: FDIA construction in IEGS must respect coupling constraints between power and gas subnetworks. Even with only topological information, an attacker can stage stealthy compressor flow redistribution on the gas side (Liu et al., 2023).
- Local and Distributed Architectures: In residential DR, aggregation-level FDIAs targeting demand forecasts or real-time price signals can confer significant financial advantage to attackers or destabilize supply-demand equilibrium. Human-in-the-loop and privacy-preserving operation, plus cluster-based correction and anomaly detection, mitigate per-device bill impacts to near-zero while preserving system performance (Dayaratne et al., 2023).
- Cyber-Physical Risk Assessment: Attack success depends on both intrusion cost (a graph-theoretic set-cover over RTUs/meters) and physical vulnerability (how much moving-target defense divergence is required to render the attack detectable). Combining cyber and physical metrics yields a quantitative risk ranking that structures defense investment (Higgins et al., 2022); a toy set-cover sketch follows this list.
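As a hedged illustration of the intrusion-cost component only, the following greedy set-cover sketch estimates how few RTUs an attacker would need to compromise to reach a target set of meters. The RTU-to-meter mapping is hypothetical, and Higgins et al. (2022) formulate the cost metric in their own, more detailed way.

```python
def greedy_rtu_cover(target_meters, rtu_to_meters):
    """Greedy approximation of the minimum set of RTUs whose meters
    cover all measurements the attacker needs to corrupt."""
    uncovered = set(target_meters)
    chosen = []
    while uncovered:
        # Pick the RTU covering the most still-uncovered target meters.
        best = max(rtu_to_meters, key=lambda r: len(uncovered & rtu_to_meters[r]))
        gained = uncovered & rtu_to_meters[best]
        if not gained:
            raise ValueError("targets not coverable by available RTUs")
        chosen.append(best)
        uncovered -= gained
    return chosen

# Hypothetical topology: RTUs and the meters they report.
rtu_to_meters = {
    "RTU-A": {"m1", "m2", "m3"},
    "RTU-B": {"m3", "m4"},
    "RTU-C": {"m4", "m5", "m6"},
}
print(greedy_rtu_cover({"m1", "m4", "m6"}, rtu_to_meters))   # e.g. ['RTU-C', 'RTU-A']
```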
5. Evolution of FDIA Construction Methodologies
FDIA implementations are increasingly evolving beyond static, single-shot attacks:
- Moving Horizon FDIA (MH-FDIA): Classical static FDIAs lose recursive feasibility in windowed batch estimators; only MH-FDIA approaches, which enforce historical consistency across the moving window, can bypass sliding-mode BDD. MH-FDIA formulations solve constrained nonconvex optimization to maximize attack bias under per-window stealth constraints (Zheng et al., 2023); a simplified window-stealth check is sketched after this list.
- Autoencoder and GAN-Based Reconstruction: Upon detection, state-aware reconstruction replaces corrupted data using deep generative models (GAN, VAE) trained on historical normal measurements, preserving observability and minimizing estimation error. These approaches decouple attacked and un-attacked regions, restoring grid monitoring (Bo, 13 Jan 2025).
- Causality-Inference Localization: Causal graphs generated via X-learner algorithms, followed by multi-head GAT classifiers, reveal FDIA-induced violations of physical laws (Ohm’s/Kirchhoff’s), allowing interpretable, drift-robust localization of attacked buses or meters (Wu et al., 2023).
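A minimal sketch of the per-window stealth constraint underlying MH-FDIA is given below, reusing the DC notation of Section 1. The per-step WLS fits are a simplification of a true moving-horizon estimator, and the window length, threshold, and attack schedule are illustrative rather than those of Zheng et al. (2023).

```python
import numpy as np

def window_residual(H, z_window):
    """Aggregate residual over a measurement window via independent per-step
    WLS fits (a simplification of batch/moving-horizon estimation)."""
    res = 0.0
    for z in z_window:
        x_hat = np.linalg.solve(H.T @ H, H.T @ z)
        res += np.linalg.norm(z - H @ x_hat) ** 2
    return np.sqrt(res)

def is_stealthy_over_horizon(H, z_sequence, attack_sequence, window, tau):
    """Simplified stand-in for the MH-FDIA stealth condition: every sliding
    window of attacked measurements must keep the aggregate residual below tau."""
    z_att = [z + a for z, a in zip(z_sequence, attack_sequence)]
    for k in range(len(z_att) - window + 1):
        if window_residual(H, z_att[k:k + window]) > tau:
            return False
    return True

# Illustrative use: a slowly growing column-space attack a_t = H (t * c).
rng = np.random.default_rng(2)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
x_true, sigma, c = np.array([1.2, 0.8]), 0.01, np.array([0.02, -0.01])
z_seq = [H @ x_true + rng.normal(0, sigma, 4) for _ in range(20)]
a_seq = [H @ (t * c) for t in range(20)]
print(is_stealthy_over_horizon(H, z_seq, a_seq, window=5, tau=0.1))   # True: stays in col-space
```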
6. Challenges, Open Problems, and Future Directions
FDIA research faces enduring challenges across several axes:
- Interpretability and Explainability: ML-based FDIA detection suffers from poor model interpretability. Hybrid physics-informed neural networks and causal inference approaches are advocated to close this gap (Bo, 13 Jan 2025, Wu et al., 2023).
- Adversarial ML & Robustness: DLSs are highly vulnerable to adversarial FDIAs. Adversarial training, random padding, and graph-based representation learning offer promising but computationally intensive hardening (Saber et al., 24 Jun 2025, Li et al., 2021).
- Concept Drift, Data Scarcity, and Transferability: Distributional shifts, rare attack samples, and topology changes impair model adaptation. Federated self-supervised learning, GAN-based augmentation, and cross-domain fusion are active directions (Bo, 13 Jan 2025).
- Real-World Deployment and Evaluation: The absence of standardized, large-scale, real-event FDIA datasets and the challenge of hardware-in-the-loop validation limit robust benchmarking. Emphasis is placed on collaborative dataset curation and online, low-latency streaming inference (Husnoo et al., 2021, Irfan et al., 2023).
- Integrated Energy System Security: Extending FDIA frameworks to multi-energy and blockchained transactive systems remains an open frontier, with coupling constraints and cross-domain attack propagation posing particularly difficult theoretical and practical challenges (Liu et al., 2023, Bo, 13 Jan 2025).
- Detection–Mitigation Closed-Loop Resilience: Action-control reconstruction, sliding-mode observers, feedback-gain modulation, and digital twins represent emerging paradigms for maintaining reliable operation post-FDIA detection (Bo, 13 Jan 2025).
7. Representative FDIA Attack and Defense Structures
| Attack Mode / Defense | Description |
|---|---|
| Linear column-space stealth attack | $a = Hc$ leaves the estimation residual unchanged under DC state estimation (Liu et al., 2023, Husnoo et al., 2021) |
| Affine reflection/scaling | Kinematic attacks on mobile robots exploiting Jacobian symmetry (Ueda et al., 2024) |
| Nonlinear AC attack ($a = h(\hat{x}+c) - h(\hat{x})$) | Full AC attack with attack zone boundary fixing (Iranpour et al., 2024, Du et al., 2021) |
| Local region attack | Only local measurements/parameters used, boundary states fixed (Liu et al., 2023) |
| SMSF / signature function | Nonlinear, asymmetric plant function for FDIA detection via output-state hash (Ueda et al., 2024) |
| LSTM/GNN state prediction | Detection via model-driven error between predicted and measured trajectories (Sahu et al., 2024) |
| Random padding defense | DNN input randomization to mitigate adversarial transferability (Li et al., 2021) |
| Adversarial training | Augmentation of DNN training samples with FGSM-crafted FDIAs (Saber et al., 24 Jun 2025) |
The technical landscape of FDIA research, spanning attack vector construction, taxonomy of affected systems, and innovative detection or mitigation schemes, underscores the need for continued development of interpretable, robust, and physics-informed frameworks for security in cyber-physical systems.