
Adaptive Power System Emergency Control using Deep Reinforcement Learning (1903.03712v2)

Published 9 Mar 2019 in cs.LG, cs.SY, and stat.ML

Abstract: Power system emergency control is generally regarded as the last safety net for grid security and resiliency. Existing emergency control schemes are usually designed off-line based on either the conceived "worst" case scenario or a few typical operation scenarios. These schemes are facing significant adaptiveness and robustness issues as increasing uncertainties and variations occur in modern electrical grids. To address these challenges, for the first time, this paper developed novel adaptive emergency control schemes using deep reinforcement learning (DRL), by leveraging the high-dimensional feature extraction and non-linear generalization capabilities of DRL for complex power systems. Furthermore, an open-source platform named RLGC has been designed for the first time to assist the development and benchmarking of DRL algorithms for power system control. Details of the platform and DRL-based emergency control schemes for generator dynamic braking and under-voltage load shedding are presented. Extensive case studies performed in both two-area four-machine system and IEEE 39-Bus system have demonstrated the excellent performance and robustness of the proposed schemes.

Authors (6)
  1. Qiuhua Huang (27 papers)
  2. Renke Huang (23 papers)
  3. Weituo Hao (16 papers)
  4. Jie Tan (85 papers)
  5. Rui Fan (113 papers)
  6. Zhenyu Huang (18 papers)
Citations (254)

Summary

  • The paper demonstrates that deep reinforcement learning significantly improves power system emergency control by optimizing generator dynamic braking and under-voltage load shedding.
  • It frames emergency control as a Markov decision process, employing DRL algorithms like DQN and PPO to achieve robustness under uncertainty and disturbances.
  • The research illustrates DRL's scalability and real-time effectiveness, outperforming traditional methods and offering a path for future grid innovations.

Adaptive Power System Emergency Control using Deep Reinforcement Learning

The paper presents an innovative approach to address the challenges in adaptive emergency control for power systems, utilizing deep reinforcement learning (DRL) methodologies. The authors develop novel schemes that enhance the adaptiveness and robustness of control solutions under the uncertainties and disturbances faced by modern electrical grids. By leveraging DRL's strengths in high-dimensional feature extraction and non-linear generalization, the authors tackle the intricate task of power system emergency control, particularly focusing on generator dynamic braking and under-voltage load shedding (UVLS).

Overview and Concepts

Power system emergency control is crucial to maintaining grid security and resilience. Traditional control schemes are typically designed offline against either worst-case or a handful of typical scenarios, and therefore lack adaptability in real-time operation as uncertainty in the grid grows. To address these issues, the paper applies DRL, a methodology that has proven effective for sequential decision-making under substantial uncertainty.

DRL combines reinforcement learning (RL) with deep neural networks, allowing agents to work directly from raw state representations and enabling scalable solutions for complex systems and tasks. The paper explores DRL's application to power systems through an open-source platform, Reinforcement Learning for Grid Control (RLGC), designed to facilitate the development and benchmarking of DRL algorithms for power system control tasks.
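
Because RLGC exposes power system simulations through an OpenAI Gym-compatible interface, agent code interacts with it in the usual reset/step loop. The sketch below is a minimal illustration of that pattern only; the class name, observation contents, and method bodies are hypothetical stand-ins, not the exact RLGC API.

```python
import numpy as np

class PowerDynSimEnv:
    """Hypothetical stand-in for an RLGC-style Gym environment; the real
    platform wires these calls to a power system dynamic simulator."""

    def reset(self):
        # Initial observation, e.g. bus voltages and generator rotor states.
        return np.zeros(8)

    def step(self, action):
        # Apply the control action (e.g. brake on/off or a load-shed level),
        # advance the simulation one control step, and return the Gym tuple.
        obs, reward, done, info = np.zeros(8), 0.0, True, {}
        return obs, reward, done, info

env = PowerDynSimEnv()
obs, done = env.reset(), False
while not done:
    action = 0  # a trained DRL agent would choose this from obs
    obs, reward, done, info = env.step(action)
```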

Methodology and Implementation

The paper provides a detailed overview of how power system emergency control problems can be framed as Markov decision processes (MDPs) and addressed using DRL. These control problems are fundamentally dynamic, involving decision-making under uncertain conditions. DRL algorithms, such as Deep Q-Network (DQN) and Proximal Policy Optimization (PPO), are employed for their robust feature extraction and adaptive control capabilities.
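
Concretely, the MDP view assigns the agent an observation of the post-disturbance system state, a control action, and a reward reflecting the control objectives; DQN then fits an action-value function by minimizing the standard temporal-difference loss. The formulas below are the textbook statements of these objectives, not transcribed from the paper:

```latex
% Discounted return maximized by the agent's policy
G_t = \sum_{k=0}^{\infty} \gamma^k \, r_{t+k+1}, \qquad 0 < \gamma \le 1

% DQN temporal-difference loss, with replay buffer D and
% target-network parameters \theta^-
L(\theta) = \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}}
  \left[ \left( r + \gamma \max_{a'} Q(s', a'; \theta^-)
  - Q(s, a; \theta) \right)^2 \right]
```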

The authors present two case studies to illustrate the efficacy of the developed DRL framework:

  1. Generator Dynamic Braking: This control scheme addresses the stability issues caused by large disturbances by activating a resistive brake. The DRL model learns to initiate braking selectively, using generator rotor speed and angle as inputs. This adaptive scheme outperforms traditional Q-learning methods, exhibiting improved robustness against parameter uncertainties and observation noise.
  2. Under-voltage Load Shedding: To tackle fault-induced delayed voltage recovery (FIDVR), DRL determines load shedding actions at critical buses. The learned policy manages post-disturbance voltage recovery effectively, outperforming conventional UVLS relay protection and model-dependent methods such as Model Predictive Control (MPC).

The paper emphasizes the importance of meticulous problem formulation for DRL applications, including defining observations, actions, and reward structures that align with power system operational objectives.
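
To make that formulation concrete, the sketch below shows one way a UVLS reward could combine a voltage-recovery penalty with a penalty on the amount of load shed. The function names, thresholds, and coefficients are illustrative assumptions for this sketch, not the paper's exact reward design.

```python
import numpy as np

def recovery_threshold(t_since_fault):
    """Illustrative voltage-recovery envelope: buses should clear a low
    threshold soon after fault clearing and approach nominal later.
    Breakpoints and levels are assumptions, not the paper's criterion."""
    if t_since_fault < 0.33:
        return 0.70
    if t_since_fault < 0.5:
        return 0.80
    return 0.95

def uvls_reward(bus_voltages, load_shed_frac, t_since_fault,
                c_violation=10.0, c_shed=1.0):
    """Illustrative UVLS reward: penalize voltages below the recovery
    envelope and penalize the load shed this step, so the agent restores
    voltage while shedding no more load than necessary."""
    v_min = recovery_threshold(t_since_fault)
    violation = np.sum(np.maximum(0.0, v_min - bus_voltages))
    return -(c_violation * violation + c_shed * load_shed_frac)

# Example: three monitored buses 0.5 s after the fault, 5% load shed.
r = uvls_reward(np.array([0.92, 0.97, 0.93]), 0.05, t_since_fault=0.5)
print(r)  # negative reward driven by the two buses below 0.95 p.u.
```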

Implications and Future Directions

The implications of this research are significant: the paper not only demonstrates the potential of DRL to enhance power system emergency control but also addresses the scalability and robustness required for real-world grid applications. Key strengths include the model-free nature of DRL, computation times fast enough for real-time application, and robustness to modeling inaccuracies and varying system conditions.

Future research would benefit from enhancing the RLGC platform's capabilities, extending DRL's application to larger power systems and other control actions, and integrating safety guarantees in learning algorithms. Exploring more advanced reinforcement learning techniques may further strengthen these adaptive control solutions.

In conclusion, the paper provides a comprehensive study of the application of DRL to power system emergency control, demonstrating that DRL-based approaches can adapt effectively to the complexities and uncertainties inherent in modern electrical grids.