
Safe Reinforcement Learning for Emergency Load Shedding of Power Systems (2011.09664v1)

Published 17 Nov 2020 in eess.SY and cs.SY

Abstract: The paradigm shift in the electric power grid necessitates a revisit of existing control methods to ensure the grid's security and resilience. In particular, the increased uncertainties and rapidly changing operational conditions in power systems have revealed outstanding issues in terms of the speed, adaptiveness, or scalability of existing control methods for power systems. On the other hand, the availability of massive real-time data can provide a clearer picture of what is happening in the grid. Recently, deep reinforcement learning (RL) has been regarded and adopted as a promising approach leveraging massive data for fast and adaptive grid control. However, like most existing ML-based control techniques, RL control usually cannot guarantee the safety of the systems under control. In this paper, we introduce a novel method for safe RL-based load shedding of power systems that can enhance the safe voltage recovery of the electric power grid after experiencing faults. Numerical simulations on the IEEE 39-bus benchmark are performed to demonstrate the effectiveness of the proposed safe RL emergency control, as well as its adaptive capability to faults not seen during training.
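The abstract does not specify how the safety guarantee is enforced; one common pattern for safe RL in this setting is to filter the policy's candidate actions through a safety check before applying them. The sketch below is a hedged toy illustration of that general idea only: the voltage dynamics, shed levels, `V_MIN` threshold, and greedy fallback policy are all illustrative assumptions, not the paper's actual formulation.

```python
# Toy safety-filtered load shedding: candidate shed actions are kept only
# if a (simplified, assumed) one-step voltage model predicts the bus
# voltage stays above a safety threshold.
SHED_LEVELS = [0.0, 0.1, 0.2, 0.3]  # fraction of load shed per step (assumed)
V_MIN = 0.90                        # minimum allowed voltage, p.u. (assumed)

def next_voltage(v, shed):
    """Illustrative one-step dynamics: voltage sags slightly each step
    unless enough load is shed to lift it; capped at 1.0 p.u."""
    return min(1.0, v - 0.03 + 0.6 * shed)

def safe_actions(v):
    """Safety filter: keep only shed levels whose predicted next voltage
    stays at or above V_MIN."""
    return [a for a in SHED_LEVELS if next_voltage(v, a) >= V_MIN]

def run_episode(v0, steps=5):
    """Roll out a simple policy restricted to the filtered action set.
    The min-shed choice stands in for a learned RL policy."""
    v, trajectory = v0, [v0]
    for _ in range(steps):
        candidates = safe_actions(v) or [max(SHED_LEVELS)]  # fall back to max shed
        a = min(candidates)
        v = next_voltage(v, a)
        trajectory.append(v)
    return trajectory

traj = run_episode(0.92)
```

In this toy rollout the filter rejects the do-nothing action whenever it would push the voltage below `V_MIN`, so the trajectory never violates the constraint regardless of what the underlying policy prefers.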

Authors (6)
  1. Thanh Long Vu (25 papers)
  2. Sayak Mukherjee (31 papers)
  3. Tim Yin (1 paper)
  4. Renke Huang (23 papers)
  5. Jie Tan (1 paper)
  6. Qiuhua Huang (27 papers)
Citations (17)
