
Safe Reinforcement Learning for Emergency Load Shedding of Power Systems

Published 17 Nov 2020 in eess.SY and cs.SY | (2011.09664v1)

Abstract: The paradigm shift in the electric power grid necessitates a revisit of existing control methods to ensure the grid's security and resilience. In particular, the increased uncertainties and rapidly changing operational conditions in power systems have revealed outstanding issues in terms of the speed, adaptiveness, and scalability of existing power system control methods. On the other hand, the availability of massive real-time data can provide a clearer picture of what is happening in the grid. Recently, deep reinforcement learning (RL) has been regarded and adopted as a promising approach that leverages massive data for fast and adaptive grid control. However, like most existing ML-based control techniques, RL control usually cannot guarantee the safety of the systems under control. In this paper, we introduce a novel method for safe RL-based load shedding of power systems that can enhance the safe voltage recovery of the electric power grid after experiencing faults. Numerical simulations on the 39-bus IEEE benchmark are performed to demonstrate the effectiveness of the proposed safe RL emergency control, as well as its adaptive capability to faults not seen during training.
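The abstract describes steering an RL load-shedding agent toward safe voltage recovery without spelling out a mechanism. A common way to encode such a safety objective is a reward that penalizes bus voltages leaving a prescribed recovery envelope; the sketch below illustrates that general idea only. The voltage band, penalty weights, and function name are illustrative assumptions, not the paper's actual formulation or values.

```python
# Hedged sketch (not the paper's method): a reward that trades off the cost
# of shedding load against a penalty for bus voltages straying outside a
# safe per-unit band, so an RL agent is pushed toward safe voltage recovery.

def safety_penalized_reward(voltages, shed_fraction,
                            v_min=0.95, v_max=1.05,
                            w_violation=100.0, w_shed=10.0):
    """Return a scalar reward: a small cost for the fraction of load shed,
    and a large cost for total voltage deviation outside [v_min, v_max]."""
    # Sum of per-bus excursions below v_min or above v_max (zero if safe).
    violation = sum(max(v_min - v, 0.0) + max(v - v_max, 0.0)
                    for v in voltages)
    return -w_violation * violation - w_shed * shed_fraction

# All buses inside the safe band: only the load-shedding cost remains.
r_safe = safety_penalized_reward([1.00, 0.98, 1.01], shed_fraction=0.1)
# One bus stuck at 0.85 p.u. after a fault: the violation term dominates.
r_unsafe = safety_penalized_reward([1.00, 0.85, 1.01], shed_fraction=0.1)
```

In this shape, a policy that sheds slightly more load but keeps all voltages inside the band earns a higher reward than one that sheds less and lets a bus voltage sag, which is the qualitative behavior a safe load-shedding controller needs.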

Citations (17)
