Toward Intelligent Emergency Control for Large-scale Power Systems: Convergence of Learning, Physics, Computing and Control (2310.05021v1)

Published 8 Oct 2023 in eess.SY and cs.SY

Abstract: This paper addresses the pressing need for intelligent emergency control in large-scale power systems, which are undergoing significant transformations and operating closer to their limits under greater uncertainty. Learning-based control methods are promising and have shown effectiveness for intelligent power system control. However, when applied to large-scale power systems, they face multifaceted challenges such as scalability, adaptiveness, and security posed by the complex power system landscape, which demand comprehensive solutions. The paper first proposes and instantiates a convergence framework for integrating power system physics, machine learning, advanced computing, and grid control to realize intelligent grid control at a large scale. Our developed methods and platform based on the convergence framework have been applied to a large (more than 3,000-bus) Texas power system and tested on 56,000 scenarios. Our work achieved a 26% reduction in load shedding on average and outperformed existing rule-based control in 99.7% of the test scenarios. The results demonstrate the potential of the proposed convergence framework and DRL-based intelligent control for the future grid.
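The DRL-based emergency load-shedding control described in the abstract can be pictured with a minimal, hypothetical gym-style environment: an agent observes post-fault bus voltages and chooses how much load to shed at each bus, with a reward that penalizes both voltage violations and the amount of load shed. This is an illustrative sketch only; the class name LoadSheddingEnv, the surrogate voltage dynamics, and the fixed-fraction placeholder policy are assumptions for demonstration, not the paper's actual implementation, which relies on full transient simulations of the Texas system.

import numpy as np


class LoadSheddingEnv:
    """Toy gym-style environment (assumed interface, not the paper's code):
    the agent sheds load at a few buses to recover depressed post-fault
    voltages; the reward penalizes voltage violations and shed load,
    mirroring the objective of minimizing load shedding."""

    def __init__(self, n_buses=5, seed=0):
        self.n_buses = n_buses
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Post-fault voltages depressed below nominal (1.0 p.u.).
        self.voltage = 0.7 + 0.1 * self.rng.random(self.n_buses)
        self.step_count = 0
        return self.voltage.copy()

    def step(self, action):
        # action: fraction of load to shed at each bus, in [0, 1].
        shed = np.clip(action, 0.0, 1.0)
        # Crude surrogate dynamics: shedding load raises local voltage.
        self.voltage = np.minimum(
            1.0, self.voltage + 0.3 * shed + 0.02 * self.rng.random(self.n_buses)
        )
        self.step_count += 1
        # Penalize voltages below 0.94 p.u. and the total load shed.
        violation = np.maximum(0.0, 0.94 - self.voltage).sum()
        reward = -(10.0 * violation + shed.sum())
        done = self.step_count >= 10 or violation == 0.0
        return self.voltage.copy(), reward, done, {}


if __name__ == "__main__":
    env = LoadSheddingEnv()
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        # Placeholder policy: shed a small uniform fraction each step;
        # a trained DRL agent would replace this with a learned policy.
        obs, r, done, _ = env.step(np.full(env.n_buses, 0.1))
        total += r
    print(f"episode return: {total:.2f}")

In the paper's setting, the step function would be backed by a high-performance transient simulator and the rollouts parallelized across many scenarios; the sketch above only shows the agent-environment interface such a DRL controller would train against.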

