A Scalable Network-Aware Multi-Agent Reinforcement Learning Framework for Decentralized Inverter-based Voltage Control (2312.04371v1)

Published 7 Dec 2023 in math.OC, cs.LG, cs.MA, cs.SY, and eess.SY

Abstract: This paper addresses the challenges of decentralized voltage control in power grids driven by the increasing penetration of distributed generators (DGs). Traditional model-based voltage control methods struggle with the rapid energy fluctuations and uncertainties these DGs introduce. While multi-agent reinforcement learning (MARL) has shown potential for decentralized secondary control, scalability issues arise when the number of DGs grows large. The bottleneck lies in the dominant centralized training and decentralized execution (CTDE) framework, in which the critics take global observations and actions as input. To overcome this, we propose a scalable network-aware (SNA) framework that leverages the network structure to truncate the input to each critic's Q-function, improving scalability and reducing communication costs during training. Further, the SNA framework is theoretically grounded with a provable approximation guarantee, and it integrates seamlessly with multiple multi-agent actor-critic algorithms. The proposed framework is successfully demonstrated on a system with 114 DGs, offering a promising solution for decentralized voltage control in increasingly complex power grids.
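The core idea of the abstract — replacing the global critic input of standard CTDE with an input truncated to each agent's local neighborhood on the grid's communication graph — can be sketched as follows. This is a minimal, hypothetical illustration of k-hop truncation, not the paper's implementation; the names `k_hop_neighbors` and `truncated_critic_input`, the toy graph, and the choice of k are all assumptions for illustration.

```python
# Sketch of neighborhood-truncated critic input (illustrative only).
# A standard CTDE critic for agent i would consume the full global
# (observations, actions) vector; the network-aware variant restricts
# it to agents within k hops of i on the communication graph.

from collections import deque

def k_hop_neighbors(adj, agent, k):
    """BFS to collect all nodes within k hops of `agent` (inclusive)."""
    seen = {agent}
    frontier = deque([(agent, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand past the k-hop boundary
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return sorted(seen)

def truncated_critic_input(obs, acts, adj, agent, k):
    """Concatenate observations and actions of agents within k hops only,
    instead of the full global (obs, acts) used by a vanilla CTDE critic."""
    hood = k_hop_neighbors(adj, agent, k)
    return [obs[i] for i in hood] + [acts[i] for i in hood]

# Toy 5-agent line graph: 0 - 1 - 2 - 3 - 4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
obs = [0.95, 1.01, 0.98, 1.03, 0.97]   # e.g. per-bus voltage readings
acts = [0.10, -0.20, 0.00, 0.30, -0.10]  # e.g. reactive power setpoints

x = truncated_critic_input(obs, acts, adj, agent=2, k=1)
# Agent 2's critic sees only agents {1, 2, 3}: 6 numbers instead of 10.
```

With 114 DGs, this truncation is what keeps the critic's input dimension (and the training-time communication) bounded by neighborhood size rather than growing with the whole network; the paper's approximation guarantee bounds the error incurred by discarding the far-away agents.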
