Deep Reinforcement Learning for Voltage Control and Renewable Accommodation Using Spatial-Temporal Graph Information (2401.15848v1)
Abstract: Renewable energy resources (RERs) have been increasingly integrated into distribution networks (DNs) for decarbonization. However, the variable nature of RERs introduces uncertainties into DNs, frequently causing voltage fluctuations that threaten system security and hamper further RER adoption. To incentivize greater RER penetration, we propose a deep reinforcement learning (DRL)-based strategy that dynamically balances the trade-off between voltage fluctuation control and renewable accommodation. To extract multi-time-scale spatial-temporal (ST) graphical information from a DN, our strategy draws on a multi-grained attention-based spatial-temporal graph convolution network (MG-ASTGCN), consisting of an ST attention mechanism and ST convolutions that explore node correlations in both the spatial and temporal views. The continuous decision-making process of balancing this trade-off is modeled as a Markov decision process and optimized by the deep deterministic policy gradient (DDPG) algorithm with the help of the derived ST information. We validate our strategy on modified IEEE 33-, 69-, and 118-bus radial distribution systems, where it significantly outperforms optimization-based benchmarks. Simulations also reveal that the developed MG-ASTGCN substantially accelerates DDPG's convergence and improves its performance in stabilizing node voltages in an RER-rich DN. Moreover, our method improves the DN's robustness in the presence of generator failures.
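To make the DDPG side of the abstract concrete, the sketch below shows one actor-critic update step of a generic DDPG agent in PyTorch, assuming the MG-ASTGCN encoder has already compressed the network's spatial-temporal graph state into a fixed-size embedding. All dimensions, layer sizes, and hyperparameters (`STATE_DIM`, `ACTION_DIM`, `GAMMA`, `TAU`) are hypothetical placeholders for illustration and do not reproduce the paper's implementation.

```python
# Minimal, illustrative DDPG update step (PyTorch); not the paper's code.
import torch
import torch.nn as nn

STATE_DIM = 64    # hypothetical size of the ST-graph embedding per time step
ACTION_DIM = 8    # hypothetical number of controllable RER/reactive set-points
GAMMA, TAU = 0.99, 0.005

class Actor(nn.Module):
    """Deterministic policy: maps the ST-graph state embedding to set-points in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Tanh())
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q-network: scores a (state, action) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# One gradient step on a dummy mini-batch of transitions (s, a, r, s').
# In the voltage-control setting, r would encode the trade-off between
# voltage deviation penalties and renewable accommodation.
batch = 32
s = torch.randn(batch, STATE_DIM)
a = torch.rand(batch, ACTION_DIM) * 2 - 1
r = torch.randn(batch, 1)
s_next = torch.randn(batch, STATE_DIM)

# Critic: regress Q(s, a) onto the bootstrapped target r + gamma * Q'(s', mu'(s')).
with torch.no_grad():
    q_target = r + GAMMA * critic_tgt(s_next, actor_tgt(s_next))
critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

# Actor: ascend the critic's estimate of Q(s, mu(s)).
actor_loss = -critic(s, actor(s)).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Polyak (soft) update of the target networks.
for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
    for p_t, p in zip(tgt.parameters(), src.parameters()):
        p_t.data.mul_(1 - TAU).add_(TAU * p.data)
```

In the full method described by the abstract, the random state tensors above would be replaced by MG-ASTGCN embeddings of the DN's node measurements, and the bounded actions would be mapped to the physical control ranges of the renewable and reactive power devices.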
Authors: Jinhao Li, Ruichang Zhang, Hao Wang, Zhi Liu, Hongyang Lai, and Yanru Zhang