Centralized vs. Decentralized Multi-Agent Reinforcement Learning for Enhanced Control of Electric Vehicle Charging Networks (2404.12520v1)
Abstract: The widespread adoption of electric vehicles (EVs) poses several challenges to power distribution networks and smart grid infrastructure due to potentially significant increases in electricity demand, especially during peak hours. Furthermore, when EVs participate in demand-side management programs, charging expenses can be reduced through optimal charging control policies that fully exploit real-time pricing schemes. However, devising optimal charging methods and control strategies for EVs is challenging due to various stochastic and uncertain environmental factors. Currently, most EV charging controllers operate based on a centralized model. In this paper, we introduce a novel distributed and cooperative charging strategy based on a Multi-Agent Reinforcement Learning (MARL) framework. Our method builds upon the Deep Deterministic Policy Gradient (DDPG) algorithm for a group of EVs in a residential community, where all EVs are connected to a shared transformer. This method, referred to as CTDE-DDPG, adopts a Centralized Training Decentralized Execution (CTDE) approach to establish cooperation between agents during the training phase, while ensuring distributed and privacy-preserving operation during execution. We theoretically examine the performance of centralized and decentralized critics for the DDPG-based MARL implementation and demonstrate their trade-offs. Furthermore, we numerically explore the efficiency, scalability, and performance of centralized and decentralized critics. Our theoretical and numerical results indicate that, despite higher policy gradient variances and training complexity, the CTDE-DDPG framework significantly improves charging efficiency, reducing total variation by approximately 36% and charging cost by around 9.1% on average...
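The CTDE structure described in the abstract can be sketched as follows. This is a minimal structural illustration only, with toy linear policies standing in for the paper's DDPG actor and critic networks; the agent count, observation/action dimensions, and class names are all hypothetical and not taken from the paper. The key point it shows is that each EV agent's actor consumes only its local observation, while a single centralized critic evaluates the joint observation-action tuple and is consulted only during training, so execution remains decentralized and privacy-preserving.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes, not from the paper.
N_AGENTS, OBS_DIM, ACT_DIM = 3, 4, 1

class Actor:
    """Decentralized policy: maps the agent's LOCAL observation to a
    bounded continuous charging action (tanh keeps it in [-1, 1])."""
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))

    def act(self, obs):
        return np.tanh(self.W @ obs)

class CentralCritic:
    """Centralized critic Q(o_1..o_N, a_1..a_N): sees all agents'
    observations and actions, but is used only during training."""
    def __init__(self):
        dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.w = rng.normal(scale=0.1, size=dim)

    def q(self, joint_obs, joint_act):
        x = np.concatenate([*joint_obs, *joint_act])
        return float(self.w @ x)

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

# Training-time step: the critic evaluates the joint state-action,
# giving a global signal that coordinates all actors' updates.
obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
acts = [a.act(o) for a, o in zip(actors, obs)]
q_value = critic.q(obs, acts)

# Execution-time step: each agent acts on its own observation alone;
# the centralized critic is no longer needed.
local_actions = [actors[i].act(obs[i]) for i in range(N_AGENTS)]
print("joint Q during training:", round(q_value, 4))
print("decentralized actions:", [float(a[0]) for a in local_actions])
```

In an actual DDPG implementation the linear maps above would be neural networks trained with a replay buffer and target networks; the structural split (per-agent actors, one joint critic) is what distinguishes the CTDE variant from fully decentralized critics.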
- N. I. Nimalsiri, C. P. Mediwaththe, E. L. Ratnam, M. Shaw, D. B. Smith, and S. K. Halgamuge, “A survey of algorithms for distributed charging control of electric vehicles in smart grid,” IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 11, pp. 4497–4515, 2019.
- A. S. Al-Ogaili, T. J. T. Hashim, N. A. Rahmat, A. K. Ramasamy, M. B. Marsadek, M. Faisal, and M. A. Hannan, “Review on scheduling, clustering, and forecasting strategies for controlling electric vehicle charging: Challenges and recommendations,” IEEE Access, vol. 7, 2019.
- H. M. Abdullah, A. Gastli, and L. Ben-Brahim, “Reinforcement learning based EV charging management systems–a review,” IEEE Access, vol. 9, pp. 41506–41531, 2021.
- B. Sun, Z. Huang, X. Tan, and D. H. Tsang, “Optimal scheduling for electric vehicle charging with discrete charging levels in distribution grid,” IEEE Transactions on Smart Grid, vol. 9, no. 2, pp. 624–634, 2016.
- N. G. Paterakis, O. Erdinç, I. N. Pappi, A. G. Bakirtzis, and J. P. Catalão, “Coordinated operation of a neighborhood of smart households comprising electric vehicles, energy storage and distributed generation,” IEEE Transactions on Smart Grid, vol. 7, no. 6, pp. 2736–2747, 2016.
- M. A. Ortega-Vazquez, “Optimal scheduling of electric vehicle charging and vehicle-to-grid services at household level including battery degradation and price uncertainty,” IET Generation, Transmission & Distribution, vol. 8, no. 6, pp. 1007–1016, 2014.
- D. Wu, H. Zeng, C. Lu, and B. Boulet, “Two-stage energy management for office buildings with workplace EV charging and renewable energy,” IEEE Transactions on Transportation Electrification, vol. 3, no. 1, 2017.
- Y. Zheng, Y. Song, D. J. Hill, and K. Meng, “Online distributed MPC-based optimal scheduling for EV charging stations in distribution systems,” IEEE Transactions on Industrial Informatics, vol. 15, no. 2, 2018.
- Y. Xu, F. Pan, and L. Tong, “Dynamic scheduling for charging electric vehicles: A priority rule,” IEEE Transactions on Automatic Control, vol. 61, no. 12, pp. 4094–4099, 2016.
- Z. Wan, H. Li, H. He, and D. Prokhorov, “Model-free real-time EV charging scheduling based on deep reinforcement learning,” IEEE Transactions on Smart Grid, vol. 10, no. 5, pp. 5246–5257, 2018.
- Y. Zhang, X. Rao, C. Liu, X. Zhang, and Y. Zhou, “A cooperative EV charging scheduling strategy based on double deep Q-network and prioritized experience replay,” Engineering Applications of Artificial Intelligence, vol. 118, p. 105642, 2023.
- A. Chiş, J. Lundén, and V. Koivunen, “Reinforcement learning-based plug-in electric vehicle charging with forecasted price,” IEEE Transactions on Vehicular Technology, vol. 66, no. 5, pp. 3674–3684, 2016.
- J. Jin and Y. Xu, “Shortest-path-based deep reinforcement learning for EV charging routing under stochastic traffic condition and electricity prices,” IEEE Internet of Things Journal, vol. 9, no. 22, pp. 22571–22581, 2022.
- Y. Cao, H. Wang, D. Li, and G. Zhang, “Smart online charging algorithm for electric vehicles via customized actor–critic learning,” IEEE Internet of Things Journal, vol. 9, no. 1, pp. 684–694, 2022.
- F. Zhang, Q. Yang, and D. An, “CDDPG: A deep-reinforcement-learning-based approach for electric vehicle charging control,” IEEE Internet of Things Journal, vol. 8, no. 5, pp. 3075–3087, 2020.
- J. Jin and Y. Xu, “Optimal policy characterization enhanced actor-critic approach for electric vehicle charging scheduling in a power distribution network,” IEEE Transactions on Smart Grid, pp. 1416–1428, 2021.
- R. Lowe, Y. I. Wu, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch, “Multi-agent actor-critic for mixed cooperative-competitive environments,” Advances in Neural Information Processing Systems, 2017.
- X. Lyu, Y. Xiao, B. Daley, and C. Amato, “Contrasting centralized and decentralized critics in multi-agent reinforcement learning,” arXiv preprint arXiv:2102.04402, 2021.
- J. G. Kuba, M. Wen, L. Meng, H. Zhang, D. Mguni, J. Wang, Y. Yang et al., “Settling the variance of multi-agent policy gradients,” Advances in Neural Information Processing Systems, vol. 34, pp. 13458–13470, 2021.
- A. Shojaeighadikolaei and M. Hashemi, “An efficient distributed multi-agent reinforcement learning for EV charging network control,” in 2023 59th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2023, pp. 1–8.
- D. Said and H. T. Mouftah, “A novel electric vehicles charging/discharging management protocol based on queuing model,” IEEE Transactions on Intelligent Vehicles, vol. 5, no. 1, pp. 100–111, 2020.
- J. Wang, G. R. Bharati, S. Paudyal, O. Ceylan, B. P. Bhattarai, and K. S. Myers, “Coordinated electric vehicle charging with reactive power support to distribution grids,” IEEE Transactions on Industrial Informatics, vol. 15, no. 1, pp. 54–63, 2019.
- C. B. Saner, A. Trivedi, and D. Srinivasan, “A cooperative hierarchical multi-agent system for EV charging scheduling in presence of multiple charging stations,” IEEE Transactions on Smart Grid, vol. 13, no. 3, 2022.
- L. Tao and Y. Gao, “Real-time pricing for smart grid with distributed energy and storage: A noncooperative game method considering spatially and temporally coupled constraints,” International Journal of Electrical Power & Energy Systems, vol. 115, p. 105487, 2020.
- T. Qian, C. Shao, X. Li, X. Wang, Z. Chen, and M. Shahidehpour, “Multi-agent deep reinforcement learning method for EV charging station game,” IEEE Transactions on Power Systems, vol. 37, no. 3, pp. 1682–1694, 2022.
- Y. Lu, Y. Liang, Z. Ding, Q. Wu, T. Ding, and W.-J. Lee, “Deep reinforcement learning-based charging pricing for autonomous mobility-on-demand system,” IEEE Transactions on Smart Grid, vol. 13, no. 2, pp. 1412–1426, 2022.
- S. Li, W. Hu, D. Cao, Z. Zhang, Q. Huang, Z. Chen, and F. Blaabjerg, “A multiagent deep reinforcement learning based approach for the optimization of transformer life using coordinated electric vehicles,” IEEE Transactions on Industrial Informatics, vol. 18, pp. 7639–7652, 2022.
- Y. Wang, D. Qiu, G. Strbac, and Z. Gao, “Coordinated electric vehicle active and reactive power control for active distribution networks,” IEEE Transactions on Industrial Informatics, vol. 19, pp. 1611–1622, 2023.
- Y. Chu, Z. Wei, X. Fang, S. Chen, and Y. Zhou, “A multiagent federated reinforcement learning approach for plug-in electric vehicle fleet charging coordination in a residential community,” IEEE Access, vol. 10, pp. 98535–98548, 2022.
- Z. Zhang, Y. Jiang, Y. Shi, Y. Shi, and W. Chen, “Federated reinforcement learning for real-time electric vehicle charging and discharging control,” in 2022 IEEE Globecom Workshops (GC Wkshps). IEEE, 2022.
- J. Qian, Y. Jiang, X. Liu, Q. Wang, T. Wang, Y. Shi, and W. Chen, “Federated reinforcement learning for electric vehicles charging control on distribution networks,” IEEE Internet of Things Journal, vol. 11, no. 3, pp. 5511–5525, 2024.
- L. Yan, X. Chen, Y. Chen, and J. Wen, “A cooperative charging control strategy for electric vehicles based on multiagent deep reinforcement learning,” IEEE Transactions on Industrial Informatics, vol. 18, no. 12, pp. 8765–8775, 2022.
- A.-H. Mohsenian-Rad, V. W. Wong, J. Jatskevich, R. Schober, and A. Leon-Garcia, “Autonomous demand-side management based on game-theoretic energy consumption scheduling for the future smart grid,” IEEE Transactions on Smart Grid, vol. 1, no. 3, pp. 320–331, 2010.
- R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour, “Policy gradient methods for reinforcement learning with function approximation,” Advances in Neural Information Processing Systems, vol. 12, 1999.
- D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller, “Deterministic policy gradient algorithms,” in International Conference on Machine Learning. PMLR, 2014, pp. 387–395.
- R. J. Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” Machine Learning, pp. 229–256, 1992.
- L. Weaver and N. Tao, “The optimal reward baseline for gradient-based reinforcement learning,” arXiv preprint arXiv:1301.2315, 2013.
- V. Konda and J. Tsitsiklis, “Actor-critic algorithms,” Advances in Neural Information Processing Systems, vol. 12, 1999.
- X. Lyu, A. Baisero, Y. Xiao, B. Daley, and C. Amato, “On centralized critics in multi-agent reinforcement learning,” Journal of Artificial Intelligence Research, vol. 77, pp. 295–354, 2023.
- J. Avigad, E. T. Dean, and J. Rute, “Algorithmic randomness, reverse mathematics, and the dominated convergence theorem,” Annals of Pure and Applied Logic, vol. 163, no. 12, pp. 1854–1864, 2012.
- L. Wang, Z. Zhu, C. Jiang, and Z. Li, “Bi-level robust optimization for distribution system with multiple microgrids considering uncertainty distribution locational marginal price,” IEEE Transactions on Smart Grid, vol. 12, no. 2, pp. 1104–1117, 2021.
- Amin Shojaeighadikolaei
- Zsolt Talata
- Morteza Hashemi