
Peer-to-Peer Energy Trading of Solar and Energy Storage: A Networked Multiagent Reinforcement Learning Approach (2401.13947v3)

Published 25 Jan 2024 in eess.SY, cs.LG, cs.MA, and cs.SY

Abstract: Utilizing distributed renewable and energy storage resources in local distribution networks via peer-to-peer (P2P) energy trading has long been touted as a solution to improve energy systems' resilience and sustainability. Consumers and prosumers (those who have energy generation resources), however, do not have the expertise to engage in repeated P2P trading, and the zero marginal costs of renewables present challenges in determining fair market prices. To address these issues, we propose multi-agent reinforcement learning (MARL) frameworks to help automate consumers' bidding and the management of their solar PV and energy storage resources, under a specific P2P clearing mechanism that utilizes the so-called supply-demand ratio. In addition, we show how the MARL frameworks can integrate physical network constraints to realize voltage control, hence ensuring the physical feasibility of the P2P energy trading and paving the way for real-world implementations.
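
For concreteness, the sketch below illustrates one common form of supply-demand-ratio (SDR) based clearing, assuming the P2P price interpolates linearly between a feed-in tariff and a utility retail price. The function name, the tariff values, and the linear interpolation rule are illustrative assumptions, not necessarily the paper's exact clearing mechanism.

```python
# Illustrative sketch only: a supply-demand-ratio (SDR) based P2P clearing price.
# The linear interpolation rule and the tariff values below are assumptions for
# demonstration, not the paper's exact clearing mechanism.

def sdr_clearing_price(total_sell_kwh: float,
                       total_buy_kwh: float,
                       feed_in_tariff: float = 0.04,  # $/kWh, assumed value
                       retail_price: float = 0.12) -> float:  # $/kWh, assumed value
    """Map the supply-demand ratio (total offers / total bids) to a P2P price.

    SDR = 0 (no local supply)   -> price at the utility retail rate.
    SDR >= 1 (supply >= demand) -> price at the feed-in tariff.
    0 < SDR < 1                 -> linear interpolation between the two.
    """
    if total_buy_kwh <= 0.0:
        return feed_in_tariff  # no local demand: surplus is valued at the feed-in tariff
    sdr = total_sell_kwh / total_buy_kwh
    if sdr >= 1.0:
        return feed_in_tariff
    return retail_price - (retail_price - feed_in_tariff) * sdr


# Example: 30 kWh offered against 50 kWh of bids -> SDR = 0.6, price ~= 0.072 $/kWh
print(round(sdr_clearing_price(30.0, 50.0), 3))
```

In the proposed frameworks, the MARL agents learn bidding and storage-management policies against such a cleared price, with physical network constraints incorporated so that voltage limits are respected.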

