Batch-Constrained Reinforcement Learning for Dynamic Distribution Network Reconfiguration (2006.12749v1)
Abstract: Dynamic distribution network reconfiguration (DNR) algorithms perform hourly status changes of remotely controllable switches to improve distribution system performance. The problem is typically solved by physical model-based control algorithms, which not only rely on accurate network parameters but also lack scalability. To address these limitations, this paper develops a data-driven batch-constrained reinforcement learning (RL) algorithm for the dynamic DNR problem. The proposed RL algorithm learns the network reconfiguration control policy from a finite historical operational dataset without interacting with the distribution network. Numerical study results on three distribution networks show that the proposed algorithm not only outperforms state-of-the-art RL algorithms but also improves upon the behavior policy that generated the historical operational data. The proposed algorithm also scales well and can find a desirable network reconfiguration solution in real time.
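The core idea of batch-constrained RL, as the abstract describes it, is to learn a control policy purely from a fixed dataset of past operations, restricting the learned policy to actions that the dataset actually supports. Below is a minimal tabular sketch of this idea (not the paper's deep-RL implementation): the Bellman target's max is taken only over actions observed in the batch for the successor state, so the learner never bootstraps from state-action pairs it has no data for. The toy state/action encoding is purely illustrative.

```python
from collections import defaultdict

def batch_constrained_q_learning(dataset, gamma=0.95, lr=0.1, epochs=200):
    """Tabular batch-constrained Q-learning from a fixed batch of
    (state, action, reward, next_state) transitions.

    The batch constraint: the bootstrapped max in the Bellman target
    ranges only over actions observed in the dataset for the successor
    state, avoiding value estimates for out-of-distribution actions.
    """
    Q = defaultdict(float)
    # Record which actions the batch supports in each state.
    seen = defaultdict(set)
    for s, a, _, _ in dataset:
        seen[s].add(a)

    for _ in range(epochs):
        for s, a, r, s2 in dataset:
            allowed = seen[s2]  # batch-supported actions in s2
            # States with no recorded actions are treated as terminal.
            bootstrap = max(Q[(s2, b)] for b in allowed) if allowed else 0.0
            target = r + gamma * bootstrap
            Q[(s, a)] += lr * (target - Q[(s, a)])
    return Q, seen

def greedy_policy(Q, seen, s):
    """Act greedily, but only among actions the batch supports in s."""
    return max(seen[s], key=lambda a: Q[(s, a)])

# Hypothetical toy batch: states are abstract configuration indices,
# actions are switch commands; values are made up for illustration.
dataset = [
    (0, "open",  0.0, 1),
    (1, "close", 1.0, 2),
    (0, "close", 0.2, 2),
]
Q, seen = batch_constrained_q_learning(dataset)
print(greedy_policy(Q, seen, 0))  # greedy batch-supported action in state 0
```

In a deep-RL setting such as the paper's, the `seen` set is replaced by a learned generative model of the behavior policy, but the principle is the same: the policy improvement step is constrained to the support of the historical data, which is what allows learning without interacting with the live distribution network.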