Go Beyond Black-box Policies: Rethinking the Design of Learning Agent for Interpretable and Verifiable HVAC Control (2403.00172v1)
Abstract: Recent research has shown the potential of Model-based Reinforcement Learning (MBRL) to enhance the energy efficiency of Heating, Ventilation, and Air Conditioning (HVAC) systems. However, existing methods rely on black-box thermal dynamics models and stochastic optimizers, which lack reliability guarantees and pose risks to occupant health. In this work, we overcome the reliability bottleneck by redesigning HVAC controllers using decision trees extracted from existing thermal dynamics models and historical data. Our decision-tree-based policies are deterministic, verifiable, interpretable, and more energy-efficient than current MBRL methods. First, we introduce a novel verification criterion for RL agents in HVAC control based on domain knowledge. Second, we develop a policy extraction procedure that produces a verifiable decision tree policy. We found that the high dimensionality of the thermal dynamics model's input hinders the efficiency of policy extraction. To tackle this dimensionality challenge, we leverage importance sampling conditioned on historical data distributions, significantly improving policy extraction efficiency. Lastly, we present an offline verification algorithm that guarantees the reliability of a control policy. Extensive experiments show that our method saves 68.4% more energy and increases human comfort gain by 14.8% compared to the state-of-the-art method, in addition to a 1127x reduction in computation overhead. Our code and data are available at https://github.com/ryeii/Veri_HVAC
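To make the pipeline the abstract outlines more concrete, the Python sketch below shows one plausible way the extraction and verification steps could fit together: a black-box MBRL planner is queried for actions on sampled states, the samples are reweighted toward the historical state distribution (approximated here with a kernel density estimate), a small regression tree is fit to imitate the planner, and every leaf of the tree is then checked offline against a comfort band. The `planner` callable, the KDE-based weighting, the tree depth, and the setpoint bounds are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch (not the authors' released code): distill a decision-tree
# HVAC policy from a black-box MBRL planner and check it offline.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.tree import DecisionTreeRegressor


def importance_weights(states, historical_states, bandwidth=0.5):
    """Weight sampled states by their density under historical operation data,
    so the tree is fit where the building actually spends its time."""
    kde = KernelDensity(bandwidth=bandwidth).fit(historical_states)
    log_p = kde.score_samples(states)      # log-density of each sampled state
    w = np.exp(log_p - log_p.max())        # shift for numerical stability
    return w / w.sum()


def extract_tree_policy(planner, states, historical_states, max_depth=6):
    """Query the planner for its action at each sampled state, then fit a
    small regression tree to imitate it under the importance weights."""
    actions = np.array([planner(s) for s in states])   # e.g. setpoints
    weights = importance_weights(states, historical_states)
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
    tree.fit(states, actions, sample_weight=weights)
    return tree


def leaf_setpoints_within(tree, low=20.0, high=26.5):
    """Offline check: every leaf predicts a setpoint inside the comfort band
    [low, high] (illustrative bounds, not the paper's)."""
    t = tree.tree_
    leaves = [i for i in range(t.node_count)
              if t.children_left[i] == t.children_right[i]]
    return all(low <= float(t.value[i].squeeze()) <= high for i in leaves)
```

Once such a tree passes the leaf check, `tree.predict(state.reshape(1, -1))` returns a deterministic, auditable setpoint without invoking a stochastic optimizer at control time, which is the kind of property the abstract's reliability and overhead claims rely on.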
Authors: Zhiyu An, Xianzhong Ding, Wan Du