A unified uncertainty-aware exploration: Combining epistemic and aleatory uncertainty (2401.02914v1)
Abstract: Exploration is a significant challenge in practical reinforcement learning (RL), and uncertainty-aware exploration that quantifies both epistemic and aleatory uncertainty has been recognized as an effective strategy. However, capturing the combined effect of aleatory and epistemic uncertainty for decision-making is difficult. Existing works estimate aleatory and epistemic uncertainty separately and treat the composite uncertainty as an additive combination of the two. This additive formulation, however, leads to excessive risk-taking behavior and instability. In this paper, we propose an algorithm that clarifies the theoretical connection between aleatory and epistemic uncertainty, unifies their estimation, and quantifies the combined effect of both uncertainties for risk-sensitive exploration. Our method builds on a novel extension of distributional RL that estimates a parameterized return distribution whose parameters are themselves random variables encoding epistemic uncertainty. Experimental results on tasks posing exploration and risk challenges show that our method outperforms alternative approaches.
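To make the distinction concrete, the minimal sketch below shows the separate-estimation baseline the abstract critiques: an ensemble of quantile critics from which aleatory uncertainty (spread of each predicted return distribution) and epistemic uncertainty (disagreement between ensemble members) are computed independently and then summed. All names, shapes, and the random stand-in predictions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical setup: an ensemble of K quantile critics, each predicting
# N quantiles of the return for one state-action pair. The Gaussian draws
# below stand in for real critic outputs.
K, N = 5, 32                     # ensemble size, number of quantiles
rng = np.random.default_rng(0)
quantiles = rng.normal(loc=1.0, scale=0.5, size=(K, N))

# Aleatory (return) uncertainty: spread of the predicted return
# distribution, averaged over ensemble members.
aleatory = quantiles.var(axis=1).mean()

# Epistemic (parametric) uncertainty: disagreement between ensemble
# members about the expected return.
epistemic = quantiles.mean(axis=1).var()

# Additive combination criticized in the abstract: total uncertainty
# taken as a weighted sum of the two terms (beta is a tuning weight).
beta = 1.0
additive = aleatory + beta * epistemic
print(f"aleatory={aleatory:.3f}  epistemic={epistemic:.3f}  "
      f"additive={additive:.3f}")
```

Under the unified view proposed in the paper, these two quantities would instead arise from a single parameterized return distribution whose parameters are random variables, rather than being estimated independently and combined additively.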