Accelerating the Learning of TAMER with Counterfactual Explanations (2108.01358v2)
Published 3 Aug 2021 in cs.AI and cs.LG
Abstract: The capability to learn interactively from human feedback would enable agents to operate in new settings. For example, even novice users could train service robots on new tasks naturally and interactively. Human-in-the-loop Reinforcement Learning (HRL) combines human feedback with Reinforcement Learning (RL) techniques. However, state-of-the-art interactive learning techniques suffer from slow learning speed, leading to a frustrating experience for the human trainer. We approach this problem by extending TAMER, an HRL framework based on evaluative feedback, with the possibility to enhance human feedback with two different types of counterfactual explanations (action-based and state-based). We show experimentally that our extensions improve the speed of learning.
- Jakob Karalus (2 papers)
- Felix Lindner (24 papers)
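The abstract builds on TAMER, in which the agent learns a model of the human's evaluative feedback and acts greedily with respect to it. The following is a minimal tabular sketch of that core idea; the class, the toy two-action setup, and all parameter values are illustrative assumptions, not the paper's actual implementation (which extends TAMER with counterfactual explanations):

```python
# Minimal tabular sketch of the TAMER idea: learn a model H(s, a)
# of the human's evaluative feedback and act greedily on it.
# All names and values here are illustrative assumptions.

class TamerAgent:
    def __init__(self, actions, alpha=0.5):
        self.actions = list(actions)
        self.alpha = alpha  # learning rate for the feedback model
        self.h = {}         # tabular estimate of H(s, a)

    def predict(self, state, action):
        return self.h.get((state, action), 0.0)

    def act(self, state):
        # Greedy with respect to the learned human-reward model;
        # ties are broken by action order.
        return max(self.actions, key=lambda a: self.predict(state, a))

    def update(self, state, action, feedback):
        # Move H(s, a) toward the human's evaluative signal.
        old = self.predict(state, action)
        self.h[(state, action)] = old + self.alpha * (feedback - old)

agent = TamerAgent(actions=["left", "right"])
# Human gives positive feedback for "right" in state 0, negative for "left".
agent.update(0, "right", +1.0)
agent.update(0, "left", -1.0)
print(agent.act(0))  # -> right
```

The paper's contribution is orthogonal to this loop: it augments the human's feedback channel with action- and state-based counterfactual explanations so the model above converges faster.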