
Distal Explanations for Model-free Explainable Reinforcement Learning (2001.10284v2)

Published 28 Jan 2020 in cs.AI, cs.HC, and cs.LG

Abstract: In this paper we introduce and evaluate a distal explanation model for model-free reinforcement learning agents that can generate explanations for 'why' and 'why not' questions. Our starting point is the observation that causal models can generate opportunity chains that take the form of 'A enables B and B causes C'. Using insights from an analysis of 240 explanations generated in a human-agent experiment, we define a distal explanation model that can analyse counterfactuals and opportunity chains using decision trees and causal models. A recurrent neural network is employed to learn opportunity chains, and decision trees are used to improve the accuracy of task prediction and the generated counterfactuals. We computationally evaluate the model in 6 reinforcement learning benchmarks using different reinforcement learning algorithms. From a study with 90 human participants, we show that our distal explanation model results in improved outcomes over three scenarios compared with two baseline explanation models.
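The abstract's opportunity chains have the form "A enables B and B causes C". A minimal sketch of that idea (hypothetical representation, not the authors' implementation) is a sequence of typed causal links from which a distal explanation string can be rendered:

```python
# Hypothetical sketch of an opportunity chain, not the paper's implementation:
# a chain is an ordered sequence of links, each tagged "enables" or "causes".
from dataclasses import dataclass

@dataclass
class Link:
    source: str    # earlier event or action
    relation: str  # "enables" or "causes"
    target: str    # later event or outcome

def explain(chain):
    """Render a distal 'why' explanation from an opportunity chain."""
    return ", and ".join(f"{l.source} {l.relation} {l.target}" for l in chain)

chain = [Link("A", "enables", "B"), Link("B", "causes", "C")]
print(explain(chain))  # A enables B, and B causes C
```

In the paper, such chains are learned by a recurrent neural network rather than hand-specified as here.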

Authors (4)
  1. Prashan Madumal (7 papers)
  2. Tim Miller (53 papers)
  3. Liz Sonenberg (16 papers)
  4. Frank Vetere (7 papers)
Citations (23)

