Hallucinating Value: A Pitfall of Dyna-style Planning with Imperfect Environment Models (2006.04363v1)

Published 8 Jun 2020 in cs.LG, cs.AI, and stat.ML

Abstract: Dyna-style reinforcement learning (RL) agents improve sample efficiency over model-free RL agents by updating the value function with simulated experience generated by an environment model. However, it is often difficult to learn accurate models of environment dynamics, and even small errors may result in failure of Dyna agents. In this paper, we investigate one type of model error: hallucinated states. These are states generated by the model that are not real states of the environment. We present the Hallucinated Value Hypothesis (HVH): updating values of real states towards values of hallucinated states results in misleading state-action values, which adversely affect the control policy. We discuss and evaluate four Dyna variants: three of which update real states toward simulated (and therefore potentially hallucinated) states, and one of which does not. The experimental results provide evidence for the HVH, thus suggesting a fruitful direction toward developing Dyna algorithms robust to model error.
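
To make the failure mode concrete, here is a minimal tabular Dyna-Q sketch (illustrative only, not the authors' implementation; all names and constants are hypothetical). The planning loop bootstraps values of real state-action pairs toward the value of whatever next state the learned model proposes; when that proposed state is hallucinated, its arbitrary value leaks into the values of real states, which is the effect the HVH describes.

```python
import random
from collections import defaultdict

# Illustrative tabular Dyna-Q. A learned one-step model proposes next
# states during planning; if the model is imperfect, some proposed
# states are "hallucinated" -- states the real environment can never
# produce. Step (3) below bootstraps real state-action values toward
# the values of those states.

ALPHA, GAMMA, N_PLANNING = 0.1, 0.95, 10

Q = defaultdict(float)   # Q[(state, action)] -> estimated value
model = {}               # model[(state, action)] -> (reward, next_state)

def q_update(s, a, r, s_next, actions):
    """One-step Q-learning backup, used for both real and simulated experience."""
    target = r + GAMMA * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def dyna_step(s, a, r, s_next, actions):
    # 1) Direct RL: learn from the real transition.
    q_update(s, a, r, s_next, actions)
    # 2) Model learning: record the model's prediction for (s, a).
    model[(s, a)] = (r, s_next)
    # 3) Planning: replay model-generated transitions. If ps_next is a
    #    hallucinated state, its (meaningless) value still propagates
    #    into the Q-values of the real state ps here.
    for _ in range(N_PLANNING):
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        q_update(ps, pa, pr, ps_next, actions)
```

In the abstract's terms, the fourth Dyna variant is the one that does not update real states toward simulated states, so it avoids the bootstrap in step (3) of this sketch; the exact mechanism is detailed in the paper.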

Authors (5)
  1. Taher Jafferjee (7 papers)
  2. Ehsan Imani (9 papers)
  3. Erin Talvitie (1 paper)
  4. Martha White (89 papers)
  5. Michael Bowling (1 paper)
Citations (27)