Optimizing Agent Behavior over Long Time Scales by Transporting Value (1810.06721v2)

Published 15 Oct 2018 in cs.AI and cs.LG

Abstract: Humans spend a remarkable fraction of waking life engaged in acts of "mental time travel". We dwell on our actions in the past and experience satisfaction or regret. More than merely autobiographical storytelling, we use these event recollections to change how we will act in similar scenarios in the future. This process endows us with a computationally important ability to link actions and consequences across long spans of time, which figures prominently in addressing the problem of long-term temporal credit assignment; in AI this is the question of how to evaluate the utility of the actions within a long-duration behavioral sequence leading to success or failure in a task. Existing approaches to shorter-term credit assignment in AI cannot solve tasks with long delays between actions and consequences. Here, we introduce a new paradigm for reinforcement learning where agents use recall of specific memories to credit actions from the past, allowing them to solve problems that are intractable for existing algorithms. This paradigm broadens the scope of problems that can be investigated in AI and offers a mechanistic account of behaviors that may inspire computational models in neuroscience, psychology, and behavioral economics.
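To make the idea of crediting distant past actions concrete, below is a minimal illustrative sketch in Python. It is not the paper's algorithm; the retrieval rule, the `transport_value` function, and the trajectory format are all hypothetical, chosen only to show how attention over stored memories could route value from a later moment of recall back to earlier timesteps.

```python
import numpy as np

def recall_weights(query, memories):
    """Softmax over cosine similarities between a query and stored memory keys
    (a hypothetical retrieval rule, standing in for learned memory attention)."""
    keys = np.stack([m["key"] for m in memories])
    sims = keys @ query / (np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8)
    w = np.exp(sims - sims.max())
    return w / w.sum()

def transport_value(trajectory, gamma=0.99, alpha=0.9):
    """Augment standard discounted returns by 'transporting' value from moments
    of recall back to the recalled timesteps, so that actions far in the past
    receive credit for later success. `trajectory` is a list of dicts with
    'key' (memory vector), 'reward', and 'value' entries -- an assumed format."""
    T = len(trajectory)
    returns = np.zeros(T)
    running = 0.0
    # Standard discounted return, computed with a backward pass.
    for t in reversed(range(T)):
        running = trajectory[t]["reward"] + gamma * running
        returns[t] = running
    # Value transport: at each step, attend over earlier memories and add a
    # bonus proportional to the current state value to the most-recalled steps.
    for t in range(1, T):
        w = recall_weights(trajectory[t]["key"], trajectory[:t])
        returns[:t] += alpha * w * trajectory[t]["value"]
    return returns

# Toy usage: a 10-step episode with a single terminal reward.
rng = np.random.default_rng(0)
traj = [{"key": rng.normal(size=4), "reward": float(t == 9), "value": 0.5}
        for t in range(10)]
print(transport_value(traj))
```

Under this sketch, early timesteps whose memory keys resemble later queries pick up extra return even when the discounted reward signal alone has decayed to near zero, which is the intuition behind linking actions and consequences across long delays.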

Authors (8)
  1. Chia-Chun Hung (5 papers)
  2. Timothy Lillicrap (60 papers)
  3. Josh Abramson (12 papers)
  4. Yan Wu (109 papers)
  5. Mehdi Mirza (18 papers)
  6. Federico Carnevale (7 papers)
  7. Arun Ahuja (24 papers)
  8. Greg Wayne (33 papers)
Citations (118)
