
Addressing Optimism Bias in Sequence Modeling for Reinforcement Learning (2207.10295v1)

Published 21 Jul 2022 in cs.LG, cs.AI, and cs.RO

Abstract: Impressive results in NLP based on the Transformer neural network architecture have inspired researchers to explore viewing offline reinforcement learning (RL) as a generic sequence modeling problem. Recent works based on this paradigm have achieved state-of-the-art results in several of the mostly deterministic offline Atari and D4RL benchmarks. However, because these methods jointly model the states and actions as a single sequencing problem, they struggle to disentangle the effects of the policy and world dynamics on the return. Thus, in adversarial or stochastic environments, these methods lead to overly optimistic behavior that can be dangerous in safety-critical systems like autonomous driving. In this work, we propose a method that addresses this optimism bias by explicitly disentangling the policy and world models, which allows us at test time to search for policies that are robust to multiple possible futures in the environment. We demonstrate our method's superior performance on a variety of autonomous driving tasks in simulation.
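
The abstract's central idea, decoupling a policy model from a world model so that test-time search can favor plans that hold up under several possible futures, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the interfaces `propose_actions` and `sample_future` and the toy dynamics are assumptions standing in for the learned policy and world models.

```python
# Minimal sketch of worst-case (pessimistic) test-time planning with a
# decoupled policy model and world model. All names and the toy dynamics
# below are illustrative assumptions, not the paper's actual code.

import random

def robust_plan(state, propose_actions, sample_future, n_proposals=4, n_futures=8):
    """Return the first action of the candidate plan whose worst-case
    return over several sampled futures is highest."""
    best_first_action, best_worst_case = None, float("-inf")
    for _ in range(n_proposals):
        actions = propose_actions(state)          # candidate plan from the policy model
        worst_case = min(
            sample_future(state, actions)         # return under one sampled future
            for _ in range(n_futures)
        )
        if worst_case > best_worst_case:          # keep the most robust proposal
            best_worst_case, best_first_action = worst_case, actions[0]
    return best_first_action

# Toy usage with dummy stand-ins for the learned models.
if __name__ == "__main__":
    propose = lambda s: [random.uniform(-1, 1) for _ in range(5)]
    rollout = lambda s, acts: sum(a - random.gauss(0, 0.5) for a in acts)  # noisy "world"
    print(robust_plan(state=0.0, propose_actions=propose, sample_future=rollout))
```

Selecting by the minimum return over sampled futures, rather than the expectation or maximum, is what counters the optimism bias the abstract describes: a plan only scores well if it performs acceptably in every sampled future, not just a favorable one.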

Authors (5)
  1. Adam Villaflor (7 papers)
  2. Zhe Huang (57 papers)
  3. Swapnil Pande (1 paper)
  4. John Dolan (14 papers)
  5. Jeff Schneider (99 papers)
Citations (20)
