
Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays (2402.03141v2)

Published 5 Feb 2024 in cs.LG, cs.AI, cs.SY, and eess.SY

Abstract: Reinforcement learning (RL) is challenging in the common case of delays between events and their sensory perceptions. State-of-the-art (SOTA) state augmentation techniques either suffer from state space explosion or performance degradation in stochastic environments. To address these challenges, we present a novel Auxiliary-Delayed Reinforcement Learning (AD-RL) method that leverages auxiliary tasks involving short delays to accelerate RL with long delays, without compromising performance in stochastic environments. Specifically, AD-RL learns a value function for short delays and uses bootstrapping and policy improvement techniques to adjust it for long delays. We theoretically show that this can greatly reduce the sample complexity. On deterministic and stochastic benchmarks, our method significantly outperforms SOTA methods in both sample efficiency and policy performance. Code is available at https://github.com/QingyuanWuNothing/AD-RL.
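The abstract only sketches the mechanism, so the following is a minimal, hypothetical illustration of the auxiliary-delayed bootstrapping idea in a tabular Q-learning setting. All names here (q_aux, q_long, AUX_DELAY, LONG_DELAY, the update functions) are invented for illustration and are not taken from the paper or its released code.

```python
# A minimal sketch of auxiliary-delayed bootstrapping, assuming a tabular
# Q-learning setting. Hypothetical names throughout; not the authors' code.
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.99
AUX_DELAY, LONG_DELAY = 1, 5  # auxiliary short delay vs. target long delay

def augment(obs, pending_actions):
    # Augmented state for a delayed MDP: the last observed state plus the
    # queue of actions taken but not yet reflected in observations.
    return (obs, tuple(pending_actions))

q_aux = defaultdict(float)   # value function learned under the short delay
q_long = defaultdict(float)  # value function for the long delay

def td_update_aux(s_aug, a, r, s_aug_next, actions):
    # Standard Q-learning on the short-delay augmented MDP.
    target = r + GAMMA * max(q_aux[(s_aug_next, b)] for b in actions)
    q_aux[(s_aug, a)] += ALPHA * (target - q_aux[(s_aug, a)])

def td_update_long(s_aug, a, r, aux_state_next, actions):
    # The key idea summarized in the abstract: bootstrap the long-delay
    # value from the auxiliary short-delay value function, which is easier
    # to learn, rather than from q_long itself.
    target = r + GAMMA * max(q_aux[(aux_state_next, b)] for b in actions)
    q_long[(s_aug, a)] += ALPHA * (target - q_long[(s_aug, a)])
```

The intuition behind this design, per the abstract's sample-complexity claim, is that the short-delay augmented state space is much smaller than the long-delay one, so q_aux converges faster and supplies more reliable bootstrap targets than q_long could supply for itself.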

Authors (9)
  1. Qingyuan Wu (15 papers)
  2. Simon Sinong Zhan (8 papers)
  3. Yixuan Wang (95 papers)
  4. Chung-Wei Lin (7 papers)
  5. Chen Lv (84 papers)
  6. Qi Zhu (160 papers)
  7. Chao Huang (244 papers)
  8. Yuhui Wang (43 papers)
  9. Jürgen Schmidhuber (124 papers)
Citations (1)
