
Continual Reinforcement Learning in 3D Non-stationary Environments (1905.10112v2)

Published 24 May 2019 in cs.LG, cs.CV, and stat.ML

Abstract: High-dimensional, always-changing environments constitute a hard challenge for current reinforcement learning techniques. Artificial agents today are often trained offline, in very static and controlled simulated conditions, such that training observations can be thought of as sampled i.i.d. from the entire observation space. In real-world settings, however, the environment is often non-stationary and subject to unpredictable, frequent changes. In this paper, we propose and openly release CRLMaze, a new benchmark for continual reinforcement learning in a complex 3D non-stationary task based on ViZDoom and subject to several environmental changes. We then introduce an end-to-end, model-free continual reinforcement learning strategy that achieves competitive results against four different baselines without requiring access to additional supervised signals, previously encountered environmental conditions, or past observations.
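The core difficulty the abstract describes is that a mid-training change in the environment invalidates the i.i.d. assumption, so a policy learned under the old conditions must be adapted rather than retrained from scratch. The following is a minimal, hypothetical sketch of that setting (a toy 1-D corridor with tabular Q-learning, not the paper's CRLMaze benchmark or the authors' strategy): the goal location is swapped mid-training, and the same Q-table is updated continually under the new conditions.

```python
import random

class NonStationaryCorridor:
    """Toy 1-D corridor whose goal end is swapped mid-training,
    loosely mimicking the environmental changes CRLMaze introduces.
    (Hypothetical illustration, not the paper's benchmark.)"""

    def __init__(self, length=5):
        self.length = length
        self.goal = length - 1          # goal starts at the right end
        self.reset()

    def reset(self):
        self.pos = self.length // 2     # every episode starts in the middle
        return self.pos

    def swap_goal(self):
        # Non-stationary change: the rewarding end of the corridor moves.
        self.goal = 0 if self.goal == self.length - 1 else self.length - 1

    def step(self, action):             # action: 0 = left, 1 = right
        self.pos = max(0, min(self.length - 1, self.pos + (1 if action else -1)))
        done = self.pos == self.goal
        return self.pos, (1.0 if done else 0.0), done

def q_learning(env, episodes, q=None, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular epsilon-greedy Q-learning; pass an existing table `q`
    to continue learning after an environmental change."""
    if q is None:
        q = {(s, a): 0.0 for s in range(env.length) for a in (0, 1)}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = random.randrange(2) if random.random() < eps \
                else max((0, 1), key=lambda act: q[(s, act)])
            s2, r, done = env.step(a)
            target = r + (0.0 if done else gamma * max(q[(s2, 0)], q[(s2, 1)]))
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

random.seed(0)
env = NonStationaryCorridor()
q = q_learning(env, 200)        # learn under the initial conditions
env.swap_goal()                 # environment changes: the old policy is now wrong
q = q_learning(env, 200, q=q)   # continual learning: adapt the same Q-table
```

After the swap, the terminal transition of every episode is entering the new goal at position 0 from position 1, so the table's value for that state-action pair is driven back up while the stale values from the first phase decay as they are revisited. This is the adaptation burden that a non-stationary benchmark imposes, which offline i.i.d. training never exercises.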

Authors (4)
  1. Vincenzo Lomonaco (58 papers)
  2. Karan Desai (9 papers)
  3. Eugenio Culurciello (20 papers)
  4. Davide Maltoni (33 papers)
Citations (37)
