
Pre-training of Deep RL Agents for Improved Learning under Domain Randomization (2104.14386v1)

Published 29 Apr 2021 in cs.LG, cs.AI, and cs.RO

Abstract: Visual domain randomization in simulated environments is a widely used method to transfer policies trained in simulation to real robots. However, domain randomization and augmentation hamper the training of a policy. As reinforcement learning struggles with a noisy training signal, this additional nuisance can drastically impede training. For difficult tasks it can even result in complete failure to learn. To overcome this problem we propose to pre-train a perception encoder that already provides an embedding invariant to the randomization. We demonstrate that this yields consistently improved results on a randomized version of DeepMind Control Suite tasks and a stacking environment on arbitrary backgrounds with zero-shot transfer to a physical robot.

Authors (5)
  1. Artemij Amiranashvili (8 papers)
  2. Max Argus (21 papers)
  3. Lukas Hermann (9 papers)
  4. Wolfram Burgard (149 papers)
  5. Thomas Brox (134 papers)
Citations (3)