One-Shot Reinforcement Learning for Robot Navigation with Interactive Replay (1711.10137v2)
Abstract: Recently, model-free reinforcement learning algorithms have been shown to solve challenging problems by learning from extensive interaction with the environment. A significant issue in transferring this success to the robotics domain is that interaction with the real world is costly, while training on limited experience is prone to overfitting. We present a method for learning to navigate to a fixed goal in a known environment on a mobile robot. The robot leverages an interactive world model built from a single traversal of the environment, a pre-trained visual feature encoder, and stochastic environmental augmentation to demonstrate successful zero-shot transfer under real-world environmental variations without fine-tuning.
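The core idea of the abstract can be illustrated with a minimal sketch. The class below is a hypothetical interactive replay environment (not the authors' code): states index into a single recorded traversal of encoded visual features, actions step along the recording, and each returned observation is perturbed with additive Gaussian noise as a stand-in for the paper's stochastic environmental augmentation.

```python
import numpy as np

class InteractiveReplayEnv:
    """Illustrative sketch: an interactive world model replayed from one
    recorded traversal. `features` is a (T, D) array of pre-encoded visual
    features; the agent navigates by stepping along the recording toward a
    fixed goal index, with noisy observations to reduce overfitting."""

    def __init__(self, features, goal_index, noise_std=0.1, seed=0):
        self.features = features
        self.goal = goal_index
        self.noise_std = noise_std
        self.rng = np.random.default_rng(seed)
        self.pos = 0

    def reset(self):
        # start each episode from a random position along the traversal
        self.pos = int(self.rng.integers(len(self.features)))
        return self._observe()

    def step(self, action):
        # action 0 = step backward, 1 = stay, 2 = step forward
        self.pos = int(np.clip(self.pos + (action - 1),
                               0, len(self.features) - 1))
        done = self.pos == self.goal
        reward = 1.0 if done else 0.0
        return self._observe(), reward, done

    def _observe(self):
        # stochastic environmental augmentation: perturb the replayed
        # observation so the policy cannot memorize exact frames
        noise = self.rng.normal(0.0, self.noise_std, self.features.shape[1])
        return self.features[self.pos] + noise
```

In this sketch the agent trains entirely inside the replayed model, so no further real-world interaction is needed beyond the single traversal; the noise level and action set are illustrative assumptions.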
- Jake Bruce
- Niko Suenderhauf
- Piotr Mirowski
- Raia Hadsell
- Michael Milford