Reinforcement learning for non-prehensile manipulation: Transfer from simulation to physical system (1803.10371v1)

Published 28 Mar 2018 in cs.RO, cs.LG, and cs.SY

Abstract: Reinforcement learning has emerged as a promising methodology for training robot controllers. However, most results have been limited to simulation due to the need for a large number of samples and the lack of automated-yet-safe data collection methods. Model-based reinforcement learning methods provide an avenue to circumvent these challenges, but the traditional concern has been the mismatch between the simulator and the real world. Here, we show that control policies learned in simulation can successfully transfer to a physical system, composed of three Phantom robots pushing an object to various desired target positions. We use a modified form of the natural policy gradient algorithm for learning, applied to a carefully identified simulation model. The resulting policies, trained entirely in simulation, work well on the physical system without additional training. In addition, we show that training with an ensemble of models makes the learned policies more robust to modeling errors, thus compensating for difficulties in system identification.
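The abstract's key robustness idea, training against an ensemble of simulation models rather than a single identified model, can be sketched on a toy system. Everything below is a hypothetical illustration, not the paper's setup: a linear feedback policy pushes a point mass toward a target under an uncertain friction coefficient, and the ensemble is a set of sampled friction values. A simple random-search update stands in for the paper's modified natural policy gradient.

```python
import numpy as np

# Illustrative sketch of ensemble-based training (hypothetical toy system,
# not the paper's Phantom-robot pushing task). Each candidate policy is
# evaluated on every model in the ensemble, so the policy that wins must
# work across the range of plausible dynamics, not just one identified model.

rng = np.random.default_rng(0)

ENSEMBLE = rng.uniform(0.2, 0.8, size=8)   # sampled friction coefficients

def rollout(policy_gain, friction, steps=30):
    """Simulate one episode; return negative final distance to target."""
    pos, vel, target = 0.0, 0.0, 1.0
    for _ in range(steps):
        force = policy_gain * (target - pos)        # linear feedback policy
        vel = vel + 0.1 * (force - friction * vel)  # uncertain damping term
        pos = pos + 0.1 * vel
    return -abs(target - pos)

def ensemble_return(policy_gain):
    """Average return across all models in the ensemble."""
    return float(np.mean([rollout(policy_gain, f) for f in ENSEMBLE]))

# Random-search policy improvement (a stand-in for the paper's NPG variant):
# perturb the current best gain and keep it only if the ensemble-average
# return improves.
best_gain, best_ret = 0.0, ensemble_return(0.0)
for _ in range(200):
    cand = best_gain + rng.normal(scale=0.5)
    ret = ensemble_return(cand)
    if ret > best_ret:
        best_gain, best_ret = cand, ret

print(f"gain={best_gain:.2f}, ensemble-average return={best_ret:.3f}")
```

The design choice mirrored here is that averaging returns over the ensemble penalizes policies that exploit quirks of any single model, which is the mechanism the abstract credits for compensating system-identification errors.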

Authors (5)
  1. Kendall Lowrey (9 papers)
  2. Svetoslav Kolev (1 paper)
  3. Jeremy Dao (14 papers)
  4. Aravind Rajeswaran (42 papers)
  5. Emanuel Todorov (11 papers)
Citations (58)