
Deep Reactive Planning in Dynamic Environments (2011.00155v2)

Published 31 Oct 2020 in cs.RO, cs.AI, and cs.LG

Abstract: The main novelty of the proposed approach is that it allows a robot to learn an end-to-end policy which can adapt to changes in the environment during execution. While goal conditioning of policies has been studied in the RL literature, such approaches are not easily extended to cases where the robot's goal can change during execution. This is something that humans are naturally able to do. However, it is difficult for robots to learn such reflexes (i.e., to naturally respond to dynamic environments), especially when the goal location is not explicitly provided to the robot, and instead needs to be perceived through a vision sensor. In the current work, we present a method that can achieve such behavior by combining traditional kinematic planning, deep learning, and deep reinforcement learning in a synergistic fashion to generalize to arbitrary environments. We demonstrate the proposed approach for several reaching and pick-and-place tasks in simulation, as well as on a real system of a 6-DoF industrial manipulator. A video describing our work can be found at https://youtu.be/hE-Ew59GRPQ.
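
The central mechanism the abstract describes, a goal-conditioned policy that re-perceives the goal through a vision sensor at every control step so it can react if the target moves mid-execution, can be illustrated with a minimal sketch. The `env` interface, the `goal_encoder` that maps camera images to a latent goal, and the MLP policy below are all illustrative assumptions for the sketch, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    """Toy policy: maps (proprioceptive state, latent goal) to an action.
    A stand-in for the paper's learned policy, not its real network."""
    def __init__(self, state_dim: int, goal_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, goal], dim=-1))

def reactive_control_loop(env, policy, goal_encoder, horizon=200):
    """Closed-loop execution: the goal is re-inferred from the camera at
    every step, so the policy adapts if the target moves during execution.
    `env` is a hypothetical interface returning dict observations."""
    obs = env.reset()
    for _ in range(horizon):
        image = obs["camera"]          # RGB frame from the vision sensor
        state = torch.as_tensor(obs["joint_state"], dtype=torch.float32)
        goal = goal_encoder(image)     # latent goal perceived from the image
        with torch.no_grad():
            action = policy(state, goal)
        obs, done = env.step(action.numpy())
        if done:
            break
```

The point of the sketch is the placement of `goal_encoder` inside the loop rather than before it: conditioning on a fixed goal computed once would not yield the reactive behavior the paper targets.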

Authors (8)
  1. Kei Ota (17 papers)
  2. Devesh K. Jha (46 papers)
  3. Tadashi Onishi (1 paper)
  4. Asako Kanezaki (25 papers)
  5. Yusuke Yoshiyasu (13 papers)
  6. Yoko Sasaki (10 papers)
  7. Toshisada Mariyama (4 papers)
  8. Daniel Nikovski (27 papers)
Citations (5)
