Learning Goal-Directed Object Pushing in Cluttered Scenes with Location-Based Attention (2403.17667v1)

Published 26 Mar 2024 in cs.RO

Abstract: Non-prehensile planar pushing is a challenging task due to its underactuated nature with hybrid-dynamics, where a robot needs to reason about an object's long-term behaviour and contact-switching, while being robust to contact uncertainty. The presence of clutter in the environment further complicates this task, introducing the need to include more sophisticated spatial analysis to avoid collisions. Building upon prior work on reinforcement learning (RL) with multimodal categorical exploration for planar pushing, in this paper we incorporate location-based attention to enable robust navigation through clutter. Unlike previous RL literature addressing this obstacle avoidance pushing task, our framework requires no predefined global paths and considers the target orientation of the manipulated object. Our results demonstrate that the learned policies successfully navigate through a wide range of complex obstacle configurations, including dynamic obstacles, with smooth motions, achieving the desired target object pose. We also validate the transferability of the learned policies to robotic hardware using the KUKA iiwa robot arm.
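
To make the high-level description above concrete, the sketch below shows one plausible way to combine Luong-style location-based attention over per-obstacle features with a discretised categorical action head of the kind used by the prior work on multimodal categorical exploration that the abstract builds on. All module names, feature sizes, observation layouts, and the number of action bins are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocationBasedAttention(nn.Module):
    """Hypothetical sketch: Luong-style ("general") attention over per-obstacle
    location features, conditioned on the robot/object state. Sizes and wiring
    are illustrative assumptions, not the paper's exact architecture."""

    def __init__(self, state_dim=10, obstacle_dim=4, hidden_dim=64, num_bins=11):
        super().__init__()
        self.state_enc = nn.Linear(state_dim, hidden_dim)        # query from robot/object state
        self.obst_enc = nn.Linear(obstacle_dim, hidden_dim)      # keys/values from obstacle features
        self.score = nn.Linear(hidden_dim, hidden_dim, bias=False)  # Luong "general" score
        # Categorical policy head: one discretised distribution per planar action dimension.
        self.policy = nn.Linear(2 * hidden_dim, 2 * num_bins)
        self.num_bins = num_bins

    def forward(self, state, obstacles):
        # state:     (B, state_dim)       pusher and object pose/velocity features
        # obstacles: (B, N, obstacle_dim) e.g. relative x, y, size, velocity per obstacle
        q = self.state_enc(state)                                        # (B, H)
        k = self.obst_enc(obstacles)                                     # (B, N, H)
        scores = torch.bmm(k, self.score(q).unsqueeze(-1)).squeeze(-1)   # (B, N)
        weights = F.softmax(scores, dim=-1)        # attention over obstacle locations
        context = torch.bmm(weights.unsqueeze(1), k).squeeze(1)          # (B, H)
        logits = self.policy(torch.cat([q, context], dim=-1))
        # One categorical distribution per action dimension (e.g. x and y push velocity bins).
        return logits.view(-1, 2, self.num_bins), weights

# Example rollout step (shapes are illustrative):
state = torch.randn(8, 10)         # batch of robot/object observations
obstacles = torch.randn(8, 6, 4)   # 6 surrounding obstacles per sample
logits, attn = LocationBasedAttention()(state, obstacles)
action = torch.distributions.Categorical(logits=logits).sample()  # (8, 2) bin indices
```

During PPO training, each action dimension would typically be sampled from its own categorical distribution, which supports multimodal exploration, while the attention weights let the policy focus on the obstacles most relevant to the current push direction.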

Authors (5)
  1. Nils Dengler (17 papers)
  2. Juan Del Aguila Ferrandis (3 papers)
  3. João Moura (16 papers)
  4. Sethu Vijayakumar (65 papers)
  5. Maren Bennewitz (58 papers)

