
SafePicking: Learning Safe Object Extraction via Object-Level Mapping

Published 11 Feb 2022 in cs.RO, cs.AI, cs.CV, and cs.LG (arXiv:2202.05832v2)

Abstract: Robots need object-level scene understanding to manipulate objects while reasoning about contact, support, and occlusion among objects. Given a pile of objects, object recognition and reconstruction can identify the boundary of object instances, giving important cues as to how the objects form and support the pile. In this work, we present a system, SafePicking, that integrates object-level mapping and learning-based motion planning to generate a motion that safely extracts occluded target objects from a pile. Planning is done by learning a deep Q-network that receives observations of predicted poses and a depth-based heightmap to output a motion trajectory, trained to maximize a safety metric reward. Our results show that the observation fusion of poses and depth-sensing gives both better performance and robustness to the model. We evaluate our methods using the YCB objects in both simulation and the real world, achieving safe object extraction from piles.

Citations (11)

Summary

  • The paper introduces SafePicking, a novel robotic system for safe object extraction from cluttered piles, combining object-level mapping with a learning-based motion planner.
  • Experimental results demonstrate SafePicking's superiority over baselines, significantly reducing unwanted interactions with non-target objects in simulation and real-world scenarios.
  • SafePicking's method has potential implications for enhancing automation safety and efficiency in various applications like logistics and warehousing.

SafePicking: A Novel Approach to Learning Safe Object Extraction via Object-Level Mapping

The paper introduces SafePicking, a system for robotic manipulation in cluttered environments, focused on safely extracting an occluded target object from a pile. By combining object-level mapping with learning-based motion planning, the authors address tasks where traditional planners struggle, such as those found in logistics and domestic settings.

Overview

SafePicking integrates two principal components: object-level mapping and learning-based motion planning. The object-level mapping utilizes volumetric reconstruction and pose estimation to create a detailed map of the objects in a scene. The system then uses this information to plan motions through a Deep Q-Network (DQN), which predicts safe trajectories based on observations, including predicted poses and depth data in the form of a heightmap.
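As a rough illustration of the observation fusion described above, the sketch below concatenates a predicted 6-DoF object pose with a flattened depth heightmap into a single network input. The function name, pose encoding (position plus quaternion), and heightmap resolution are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def build_observation(target_pose, heightmap):
    """Fuse a predicted object pose with a depth heightmap into one
    observation vector for a Q-network (hypothetical encoding)."""
    pose = np.asarray(target_pose, dtype=np.float32)        # (7,) x, y, z + quaternion
    hmap = np.asarray(heightmap, dtype=np.float32).ravel()  # (H*W,) flattened depth
    return np.concatenate([pose, hmap])

obs = build_observation(
    target_pose=[0.5, 0.0, 0.1, 0.0, 0.0, 0.0, 1.0],  # x, y, z, qx, qy, qz, qw
    heightmap=np.zeros((32, 32)),                     # empty 32x32 heightmap
)
print(obs.shape)  # (1031,)
```

In practice such a vector (or the raw heightmap image plus pose features) would feed the deep Q-network that scores candidate trajectory actions.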

Key Contributions

  1. Safe Object Extraction Task: The paper introduces "safe object extraction" as a distinct manipulation task. The focus is on minimizing the disruption to non-target objects during the extraction process.
  2. Fusion of Raw and Pose Observations: By combining pose information and depth-based observations, SafePicking achieves high extraction performance and robustness even when errors in pose estimation occur.
  3. Integrated System: The authors demonstrate a comprehensive robotic manipulation system capable of executing safe object extraction tasks in real-world scenarios and simulations, showcasing effectiveness with YCB objects.

Experimental Evaluation

SafePicking was evaluated against several baselines, including heuristic approaches and traditional collision-based planners like RRT-Connect. The results showed that SafePicking consistently outperformed these methods in both simulation and real-world tests, achieving reduced interaction with non-target objects and a decrease in undesired movements, such as sliding or falling.

Simulation and Real-World Tests

  • Simulation Results: The paper presents a detailed evaluation, showing significant improvements in safety metrics, including the sum of translations and velocities of non-target objects, indicating that SafePicking reduces disruptions during extraction.
  • Real-World Implementation: The system was tested on an actual robotic platform (Franka Emika Panda) with an onboard RGB-D camera (RealSense D435), demonstrating the system's capabilities in a practical environment. Heightmap-based metrics indicated that SafePicking maintains its robustness and operational efficiency outside simulation.
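The safety metric above (sum of translations and velocities of non-target objects) can be sketched as a penalty-style reward. The function signature and weights below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def safety_reward(prev_poses, curr_poses, velocities, w_trans=1.0, w_vel=0.1):
    """Penalize disturbance of non-target objects: the larger the total
    displacement and speed of surrounding objects, the lower the reward.
    (Hypothetical weighting; poses are (N, 3) positions.)"""
    translations = np.linalg.norm(curr_poses - prev_poses, axis=1)  # per-object shift
    speeds = np.linalg.norm(velocities, axis=1)                     # per-object speed
    return -(w_trans * translations.sum() + w_vel * speeds.sum())

prev = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
curr = np.array([[0.0, 0.0, 0.0], [0.1, 0.02, 0.0]])  # one object shifted 2 cm
vel = np.zeros((2, 3))
print(safety_reward(prev, curr, vel))  # -0.02
```

A perfectly safe extraction, in which no non-target object moves, would score zero; any sliding or falling drives the reward negative.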

Technological Implications

The incorporation of learning-based models with semantic scene understanding represents a significant advance in robotic manipulation. The demonstrated ability to perform safe object extraction implies potential in various applications, particularly where delicate or densely packed items are involved. This has implications for improving automation in fields such as warehousing, where handling efficiency and object safety are paramount.

Future Prospects

The paper suggests several avenues for future research, including extending to long-term manipulation tasks that involve integrated grasping and placement, expanding the scope to handle a wider variety of objects beyond rigid bodies, and further exploiting semantic scene understanding within learning-based systems. Such developments could deepen the integration of robotic systems into new domains, increasing their versatility and operational safety.

In summary, SafePicking lays the groundwork for more sophisticated robotic systems capable of nuanced manipulation tasks, emphasizing both efficiency and the safety of objects handled. By bridging the gap between object recognition and motion planning, SafePicking represents a meaningful step toward more generalized and effective robotic automation solutions.
