Real-Time Grasping Strategies Using Event Camera (2107.07200v1)

Published 15 Jul 2021 in cs.RO

Abstract: Robotic vision plays a key role in perceiving the environment for grasping applications. However, conventional frame-based robotic vision, suffering from motion blur and low sampling rates, may not meet the automation needs of evolving industrial requirements. This paper, for the first time, proposes an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Compared with standard frame-based vision, neuromorphic vision offers a microsecond-level sampling rate and no motion blur. Building on that, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. In the model-based approach, an event-based multi-view method is used to localize the objects in the scene, and point cloud processing then allows for clustering and registering the objects. In contrast, the proposed model-free approach uses event-based object segmentation, visual servoing, and grasp planning to localize, align to, and grasp the target object. The proposed approaches are experimentally validated with objects of different sizes, using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper. Moreover, the robustness of the two proposed event-based grasping approaches is validated in a low-light environment. This low-light operating ability shows a great advantage over grasping with standard frame-based vision. Furthermore, the developed model-free approach demonstrates the advantage of dealing with unknown objects without prior knowledge, compared to the proposed model-based approach.

Citations (25)

Summary

  • The paper introduces two novel robotic grasping frameworks that use event cameras to mitigate motion blur and sampling rate limitations.
  • It details a model-based approach with EMVS and ICP-SVD for known objects, alongside a model-free method with MEMS segmentation and velocity-based servoing for unknown targets.
  • Experimental results validate improved grasping accuracy in dynamic and low-light environments, underscoring the potential of neuromorphic vision in robotics.

Real-Time Grasping Strategies Using Event Camera

This paper introduces a novel framework for robotic grasping utilizing event cameras, which presents a shift from conventional frame-based vision to neuromorphic event-driven sensing. Traditional frame-based robotic vision systems have inherent limitations such as motion blur and low sampling rates, which restrict their effectiveness in dynamic and low-light environments. Event cameras, on the other hand, offer microsecond-level sampling and are robust against motion blur, providing a promising alternative for real-time robotic applications.
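To make the contrast with frame-based sensing concrete, the sketch below shows one minimal way an asynchronous event stream might be handled in software, assuming the common (x, y, timestamp, polarity) event tuple and a DAVIS346-sized sensor; it is an illustrative example, not code from the paper.

```python
import numpy as np

# Illustrative sketch (not from the paper): an event camera emits an
# asynchronous stream of (x, y, timestamp, polarity) tuples rather than
# full frames. Accumulating a short time slice of events into a signed
# image is one simple way to inspect the stream.
WIDTH, HEIGHT = 346, 260  # assumed DAVIS346 resolution

def accumulate_events(events, t_start, t_end):
    """Sum signed polarities of events whose timestamps fall in [t_start, t_end)."""
    img = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            img[y, x] += 1 if polarity else -1
    return img

# Example: three hypothetical events inside a 1 ms window.
events = [(10, 20, 0.0001, 1), (11, 20, 0.0004, 1), (10, 21, 0.0007, 0)]
frame = accumulate_events(events, 0.0, 0.001)
```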

Framework Overview

The paper proposes two primary approaches to grasping: a model-based approach (MBA) and a model-free approach (MFA), both utilizing neuromorphic vision sensors. The MBA leverages prior knowledge of the objects' models, making it suitable for scenarios involving known objects. It incorporates event-based multi-view stereo (EMVS) for localization, point cloud down-sampling, object clustering based on Euclidean distance, and model registration via the iterative closest point (ICP) algorithm with singular value decomposition (SVD) for pose estimation.
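The sketch below illustrates how such a down-sample, cluster, and register pipeline could be assembled, using Open3D as a stand-in library; the voxel size, the DBSCAN clustering (in place of the paper's Euclidean-distance clustering), and all parameter values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import open3d as o3d

# Hedged sketch of a model-based registration step: down-sample the
# reconstructed scene cloud, cluster it into object candidates, then align a
# known object model with point-to-point ICP (whose per-iteration alignment
# is solved via SVD). Parameter values are illustrative only.

def register_known_object(scene_cloud: o3d.geometry.PointCloud,
                          model_cloud: o3d.geometry.PointCloud,
                          voxel_size: float = 0.005) -> np.ndarray:
    # 1. Down-sample the point cloud produced by EMVS-style reconstruction.
    scene_down = scene_cloud.voxel_down_sample(voxel_size)

    # 2. Cluster points into object candidates (DBSCAN stands in here for
    #    Euclidean-distance clustering) and keep the largest cluster.
    labels = np.asarray(scene_down.cluster_dbscan(eps=0.02, min_points=20))
    largest = np.argmax(np.bincount(labels[labels >= 0]))
    cluster = scene_down.select_by_index(np.where(labels == largest)[0].tolist())

    # 3. Register the known model against the cluster; the resulting 4x4
    #    transform gives the object pose used for grasp planning.
    result = o3d.pipelines.registration.registration_icp(
        model_cloud, cluster,
        max_correspondence_distance=0.02,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```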

Conversely, the MFA is designed for unknown or unmodeled objects and requires no pre-existing object models. It performs event-based segmentation with a multi-object event-based mean shift (MEMS) algorithm, which clusters events jointly in space and time, and then uses velocity-based visual servoing with depth information to guide the manipulator into the correct grasping position.
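As a rough illustration of the servoing step, the sketch below drives the segmented cluster's centroid toward the image centre with a simple proportional law scaled by depth; the camera intrinsics, gain, and control law are hypothetical assumptions, not the paper's controller.

```python
import numpy as np

# Hedged sketch of velocity-based visual servoing for an eye-in-hand camera.
# The pixel error of the segmented object's centroid is back-projected at the
# measured depth and turned into a Cartesian velocity command. All intrinsics,
# gains, and the proportional law are illustrative assumptions.

def servo_velocity(centroid_px, depth_m, fx=320.0, fy=320.0,
                   cx=173.0, cy=130.0, gain=0.5, approach_speed=0.05):
    """Return a (vx, vy, vz) camera-frame velocity command in m/s."""
    u, v = centroid_px
    # Lateral offset of the object, back-projected at depth_m (pinhole model).
    ex = (u - cx) * depth_m / fx
    ey = (v - cy) * depth_m / fy
    # Move the camera toward the object's lateral offset so the centroid
    # converges to the principal point, while approaching along the optical axis.
    return np.array([gain * ex, gain * ey, approach_speed])

# Example: cluster centroid at pixel (200, 140), object 0.4 m from the camera.
cmd = servo_velocity((200.0, 140.0), 0.4)
```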

Experimental Validation

The authors have conducted extensive experiments to validate both approaches on tasks involving grasping objects of varying sizes and shapes. The experimental setup includes a UR10 robotic arm equipped with a Barrett hand gripper and a DAVIS346 event camera in an eye-in-hand configuration. Performance is assessed in both normal and low-light conditions, highlighting the robustness of event-based vision.

The MBA offers higher precision thanks to the availability of pre-existing object models, proving effective for industrial applications where object models are predefined. However, this approach is limited by its reliance on prior knowledge. In contrast, the MFA demonstrates flexibility in grasping unknown objects and remains effective even in scenarios with imperfect perception or dynamically changing environments. Its ability to adapt to unknown geometries makes it suitable for diverse industrial applications where prior object models are unavailable.

Implications and Future Prospects

Both approaches show promise in enhancing the adaptability and robustness of robotic grasping systems. Event cameras can effectively address the challenges posed by dynamic and low-light environments common in industrial settings. The integration of high-sensitivity, asynchronous sensing with advanced algorithms for visual processing could pave the way for more autonomous and real-time capable robotic systems.

Future research could explore hybrid approaches that combine the strengths of both model-based and model-free methodologies, incorporating machine learning techniques to enhance adaptability further. There is also potential for cross-domain applications, beyond industrial settings, where rapid and reliable object handling is critical. As AI and machine learning models develop, the integration of neuromorphic sensing with intelligent systems could transform the landscape of robotic manipulation, making it more dynamic and resilient to environmental changes.
