An Examination of Grasp EveryThing (GET): A 1-DoF, Three-Fingered Gripper with Tactile Sensing
The paper introduces the Grasp EveryThing (GET) gripper, a 1-DoF, three-fingered design aimed at improving grasping across varied object geometries. The authors note the limitations of traditional parallel-jaw grippers with flat fingers, whose minimal contact patches make grasps of complexly shaped objects unstable and insecure. The GET gripper addresses these limitations with a three-finger configuration: two fingers that meet in a V-formation opposite a single third finger, improving conformity to object geometry and resistance to torque.
Tapered along their length and fitted with silicone fingerpads, the GET fingers adapt to diverse object forms and grip them securely. They extend the versatility of 1-DoF grippers without sacrificing simplicity or ease of integration with existing robotic systems, combining design innovation with engineering practicality.
Beyond mechanical design, the GET gripper integrates tactile sensing into its single opposing finger, using an HDR camera to capture tactile images. The research explores estimating normal force from these images with a neural network, reporting an average validation error of 1.3 N across varied geometries. In comparisons with standard two-fingered grippers with flat fingers, the GET fingers performed better on grasping tasks, notably in teleoperation within the ALOHA system.
Key Numerical Results and Claims:
- Force Estimation: The neural network estimates normal force with an average validation error of 1.3 N for various object geometries, demonstrating effective tactile sensing integration.
- Grasp Performance: In trials involving 15 objects, the GET gripper consistently outperformed traditional flat fingers, particularly with tools requiring dynamic manipulation.
- Task Efficiency: GET fingers achieved faster task completion times than baseline fingers, notably in dexterous tasks performed via teleoperation.
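The force-estimation result above can be illustrated with a toy sketch of the regression setup: map a tactile image to a scalar normal force and evaluate mean absolute error in newtons. Everything below is a hypothetical stand-in, not the paper's method: a tiny NumPy MLP trained on synthetic data where patch brightness substitutes for contact intensity, whereas the reported 1.3 N figure comes from a real network trained on real tactile images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: flatten a low-res "tactile image" into a
# feature vector; brighter patches correspond to higher normal force.
def make_batch(n, img_side=8):
    imgs = rng.random((n, img_side * img_side))
    forces = 20.0 * imgs.mean(axis=1, keepdims=True)  # 0-20 N range
    return imgs, forces

def init(d_in, d_hidden=32):
    return {
        "W1": rng.normal(0, 0.1, (d_in, d_hidden)),
        "b1": np.zeros(d_hidden),
        "W2": rng.normal(0, 0.1, (d_hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.maximum(0.0, x @ p["W1"] + p["b1"])  # ReLU hidden layer
    return h, h @ p["W2"] + p["b2"]             # scalar force output

# One step of plain gradient descent on mean-squared error.
def train_step(p, x, y, lr=0.05):
    h, pred = forward(p, x)
    err = pred - y
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ p["W2"].T) * (h > 0)   # backprop through ReLU
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    p["W1"] -= lr * gW1; p["b1"] -= lr * gb1
    p["W2"] -= lr * gW2; p["b2"] -= lr * gb2

x_train, y_train = make_batch(512)
x_val, y_val = make_batch(128)
params = init(x_train.shape[1])
for _ in range(1000):
    train_step(params, x_train, y_train)

# Report validation error in the same units as the paper (newtons).
_, val_pred = forward(params, x_val)
mae = np.abs(val_pred - y_val).mean()
print(f"validation MAE: {mae:.2f} N")
```

The point of the sketch is the evaluation convention: validation error is reported in newtons as a mean absolute deviation between predicted and ground-truth normal force, which is how a figure like "1.3 N average validation error" is interpreted.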
Implications and Future Directions:
The GET gripper extends practical applications in robotics, specifically grasping diverse household items and performing dynamic manipulation tasks. The integration of tactile sensing adds value by capturing rich contact information that can be leveraged for learning-based control policies. Conceptually, the design paves the way for more robust, adaptable grippers while maintaining operational simplicity. Future work could enhance the tactile sensing by reconstructing 3D depth from captured tactile images, potentially with neural-network-driven approaches. Moreover, the flexibility of the GET design suggests scalability: further studies might explore variable finger configurations suited to bimanual manipulation setups.
The GET gripper is poised to enhance robotic manipulation capabilities, with the prospect of integration into broader AI-driven frameworks in which tactile data complements vision data to yield improved object-interaction strategies.