Shape-independent Hardness Estimation Using Deep Learning and a GelSight Tactile Sensor (1704.03955v1)

Published 13 Apr 2017 in cs.RO

Abstract: Hardness is among the most important attributes of an object that humans learn about through touch. However, approaches for robots to estimate hardness are limited, due to the lack of information provided by current tactile sensors. In this work, we address these limitations by introducing a novel method for hardness estimation based on the GelSight tactile sensor; the method does not require accurate control of contact conditions or knowledge of object shape. The GelSight sensor has a soft contact interface and provides high-resolution tactile images of contact geometry, as well as contact force and slip conditions. In this paper, we use the sensor to measure the hardness of objects with multiple shapes under loosely controlled contact conditions: the contact is made manually or by a robot hand, while the force and trajectory are unknown and uneven. We analyze the data using a deep convolutional (and recurrent) neural network. Experiments show that the model can estimate the hardness of objects of different shapes, with hardness ranging from 8 to 87 on the Shore 00 scale.

Citations (158)

Summary

Shape-Independent Hardness Estimation Using Deep Learning and a GelSight Tactile Sensor

The paper presents an approach to hardness estimation that combines the GelSight tactile sensor with deep learning. Traditional robotic methods for hardness measurement often depend on strict control of contact conditions or object shapes, which limits their practical applicability. This paper addresses these limitations by employing an image-based tactile sensing mechanism.

The GelSight sensor uses a soft contact interface to capture high-resolution tactile images that encode both the contact geometry and the associated forces. These data form the input to a deep learning model designed to estimate hardness regardless of an object's shape. The authors combine convolutional and recurrent neural networks (CNNs and RNNs) to interpret sequences of GelSight tactile images, predicting material hardness over a range of 8 to 87 on the Shore 00 scale.

Methodology

The approach uses a GelSight sensor to press on objects and capture a sequence of tactile images showing deformation and force distribution over time. By training a neural network on these image sequences, the researchers predict hardness without requiring knowledge of the object's geometry. The architecture pairs a CNN, which extracts spatial features from each image, with a long short-term memory (LSTM) network, which models the temporal dynamics of the pressing sequence.
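A minimal sketch of such a CNN-LSTM pipeline is given below. The framework (PyTorch), layer sizes, and sequence length are illustrative assumptions, not the exact architecture reported in the paper:

```python
import torch
import torch.nn as nn

class HardnessNet(nn.Module):
    """Illustrative CNN-LSTM for hardness regression from a tactile image sequence.
    Layer sizes are placeholders, not the architecture reported in the paper."""
    def __init__(self, hidden_size=128):
        super().__init__()
        # CNN: per-frame spatial feature extractor, shared across all time steps
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),  # -> 32 * 4 * 4 = 512 features per frame
        )
        # LSTM: temporal model over the sequence of per-frame features
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        # Regression head: a single scalar hardness estimate (Shore 00)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, frames):  # frames: (batch, time, 3, H, W)
        b, t, c, h, w = frames.shape
        # Fold time into the batch so the CNN processes every frame identically
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])  # predict from the final time step

# Example: a batch of 2 press sequences, 5 frames each, 64x64 tactile images
pred = HardnessNet()(torch.randn(2, 5, 3, 64, 64))
print(pred.shape)  # torch.Size([2, 1])
```

The key design point is weight sharing: the same CNN is applied to every frame, and the LSTM aggregates the resulting feature sequence into a single hardness estimate, which is what makes the method tolerant of uneven pressing forces and trajectories.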

Results and Performance

The experiments focus primarily on silicone samples with distinct shapes and hardness levels. Strong predictive performance was achieved on basic geometric shapes when the hardness varied but the shape was known (R² > 0.95, RMSE = 5.18). The model also generalizes to shapes unseen during training, albeit with reduced accuracy (R² = 0.7868, RMSE = 11.05), highlighting the challenges posed by complex geometries. Moreover, the model remained effective on data collected with a robotic gripper, a scenario mimicking potential real-world applications, achieving solid performance (R² = 0.8524, RMSE = 10.28).
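For reference, the two reported metrics are the standard coefficient of determination (R²) and root-mean-square error (RMSE). With ground-truth hardness values $y_i$, predictions $\hat{y}_i$, and mean ground truth $\bar{y}$ over $n$ samples:

```latex
R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2},
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}
```

Both are computed on the Shore 00 scale, so an RMSE of 5.18 means the predictions deviate from ground truth by roughly five Shore 00 units on average.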

The paper also explores natural objects such as tomatoes and candies, indicating that the sensor could aid in evaluating ripeness through tactile hardness. Nevertheless, the authors note that estimation errors occur for objects with intricate surface textures, since no similar data were present in the training set.

Implications and Future Work

The research carries notable theoretical implications, advancing tactile sensing toward more flexible and generalized applications. Practically, the method could substantially improve robotic interaction with complex and soft materials across industries from agriculture to manufacturing, where understanding material properties is crucial.

Looking ahead, expanding the dataset to incorporate broader variations in object shapes and surface textures is necessary to enhance model generalization. Further investigation into multi-modal sensory integration, combining tactile data with other sensory inputs, could propel the field toward even more robust and comprehensive material property inference techniques.

This paper's contribution lies in demonstrating a viable path for deep learning to overcome the limitations of traditional tactile sensing, marking a significant step forward in robotic perception and interaction.
