Utility of self-supervised tactile representations for downstream palpation tasks

Determine whether an encoder–decoder representation learned from sequences of tactile measurements on soft bodies can support downstream tasks such as tactile imaging and change detection, and whether, given sufficiently large training data, the representation captures intricate patterns in the tactile measurements that go beyond simple force maps.

Background

The paper proposes learning a tactile representation for soft-body palpation using an encoder–decoder framework trained on sequences of tactile measurements. The authors suggest that if the representation encodes sufficient information about the palpated object, it could support downstream clinical tasks such as imaging and change detection.
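The paper's encoder–decoder is learned from data; as a minimal, hypothetical stand-in for the idea of compressing tactile sequences into a reusable representation, the sketch below fits a linear encoder–decoder on synthetic "tactile sequences" (all shapes, the synthetic data, and the closed-form PCA shortcut are illustrative assumptions, not the paper's architecture or training procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: T frames per palpation sequence, D taxels per frame,
# K latent dimensions, N synthetic sequences (all hypothetical).
T, D, K, N = 8, 16, 4, 200

# Synthetic "tactile sequences": low-rank structure plus sensor noise,
# standing in for force readings collected while palpating a soft body.
basis = rng.normal(size=(K, T * D))
X = rng.normal(size=(N, K)) @ basis + 0.05 * rng.normal(size=(N, T * D))

# For a purely linear encoder-decoder trained with MSE, the optimum is
# spanned by the top-K principal components of the data, so it can be
# computed in closed form with an SVD instead of gradient descent.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:K]                        # (K, T*D): shared encoder/decoder weights

Z = (X - mu) @ P.T                # latent representation of each sequence
X_hat = Z @ P + mu                # reconstruction from the latent code
err = float(np.mean((X - X_hat) ** 2))
print(f"reconstruction MSE: {err:.4f}")  # small: the latent retains the structure
```

A downstream probe such as change detection could then operate on `Z` (e.g. distances between latent codes of successive palpations) rather than on raw force frames; the paper's conjecture is that a learned, nonlinear version of such a representation encodes richer structure than this linear toy.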

The paper presents proof-of-concept simulations and real phantom experiments that indicate promise, but the broader claim, that such self-supervised representations generally enable these downstream tasks and capture patterns beyond force maps, is posed in the abstract as a conjecture, emphasizing that it remains open.

References

We conjecture that such a representation can be used for downstream tasks such as tactile imaging and change detection. With enough training data, it should capture intricate patterns in the tactile measurements that go beyond a simple map of forces -- the current state of the art.

Toward Artificial Palpation: Representation Learning of Touch on Soft Bodies (2511.16596 - Rimon et al., 20 Nov 2025) in Abstract