Semi-autonomous Prosthesis Control Using Minimal Depth Information and Vibrotactile Feedback

Published 2 Oct 2022 in cs.CV and cs.RO | arXiv:2210.00541v1

Abstract: Semi-autonomous prosthesis control based on computer vision can improve performance while decreasing the cognitive burden on the user, especially in advanced systems with multiple functions. A drawback of this approach, however, is that it relies on complex processing of a significant amount of data (e.g., a point cloud provided by a depth sensor), which is challenging to deploy on an embedded prosthesis controller. In the present study, we therefore propose a novel method that reconstructs the shape of the target object from minimal data. Specifically, four concurrent laser scanner lines provide partial contours of the object cross-section, and simple geometry is then used to reconstruct the dimensions and orientation of spherical, cylindrical, and cuboid objects. The prototype system was implemented using a depth sensor to simulate the scan lines, with vibrotactile feedback to aid the user in aiming the lasers at the target object. The prototype was tested on ten able-bodied volunteers who used the semi-autonomous prosthesis to grasp a set of ten objects of different shapes, sizes, and orientations. The novel prototype was compared against a benchmark system that used the full depth data. The results showed that the novel system could successfully handle all the objects and that its performance improved with training, although it remained somewhat worse than that of the benchmark. The present study is therefore an important step towards a compact, embedded depth-sensing system specialized for prosthesis grasping.
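
The abstract does not spell out the per-line geometry, so the following is only a minimal sketch of one plausible ingredient: it assumes the partial contour that a single scan line traces on a sphere or cylinder is fitted with an algebraic least-squares (Kasa) circle fit to recover the cross-section's centre and radius. The function name fit_circle and the demo data are illustrative assumptions, not taken from the paper.

```python
# Sketch (not the paper's exact pipeline): recover the centre and radius of a
# circular cross-section from the partial contour points one laser scan line
# would produce on a sphere or cylinder, via an algebraic least-squares fit.
import numpy as np

def fit_circle(points: np.ndarray) -> tuple[np.ndarray, float]:
    """Fit a circle to 2D points (shape N x 2); return (centre, radius).

    Uses the linearisation x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2),
    which turns circle fitting into one ordinary least-squares solve.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(k + cx**2 + cy**2)
    return np.array([cx, cy]), radius

# Demo: a noisy ~80-degree arc of a 40 mm radius cross-section centred at
# (30, 10), mimicking the one-sided partial contour a laser line observes.
rng = np.random.default_rng(0)
theta = np.linspace(0.2, np.pi / 2, 25)
arc = np.column_stack([30 + 40 * np.cos(theta), 10 + 40 * np.sin(theta)])
centre, r = fit_circle(arc + rng.normal(0.0, 0.3, arc.shape))
print(f"centre ~ {centre.round(1)}, radius ~ {r:.1f} mm")
```

Because the fit is linear in the unknowns, it recovers the centre and radius even from a partial arc, which matters here since a laser line sweeping one side of an object never sees the full cross-section.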
