SANIP: Shopping Assistant and Navigation for the visually impaired

Published 8 Sep 2022 in cs.CV (arXiv:2209.03570v1)

Abstract: The proposed shopping assistant model SANIP helps blind persons detect hand-held objects and receive voice feedback conveying the information retrieved from the detected and recognized objects. The model consists of three Python modules: custom object detection, text detection, and barcode detection. For detecting hand-held objects, we created a custom dataset comprising daily goods such as Parle-G, Tide, and Lays. We also collected images of cart and exit signs, since it is essential for any shopper to use a cart and to notice the exit sign in case of emergency. For the other two modules, the retrieved text and barcode information is converted from text to speech and relayed to the blind person. The model successfully detected and recognized the objects it was trained on with good accuracy and precision.
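As a concrete illustration of the pipeline the abstract describes (object detection, text detection, barcode detection, and text-to-speech output), the following minimal Python sketch wires together off-the-shelf components. It is a hypothetical sketch, not the authors' implementation: OpenCV, pytesseract, pyzbar, and pyttsx3 are assumed stand-ins for the paper's modules, and the custom object detector is left as a placeholder comment.

# Hypothetical sketch of a SANIP-style pipeline; not the authors' code.
# Assumes OpenCV, pytesseract, pyzbar, and pyttsx3 are installed.
import cv2
import pytesseract
import pyttsx3
from pyzbar.pyzbar import decode as decode_barcodes


def speak(message):
    # Relay information to the user as speech (text to speech).
    engine = pyttsx3.init()
    engine.say(message)
    engine.runAndWait()


def main():
    cap = cv2.VideoCapture(0)  # camera pointed at the hand-held object
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return

    # Stage 1: custom object detection (e.g. a detector trained on Parle-G,
    # Tide, Lays, cart and exit signs) would run on `frame` here; omitted.

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Stage 2: barcode detection and decoding.
    for barcode in decode_barcodes(gray):
        speak("Barcode detected: " + barcode.data.decode("utf-8"))

    # Stage 3: text detection via OCR, relayed through text to speech.
    text = pytesseract.image_to_string(gray).strip()
    if text:
        speak("Label text: " + text)


if __name__ == "__main__":
    main()

In the paper's setup, the custom detector would be a model trained on the Parle-G, Tide, Lays, cart, and exit-sign dataset, while the OCR and barcode stages pass their output directly to the speech engine, mirroring the text-to-speech relay described in the abstract.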
