Accurate Localization in Dense Urban Area Using Google Street View Image (1412.8496v1)

Published 29 Dec 2014 in cs.CV

Abstract: Accurate information about the location and orientation of a camera in mobile devices is central to the utilization of location-based services (LBS). Most such mobile devices rely on GPS data, but this data is subject to inaccuracy due to imperfections in the quality of the signal provided by satellites. This shortcoming has spurred research into improving the accuracy of localization. Since mobile devices have cameras, a major thrust of this research seeks to acquire the local scene and apply image retrieval techniques, querying a GPS-tagged image database to find the best match for the acquired scene. These techniques are, however, computationally demanding and unsuitable for real-time applications such as assistive technology for navigation by the blind and visually impaired, which motivated our work. To overcome the high complexity of those techniques, we investigated the use of inertial sensors as an aid to the image-retrieval-based approach. Armed with information from media other than images, such as data from the GPS module along with orientation sensors such as the accelerometer and gyro, we sought to limit the size of the image set to search for the best match. Specifically, data from the orientation sensors along with the dilution of precision (DOP) from GPS are used to find the angle of view and an estimate of position. We present an analysis of the reduction in the image set size for the search, as well as simulations demonstrating the effectiveness of a fast implementation with 98% Estimated Position Error.
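
The abstract describes pruning a GPS-tagged image database using a DOP-derived position uncertainty and the camera heading from orientation sensors before running image retrieval. The sketch below is a minimal illustration of that pruning idea, not the paper's implementation; the function names, the `ranging_error_m` and `fov_deg` parameters, and the database record fields are assumptions chosen for clarity.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def prune_candidates(db, est_lat, est_lon, hdop, heading_deg,
                     ranging_error_m=5.0, fov_deg=60.0):
    """Keep only database images whose tagged position lies inside the
    DOP-scaled uncertainty circle around the GPS fix and whose tagged
    view direction falls within the camera's angle of view around the
    heading reported by the orientation sensors.

    db: iterable of dicts with keys "lat", "lon", "bearing_deg"
        (hypothetical schema for the GPS-tagged image database).
    """
    radius_m = hdop * ranging_error_m   # horizontal position uncertainty bound
    half_fov = fov_deg / 2.0
    kept = []
    for img in db:
        if haversine_m(est_lat, est_lon, img["lat"], img["lon"]) > radius_m:
            continue
        # Smallest absolute angular difference between the two headings.
        diff = abs((img["bearing_deg"] - heading_deg + 180.0) % 360.0 - 180.0)
        if diff <= half_fov:
            kept.append(img)
    return kept
```

Only the images returned by such a filter would then be passed to the (expensive) image-retrieval matching step, which is where the reduction in search-set size reported in the paper comes from.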

Citations (23)
