LSGDDN-LCD: An Appearance-based Loop Closure Detection using Local Superpixel Grid Descriptors and Incremental Dynamic Nodes (2304.03872v2)

Published 8 Apr 2023 in cs.CV and cs.RO

Abstract: Loop Closure Detection (LCD) is an essential component of visual simultaneous localization and mapping (SLAM) systems. It enables the recognition of previously visited scenes to eliminate pose and map estimate drift arising from long-term exploration. However, current appearance-based LCD methods face significant challenges, including high computational costs, viewpoint variance, and dynamic objects in scenes. This paper introduces an online appearance-based LCD method using local superpixel grid descriptors and dynamic nodes, i.e., LSGDDN-LCD, which finds similarities between scenes via hand-crafted features extracted from the LSGD. Unlike traditional Bag-of-Words (BoW) based LCD, which requires pre-training, we propose an adaptive mechanism to group similar images, called the $\textbf{\textit{dynamic}}$ $\textbf{\textit{node}}$, which incrementally adjusts the database in an online manner, allowing efficient retrieval of previously viewed images without the need for pre-training. Experimental results confirm that LSGDDN-LCD significantly improves LCD precision-recall and efficiency, and outperforms several state-of-the-art (SOTA) approaches on multiple typical datasets, indicating its great potential as a generic LCD framework.
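The abstract's key mechanism is the dynamic node: an adaptive grouping scheme that organizes the image database incrementally and online, so loop candidates can be retrieved without a pre-trained vocabulary. The paper's exact descriptor and similarity measure are not given in the abstract, so the sketch below is only a minimal illustration of such an incremental grouping scheme, assuming a generic image descriptor (a simple grid of mean intensities standing in for the LSGD) and cosine similarity on unit vectors. All names, thresholds, and update rules here are hypothetical, not the authors' implementation.

```python
import numpy as np

def grid_descriptor(image: np.ndarray, grid: int = 8) -> np.ndarray:
    """Hypothetical stand-in for the paper's LSGD: mean intensity over a
    grid x grid partition of a grayscale image, L2-normalized."""
    h, w = image.shape
    cells = [
        image[i * h // grid:(i + 1) * h // grid,
              j * w // grid:(j + 1) * w // grid].mean()
        for i in range(grid) for j in range(grid)
    ]
    d = np.asarray(cells, dtype=np.float64)
    n = np.linalg.norm(d)
    return d / n if n > 0 else d

class DynamicNodeDB:
    """Incremental, online image database: each 'node' groups similar
    images and keeps a running-mean descriptor as its representative."""

    def __init__(self, sim_threshold: float = 0.95):
        self.sim_threshold = sim_threshold  # assumed value, not from the paper
        self.reps = []     # one representative descriptor per node
        self.members = []  # image ids grouped under each node

    def insert(self, img_id: int, desc: np.ndarray) -> int:
        """Assign the image to the most similar node, or open a new node."""
        if self.reps:
            sims = np.array([rep @ desc for rep in self.reps])  # cosine on unit vectors
            best = int(sims.argmax())
            if sims[best] >= self.sim_threshold:
                k = len(self.members[best])
                # Incrementally update the node representative (running mean),
                # then renormalize so dot products stay cosine similarities.
                self.reps[best] = (self.reps[best] * k + desc) / (k + 1)
                self.reps[best] /= np.linalg.norm(self.reps[best])
                self.members[best].append(img_id)
                return best
        self.reps.append(desc)
        self.members.append([img_id])
        return len(self.reps) - 1

    def query(self, desc: np.ndarray, node_sim: float = 0.9) -> list[int]:
        """Return candidate loop-closure images from sufficiently similar nodes."""
        out = []
        for rep, ids in zip(self.reps, self.members):
            if rep @ desc >= node_sim:
                out.extend(ids)
        return out
```

Comparing a query against node representatives first, and only then against the images inside matching nodes, is what keeps retrieval cost low as the map grows; the paper pairs this idea with its superpixel-grid descriptor rather than the toy descriptor used here.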

