Handling of USAR Void-Space Visual Conditions by SLAM and SfM Algorithms
Determine how widely used online mapping algorithms in robotics (specifically ORB-SLAM2, NeRF-SLAM, and RTAB-Map) and offline Structure-from-Motion pipelines (e.g., COLMAP) handle the dim and inconsistent illumination, sparse or low-contrast visual features, and short camera-to-scene working distances encountered inside the void spaces of collapsed structures typical of urban search and rescue (USAR) operations.
While several of these exemplar environments share characteristics with collapsed structures, it is unknown how these general classes of algorithms will handle the dim, inconsistent lighting, the general scarcity of distinct visual features, and the short camera-to-scene working distances that are hallmarks of search and rescue robotics.
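One way to probe these conditions before running full SLAM or SfM pipelines is to measure per-frame brightness, contrast, and detectable feature counts directly on candidate imagery. The sketch below is a minimal, illustrative diagnostic, assuming Python with OpenCV and NumPy available; the frame directory `void_space_frames/` and the feature budget are hypothetical placeholders, and the ORB detector is used only because it is the same feature type employed by ORB-SLAM2, not because it reproduces any of these systems' front ends.

```python
# Hedged sketch: quantify illumination and feature sparsity in candidate
# void-space frames before feeding them to SLAM/SfM pipelines.
# The directory name and feature budget below are hypothetical placeholders.
import glob

import cv2
import numpy as np


def frame_statistics(path, n_features=1000):
    """Return mean brightness, RMS contrast, and ORB keypoint count for one frame."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise IOError(f"could not read {path}")
    mean_brightness = float(np.mean(img))       # dim scenes sit low on the 0-255 scale
    rms_contrast = float(np.std(img))           # low-contrast scenes have small spread
    orb = cv2.ORB_create(nfeatures=n_features)  # ORB: the feature type used by ORB-SLAM2
    keypoints = orb.detect(img, None)           # feature-poor scenes yield few keypoints
    return mean_brightness, rms_contrast, len(keypoints)


if __name__ == "__main__":
    # Hypothetical directory of frames captured inside a void space.
    for path in sorted(glob.glob("void_space_frames/*.png")):
        brightness, contrast, n_kp = frame_statistics(path)
        print(f"{path}: brightness={brightness:.1f}, "
              f"contrast={contrast:.1f}, ORB keypoints={n_kp}")
```

Frames that report low brightness, low RMS contrast, or very few keypoints are the cases most likely to stress the tracking and matching stages of the algorithms named above, so such a pass can help select representative test sequences.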