
Surface Edge Explorer (SEE): Planning Next Best Views Directly from 3D Observations

Published 23 Feb 2018 in cs.RO (arXiv:1802.08617v1)

Abstract: Surveying 3D scenes is a common task in robotics. Systems can do so autonomously by iteratively obtaining measurements. This process of planning observations to improve the model of a scene is called Next Best View (NBV) planning. NBV planning approaches often use either volumetric (e.g., voxel grids) or surface (e.g., triangulated meshes) representations. Volumetric approaches generalise well between scenes as they do not depend on surface geometry but do not scale to high-resolution models of large scenes. Surface representations can obtain high-resolution models at any scale but often require tuning of unintuitive parameters or multiple survey stages. This paper presents a scene-model-free NBV planning approach with a density representation. The Surface Edge Explorer (SEE) uses the density of current measurements to detect and explore observed surface boundaries. This approach is shown experimentally to provide better surface coverage in lower computation time than the evaluated state-of-the-art volumetric approaches while moving equivalent distances.

Citations (29)

Summary

  • The paper presents a novel density-based NBV planning approach using frontier detection and local surface geometry estimation.
  • It employs eigendecomposition to estimate planar surfaces and guides view generation for optimal sensor coverage.
  • SEE demonstrates improved surface coverage and reduced computational time compared to traditional volumetric methods in large-scale scenes.


Introduction

The paper introduces Surface Edge Explorer (SEE), a novel scene-model-free approach to Next Best View (NBV) planning. SEE uses a density representation to autonomously plan observations of 3D scenes, achieving better surface coverage in less computation time than traditional volumetric approaches. It aims to overcome the limitations of both volumetric and surface-based NBV planning methods, providing a scalable solution for large-scale scene modelling.

Methodology

Frontier Detection

SEE classifies points into core, frontier, or outlier categories based on local measurement density, using a method inspired by DBSCAN. This classification isolates frontiers, the boundaries between observed and partially observed surfaces. By relying on measurement density and resolution rather than a volumetric scene model, SEE avoids the extensive computational requirements of volumetric methods (Figure 1).

Figure 1: An illustration of SEE's density-based classification. Points with a sufficient number of neighbours are classified as core points (black) while those without are outlier points (white). Points with both core points and outlier points in their neighbourhood are frontier points (grey).
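The core/frontier/outlier classification can be sketched in a few lines. The function name classify_points and the brute-force neighbour search below are illustrative choices, not the paper's implementation; a point with enough neighbours within a radius is treated as core, and any point whose neighbourhood mixes core and outlier points is marked as a frontier:

```python
import numpy as np

def classify_points(points, radius, min_neighbours):
    """Label each point "core", "outlier", or "frontier" by local density.

    A point with at least `min_neighbours` other points within `radius`
    is core; otherwise it is an outlier. Any point whose neighbourhood
    contains both core and outlier points is relabelled as a frontier.
    """
    # Pairwise distances (brute force; a k-d tree would scale better).
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    within = dist <= radius
    np.fill_diagonal(within, False)          # exclude the point itself
    is_core = within.sum(axis=1) >= min_neighbours
    labels = np.where(is_core, "core", "outlier").astype(object)
    for i in range(len(points)):
        neigh = np.flatnonzero(within[i])
        # Mixed-density neighbourhood => boundary of the observed surface.
        if neigh.size and is_core[neigh].any() and (~is_core[neigh]).any():
            labels[i] = "frontier"
    return labels
```

On a dense cluster with one sparsely observed point nearby, the cluster interior comes out as core, the isolated point as an outlier, and the cluster points adjacent to it as frontiers.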

Surface Geometry Estimation

The local surface geometry around each frontier point is approximated as planar via eigendecomposition of the nearby measurements, yielding estimates of the normal, boundary, and frontier vectors. These vectors guide view generation to ensure effective sensor coverage (Figure 2).

Figure 2: An illustration of SEE's local surface geometry estimation. The geometry of the surface at the frontier points (grey) is estimated from nearby points as an orthogonal set of vectors: one oriented normal to the surface (out of the page), one parallel to the boundary line, and one perpendicular to the boundary line (i.e., into the frontier).
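As a rough illustration of the planar fit, the eigendecomposition of the neighbourhood covariance might look like the following; the helper name estimate_local_plane is hypothetical, and this is a generic plane fit, not the paper's exact procedure. The eigenvector with the smallest eigenvalue serves as the surface normal, while the other two span the tangent plane:

```python
import numpy as np

def estimate_local_plane(neighbours):
    """Fit a local plane to a frontier point's neighbourhood.

    Eigendecomposition of the neighbourhood covariance gives an
    orthogonal frame: the eigenvector of the smallest eigenvalue is
    the surface normal; the other two eigenvectors span the plane.
    """
    centred = neighbours - neighbours.mean(axis=0)
    cov = centred.T @ centred / len(neighbours)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest-variance direction
    tangent_1 = eigvecs[:, 1]
    tangent_2 = eigvecs[:, 2]
    return normal, tangent_1, tangent_2
```

For points sampled from a flat patch, the recovered normal is perpendicular to the patch (up to sign), and the three vectors are mutually orthogonal by construction.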

View Generation

View positions are generated orthogonally to the estimated surface, aiming to maximize coverage of the boundary surfaces. The views are iteratively adjusted to accommodate surface discontinuities, optimizing observation fidelity (Figure 3).

Figure 3: An illustration of SEE's initial view proposal generation. Initial view proposals are generated around each frontier point (grey) from the estimated local surface geometry. The view orientation is given by the negated surface normal vector, and the view position is set at a view distance, d_v, from the frontier point along the normal.
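The initial view proposal reduces to placing the sensor a view distance along the estimated normal and pointing it back at the frontier point. A minimal sketch (the function name and signature are illustrative, not the paper's API):

```python
import numpy as np

def propose_view(frontier_point, surface_normal, view_distance):
    """Propose an initial view for a frontier point.

    The view position sits at `view_distance` along the estimated
    surface normal from the frontier point; the view orientation is
    the negated normal, so the sensor looks back at the surface.
    """
    n = surface_normal / np.linalg.norm(surface_normal)
    position = frontier_point + view_distance * n
    orientation = -n
    return position, orientation
```

Subsequent adjustment of this proposal around occlusions and surface discontinuities is omitted here for brevity.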

Evaluation

Experimental Setup

SEE was evaluated in simulation on standard 3D models, including the Stanford Bunny and a full-scale model of the Radcliffe Camera. Surface coverage and the effects of occlusion were assessed, comparing SEE against state-of-the-art volumetric approaches (AF, AE, and OA). Measurements were simulated with additive Gaussian noise to approximate real-world sensing (Figure 4).


Figure 4: A comparison of the point cloud resulting from running SEE (a) and AE (Kriegel et al., 2015) (b) on a full-scale model of the Radcliffe Camera in Oxford.
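Gaussian measurement noise of the kind used in these simulations can be approximated by perturbing each simulated surface sample independently; the helper name and the choice of an isotropic, per-axis noise model are assumptions for illustration, not the paper's exact sensor model:

```python
import numpy as np

def add_sensor_noise(points, sigma, rng=None):
    """Perturb simulated surface measurements with zero-mean Gaussian
    noise of standard deviation `sigma`, approximating depth-sensor error."""
    rng = np.random.default_rng(rng)
    return points + rng.normal(scale=sigma, size=points.shape)
```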

Results

SEE demonstrated superior surface coverage with reduced computation time, effectively scaling to complex and large scenes. It maintained equivalent travel distances when compared to the evaluated volumetric approaches, showcasing its adaptability and efficiency.

Discussion

SEE's density-based NBV planning significantly reduces computational complexity, especially for large-scale scenes like architectural models where volumetric approaches become inefficient. Its intuitive parameterization avoids the cumbersome tuning associated with traditional surface methods.

Conclusion

SEE presents a robust and efficient scene-model-free NBV planning method capable of achieving high-resolution 3D scene models. Its density representation offers practical advantages over existing volumetric and surface approaches. Ongoing development aims to extend the comparison to surface-based approaches and to deploy SEE on aerial platforms in real-world experiments.

In summary, SEE provides a powerful tool for automated 3D scene observation, enhancing model resolution while minimizing computational burden, making it particularly beneficial for applications requiring detailed and large-scale 3D mapping.
