- The paper introduces a dual-graph design that integrates asynchronous IMU, lidar, and GNSS data to maintain consistent state estimation during sensor dropouts.
- The paper leverages a multi-threaded prediction-update loop to achieve near-real-time performance with sub-centimeter global accuracy.
- The paper validates the approach on heavy construction robots, demonstrating its superiority over traditional filters in dynamic, complex environments.
Graph-based Multi-sensor Fusion for Consistent Localization of Autonomous Construction Robots
This paper addresses the critical requirement for robust and accurate state estimation and localization of autonomous construction robots, with a specific focus on large-scale construction machines such as excavators. These machines play a pivotal role in many industries, and automating them promises to enhance safety and operational capability in hazardous environments. The authors propose a graph-based multi-sensor fusion approach that integrates IMU, lidar, and GNSS data to achieve consistent, high-frequency state estimates.
The paper presents a dual-graph design within a prediction-update loop framework to address the challenges posed by dynamically changing, complex environments. This design allows seamless integration of asynchronous measurements and gracefully handles sensor dropout, ensuring robust performance in real-world applications. The implementation builds on the GTSAM framework, with sensor inputs from a Leica GNSS system and an Ouster lidar, and is validated on two Menzi Muck walking excavators.
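The dual-graph idea can be conveyed with a deliberately simplified sketch. The following 1-D toy (not the paper's GTSAM implementation; all names and numbers here are invented) keeps a "global" chain that fuses absolute GNSS-style fixes with relative lidar-odometry-style factors, alongside an "odometry" chain built from relative factors only, so a usable estimate survives a complete GNSS dropout:

```python
def solve_chain(odom, gnss):
    """Least-squares estimate of a 1-D pose chain.

    odom: list of relative displacements between consecutive poses
          (lidar-odometry-style binary factors).
    gnss: dict {pose_index: absolute position} (GNSS-style unary factors;
          may be empty during a dropout).
    """
    # Dead-reckon first: integrate the relative factors from an arbitrary origin.
    x = [0.0]
    for d in odom:
        x.append(x[-1] + d)
    if not gnss:
        # Odometry-only graph: locally consistent but globally unanchored.
        return x
    # With absolute factors, shift the whole chain to minimize the squared
    # GNSS residuals; for equal weights in 1-D this is just the mean offset.
    offset = sum(z - x[i] for i, z in gnss.items()) / len(gnss)
    return [xi + offset for xi in x]

odom = [1.0, 1.0, 1.0, 1.0]     # relative lidar odometry
gnss = {0: 0.1, 4: 4.1}         # absolute GNSS fixes at poses 0 and 4
print(solve_chain(odom, gnss))  # globally anchored near the GNSS fixes
print(solve_chain(odom, {}))    # [0.0, 1.0, 2.0, 3.0, 4.0] during a dropout
```

In the actual system the graphs share factors and the optimizer works incrementally over SE(3) poses; the toy only conveys why an odometry-only graph remains usable when GNSS factors are absent.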
A key aspect of this approach is the combination of filtering and smoothing methods, which traditionally trade off speed against accuracy. The prediction-update loop exploits a multi-threaded architecture to deliver the low-latency, near-real-time state estimates required for control, while graph optimization maintains global consistency. Such an approach is particularly beneficial for handling delayed and nonlinear sensor measurements, a significant limitation of conventional filtering techniques.
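The timing structure of such a loop can be sketched as follows. This is a single-threaded simulation of the two rates (the real system runs them in separate threads); the rates, the bias value, and the "optimization" step that simply snaps to ground truth are all invented for illustration:

```python
IMU_RATE, OPT_RATE = 100, 10  # Hz (illustrative values only)

def run(duration_s, vel=1.0, imu_bias=0.02):
    """Simulate the prediction-update loop on a 1-D constant-velocity platform."""
    anchor_t, anchor_x = 0.0, 0.0  # latest optimized (graph) state
    estimates = []
    for k in range(1, int(duration_s * IMU_RATE) + 1):
        t = k / IMU_RATE
        # Prediction: integrate the (biased) IMU forward from the latest
        # optimized anchor -- available at full IMU rate, hence low latency.
        pred = anchor_x + (vel + imu_bias) * (t - anchor_t)
        estimates.append((t, pred))
        # Update: at the lower rate, graph optimization re-estimates the
        # state (here it simply snaps to ground truth as a stand-in).
        if k % (IMU_RATE // OPT_RATE) == 0:
            anchor_t, anchor_x = t, vel * t
    return estimates

t_final, x_final = run(1.0)[-1]
# Drift stays bounded by imu_bias * (one optimizer period) = 0.002 m.
print(f"drift after 1 s: {abs(x_final - t_final * 1.0):.4f} m")
```

The design point the sketch illustrates: prediction alone drifts with the IMU bias, optimization alone is too slow for control, but re-anchoring the fast loop on every optimizer result bounds the error by the drift accumulated over a single optimizer period.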
The dual-graph system is particularly notable for its handling of GNSS dropouts, a common issue caused by environmental occlusion such as dense foliage or urban canyons. By maintaining consistent pose estimates from lidar data during GNSS outages, the system remains resilient and transitions smoothly once the GNSS signal is reacquired. This capability is validated in practical settings, demonstrating the system's ability to build accurate maps and apply global localization corrections upon GNSS signal restoration.
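The reacquisition behavior can be sketched in the same hedged 1-D style (invented numbers; a stand-in for, not a reproduction of, the paper's smoothing-based mechanism): during an outage the pose is propagated with odometry alone, and when GNSS returns the accumulated drift is measured once and bled back in over subsequent steps instead of jumping the state discontinuously:

```python
def track(measurements):
    """measurements: list of ('odom', dx) or ('gnss', x_abs) events (1-D)."""
    x = 0.0           # fused global estimate
    correction = 0.0  # pending offset, applied smoothly after reacquisition
    trace = []
    for kind, value in measurements:
        if kind == 'odom':
            x += value
            # Bleed in half of any pending correction per step.
            step = 0.5 * correction
            x += step
            correction -= step
        else:  # 'gnss': measure accumulated drift, schedule a smooth correction
            correction += value - x
        trace.append(round(x, 3))
    return trace

events = [('gnss', 0.0),
          ('odom', 1.0), ('odom', 1.0),  # GNSS healthy region
          ('odom', 1.1), ('odom', 1.1),  # dropout: odometry drifts
          ('gnss', 4.0),                 # reacquisition: drift of -0.2 detected
          ('odom', 1.0), ('odom', 1.0)]  # correction bled in smoothly
print(track(events))  # [0.0, 1.0, 2.0, 3.1, 4.2, 4.2, 5.1, 6.05]
```

The exponential bleed-in rate of 0.5 is arbitrary; the point is that the estimate stays continuous through both the outage and the correction, which is what downstream controllers need.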
The empirical evaluation demonstrates successful deployment of the proposed method in real-world scenarios and compares it against existing methods such as the Two-State Implicit Filter (TSIF) and traditional multi-sensor fusion approaches. The numerical results underline the method's ability to maintain sub-centimeter global accuracy and consistency, attesting to its robustness in practical autonomous navigation tasks.
In conclusion, this paper contributes significantly to the field of robotics by presenting a multi-modal sensor fusion technique that addresses the challenges of autonomous operation in construction environments. Its findings enhance our understanding of reliable localization systems in outdoor and dynamic settings, encouraging further research into flexible sensor fusion frameworks. Future work could explore the joint optimization of additional parameters, such as chassis orientation, encoder biases, and sensor time-offsets, to further refine the system's precision and applicability. This exploration is essential for advancing autonomous capabilities in construction robotics and other industrial applications.