- The paper presents VILENS, a tightly coupled multi-sensor fusion framework that reduces translational and rotational errors by 62% and 51%, respectively, relative to a state-of-the-art loosely coupled baseline.
- It integrates visual, inertial, lidar, and leg odometry data within a factor graph to overcome limitations of single-sensor systems in extreme terrains.
- Experimental validation on ANYmal quadruped robots demonstrates enhanced robustness, paving the way for reliable autonomous navigation under adverse conditions.
VILENS: Advancing Multi-Sensor Fusion for Legged Robots
The paper presents VILENS, a robust odometry system for all-terrain legged robots that fuses visual, inertial, lidar, and leg odometry data within a single factor-graph framework. In contrast to single-modality systems, VILENS targets scenarios in which individual sensors degrade or fail under environmental constraints, providing a comprehensive solution for state estimation in extreme conditions.
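As a rough illustration of what such a tightly coupled factor-graph formulation typically looks like (the notation below is generic, not copied from the paper), the robot state is estimated by minimizing a sum of weighted residuals contributed by each modality:

$$
\mathcal{X}^{*} = \arg\min_{\mathcal{X}} \; \sum_{i} \lVert \mathbf{r}^{\mathrm{IMU}}_{i} \rVert^{2}_{\Sigma_{\mathrm{IMU}}} + \sum_{j} \lVert \mathbf{r}^{\mathrm{vis}}_{j} \rVert^{2}_{\Sigma_{\mathrm{vis}}} + \sum_{k} \lVert \mathbf{r}^{\mathrm{lidar}}_{k} \rVert^{2}_{\Sigma_{\mathrm{lidar}}} + \sum_{l} \lVert \mathbf{r}^{\mathrm{leg}}_{l} \rVert^{2}_{\Sigma_{\mathrm{leg}}}
$$

Here $\mathcal{X}$ collects the poses, velocities, and bias states in the optimization window, and $\lVert\cdot\rVert_{\Sigma}$ denotes the Mahalanobis norm that weights each residual by its measurement covariance.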
The core novelty of VILENS lies in tightly integrating measurements from all four sensing modalities, which markedly improves the reliability of pose estimation. This tight fusion matters most when any individual sensor would, on its own, produce a degenerate estimate. Notably, the system attaches a velocity bias term to the leg odometry measurements; this term compensates for the drift, caused for example by foot slippage and leg compliance, that is typical of legged locomotion on complex terrain. The bias is rendered observable by the other modalities, which collectively inform a more accurate estimate than any single sensor could achieve.
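To make the velocity bias concrete, below is a minimal Python sketch of what a bias-corrected leg-odometry velocity residual could look like. The function name, frame conventions, and numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def leg_velocity_residual(v_world, R_wb, v_leg_meas, bias, sigma):
    """Whitened residual of a single leg-odometry velocity factor (illustrative sketch).

    v_world    : (3,) estimated base velocity in the world frame (state)
    R_wb       : (3, 3) rotation from body to world frame (state)
    v_leg_meas : (3,) base velocity measured via leg kinematics, body frame
    bias       : (3,) slowly varying velocity bias state that absorbs
                 systematic drift from foot slip and leg compliance
    sigma      : (3,) per-axis measurement standard deviation (m/s)
    """
    predicted = R_wb @ (v_leg_meas - bias)   # bias-corrected leg velocity, world frame
    return (v_world - predicted) / sigma     # residual the optimizer drives toward zero


# Minimal usage: a slipping foot inflates the raw leg velocity; the bias state
# lets the optimizer explain that offset instead of corrupting the pose estimate.
r = leg_velocity_residual(
    v_world=np.array([0.50, 0.00, 0.00]),
    R_wb=np.eye(3),
    v_leg_meas=np.array([0.65, 0.00, 0.00]),   # overestimated due to slippage
    bias=np.array([0.15, 0.00, 0.00]),         # bias absorbs the offset
    sigma=np.array([0.05, 0.05, 0.05]),
)
print(r)  # ~[0, 0, 0]: the factor is consistent once the bias is estimated
```

In a full factor graph, one such residual would be added per measurement epoch alongside the visual, lidar, and preintegrated IMU factors, with the bias itself modeled as a slowly varying state.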
The experimental validation of VILENS, conducted with ANYmal quadruped robots over a diverse range of terrains, demonstrates its practical efficacy. The system reduced translational and rotational errors by 62% and 51%, respectively, compared to a state-of-the-art loosely coupled baseline. These results underscore the effectiveness of VILENS in limiting estimation drift across challenging environments, including loose rocks, slopes, mud, and visually feature-poor settings such as dark caves and open fields.
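For context on how such percentage improvements are typically measured, the sketch below computes a generic relative pose error between an estimated and a ground-truth trajectory; the paper's exact evaluation protocol (e.g. segment lengths, trajectory alignment) may differ.

```python
import numpy as np

def relative_errors(T_gt, T_est):
    """Per-step relative translational (m) and rotational (rad) errors.

    A generic relative-pose-error style metric, sketched for illustration.
    T_gt, T_est : lists of 4x4 homogeneous ground-truth / estimated poses.
    """
    t_err, r_err = [], []
    for k in range(len(T_gt) - 1):
        d_gt = np.linalg.inv(T_gt[k]) @ T_gt[k + 1]      # ground-truth motion increment
        d_est = np.linalg.inv(T_est[k]) @ T_est[k + 1]   # estimated motion increment
        e = np.linalg.inv(d_gt) @ d_est                  # residual transform
        t_err.append(np.linalg.norm(e[:3, 3]))           # translation error magnitude
        cos_angle = np.clip((np.trace(e[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        r_err.append(np.arccos(cos_angle))               # rotation angle of residual
    return np.array(t_err), np.array(r_err)
```

Averaging such per-step errors for two estimators and comparing the means is one standard way to arrive at relative-improvement figures like the 62%/51% reported here.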
The paper also shows VILENS integrated with higher-level components such as perceptive locomotion controllers and local path planners. Feeding these modules a more reliable state estimate improves the robustness of their decisions, yielding a complete system ready for real-world deployment in adverse conditions such as those encountered in the DARPA Subterranean Challenge.
Beyond the immediate results, the implications of this work extend to the broader landscape of field robotics, where maturing sensor fusion techniques can significantly enhance autonomy and adaptability. The ability to estimate state accurately despite complex interactions with unpredictable terrain broadens the operational envelope of legged robots, enabling more capable exploration and autonomous operation.
Looking forward, the development opens up avenues for refining sensor fusion algorithms further and potentially adapting them to other mobile robotic systems. Future advancements could focus on optimizing computational efficiency and exploring machine learning techniques to auto-tune system parameters for diverse environments, streamlining the deployment of VILENS in an array of robotic platforms. In sum, the contribution to the field of robotic perception and navigation is notable, setting a precedent for robust, multi-sensor integrated systems.