- The paper introduces a decentralized state estimation framework that integrates visual-inertial and UWB sensors for robust aerial swarm operations.
- The methodology employs stereo wide-FoV cameras, UWB sensors, and graph-based optimization to mitigate drift and ensure global consistency.
- Experimental results demonstrate cm-level accuracy and plug-and-play capability, enhancing safety and adaptability in GPS-denied environments.
Overview of Omni-swarm: A Decentralized Omnidirectional Visual-Inertial-UWB State Estimation System for Aerial Swarms
The paper presents Omni-swarm, a decentralized solution designed to improve state estimation accuracy within aerial swarms operating in GPS-denied environments. The system addresses the key challenges of observability, initialization complexity, accuracy deficiencies, and global consistency by combining omnidirectional perception, provided by visual-inertial-UWB sensing, with decentralized state estimation techniques.
Measurement System and Methodology
Omni-swarm combines stereo wide-FoV cameras and UWB sensors into an omnidirectional perception front-end. This configuration captures the surrounding environment in all directions and feeds an omnidirectional visual-inertial odometry (VIO) pipeline for ego-motion estimation on each drone. The system also performs multi-drone map-based localization (MDML), which detects areas revisited by any drone and maintains global consistency by correcting odometry drift over time. Visual Drone Tracking (VDT) complements this framework with an object detection pipeline that locates and tracks neighboring drones in the omnidirectional images, adding direct relative observations that improve pose estimation accuracy. A sketch of what each drone's front-end might share per keyframe is given below.
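The following sketch is purely illustrative: the class and field names are hypothetical and do not reflect the paper's actual message format, but they show the kind of per-keyframe information (ego VIO pose, UWB ranges, visual drone detections, and a loop-closure descriptor for map-based localization) that a front-end like this could broadcast to peers.

```python
"""Illustrative front-end broadcast structure (hypothetical names, not the paper's format)."""
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DroneDetection:
    """A neighboring drone seen by the omnidirectional cameras (VDT)."""
    target_id: int
    bearing: List[float]       # unit direction vector in the observer's body frame
    distance: float            # coarse range inferred from the detection

@dataclass
class SwarmFrame:
    """One keyframe's worth of measurements, shared with peers over the network."""
    drone_id: int
    stamp: float
    vio_pose: List[float]          # ego pose from omnidirectional VIO, e.g. [x, y, z, yaw]
    uwb_ranges: Dict[int, float]   # peer id -> measured distance in metres
    detections: List[DroneDetection] = field(default_factory=list)
    loop_descriptor: bytes = b""   # compact descriptor used for map-based relocalization
```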
In the back-end, these omnidirectional measurements, describing both ego-motion and inter-drone relative positioning, are fused through graph-based optimization with outlier rejection. By fusing VIO results with UWB-derived distance measurements, the system achieves relative state estimation accuracy at the centimeter level, supporting collision avoidance within the swarm without reliance on external infrastructure. A minimal fusion sketch follows.
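The paper describes the back-end as factor-graph optimization with outlier rejection; as a rough stand-in rather than the authors' implementation, the sketch below fuses weak per-drone position priors (standing in for drifting VIO estimates) with noisy inter-drone UWB ranges using robust nonlinear least squares with a Huber loss. The drone positions, noise levels, and weighting are invented for illustration.

```python
"""Minimal sketch of VIO + UWB fusion via robust least squares (not the paper's code)."""
import numpy as np
from scipy.optimize import least_squares

np.random.seed(0)

# Hypothetical 4-drone swarm: ground-truth positions and drifting VIO priors (metres).
true_pos  = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 4.0]])
vio_prior = np.array([[0.0, 0.0], [4.1, 0.2], [3.9, 4.1], [-0.1, 3.8]])

# Simulated pairwise UWB ranges with ~5 cm noise.
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
uwb_range = np.array([np.linalg.norm(true_pos[i] - true_pos[j]) for i, j in pairs])
uwb_range += np.random.normal(0.0, 0.05, uwb_range.shape)

def residuals(x):
    pos = x.reshape(4, 2)
    # Range factors: estimated inter-drone distance vs. UWB measurement.
    r_range = [np.linalg.norm(pos[i] - pos[j]) - d for (i, j), d in zip(pairs, uwb_range)]
    # Weak VIO priors anchor the gauge freedom (translation/rotation of the whole swarm).
    r_prior = 0.3 * (pos - vio_prior).ravel()
    return np.concatenate([r_range, r_prior])

sol = least_squares(residuals, vio_prior.ravel(), loss="huber", f_scale=0.1)
print(sol.x.reshape(4, 2))  # fused relative positions, close to true_pos
```

The Huber loss plays the role of a simple outlier-rejection mechanism: large ranging errors are down-weighted instead of dominating the solution.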
Results and Implications
The experimental results validate Omni-swarm's ability to deliver a stable and precise decentralized state estimation framework. It improves observability across varying environments, simplifies initialization, and compensates for individual drone drift through map-based localization. A key advantage is the plug-and-play capability: drones can dynamically join or leave the swarm without compromising estimation stability, demonstrating robustness to failures in communication or sensing. A sketch of the bookkeeping such a capability implies is shown below.
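As a hypothetical illustration of what plug-and-play implies for the estimator, rather than the paper's actual mechanism, the sketch below tracks which peers are currently heard from; newly heard drones are incorporated as their measurements arrive, and silent ones are simply excluded from subsequent optimization.

```python
"""Hypothetical plug-and-play bookkeeping sketch (class and method names are invented)."""
import time
from typing import Dict, List

class SwarmPeerTable:
    def __init__(self, timeout_s: float = 2.0):
        self.timeout_s = timeout_s
        self.last_heard: Dict[int, float] = {}   # peer id -> last message time

    def on_frame(self, drone_id: int) -> None:
        # A newly heard drone is added as soon as its frames start arriving.
        self.last_heard[drone_id] = time.monotonic()

    def active_peers(self) -> List[int]:
        # Peers that fall silent are dropped from the active set; their past
        # measurements can be kept or marginalized without restarting the estimator.
        now = time.monotonic()
        return [i for i, t in self.last_heard.items() if now - t < self.timeout_s]
```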
Comparisons and Future Prospects
In comparative tests against existing methods, namely prior UWB-odometry fusion approaches and VIO-only methods, Omni-swarm demonstrated superior performance, particularly in parallel flight formations where drones rarely observe one another and relative observability is poor. Despite these strengths, scalability is limited as swarm size grows: because every pair of drones can contribute relative measurements, the number of inter-drone factors and hence the cost of factor graph optimization grow roughly quadratically with the number of drones, and the system also relies on accurate camera calibration.
The adaptability and accuracy of Omni-swarm's framework pave the way for deployment in real-world autonomous flight operations. Future work may focus on scaling Omni-swarm to larger fleets, improving communication strategies, and developing on-the-fly calibration and correction mechanisms to further strengthen system resilience and functionality.
In conclusion, Omni-swarm lays the foundation for more reliable collaborative and autonomous aerial operations, offering significant enhancements in handling decentralized state estimation across complex environments, thereby broadening the operating scope of aerial swarms.