Autonomous Drone Race: Computationally Efficient Vision-Based Navigation and Control Strategy
This paper presents the development and testing of an autonomous drone-racing system that emphasizes computational efficiency and real-time performance using only onboard resources. Conducted at Delft University of Technology, the research sits at the intersection of high-speed flight and autonomous navigation, contributing a vision-based navigation strategy tailored to micro aerial vehicles (MAVs) and implemented on a Parrot Bebop 1 drone.
Methodology and System Design
The authors propose a novel, lightweight vision-based navigation method, termed "snake gate detection," that identifies race gates in images from a fish-eye camera with minimal computational overhead. This approach departs from computationally intensive techniques such as simultaneous localization and mapping (SLAM) and visual-inertial odometry (VIO), which are typically unsuitable for MAVs with constrained processing capabilities.
The snake gate detection algorithm randomly samples points in the image and classifies each by color; when a sample hits the gate's distinctive color, the algorithm iteratively "snakes" along the colored bars to locate the gate's corners. The method runs at 20 Hz onboard, significantly faster than more computationally intensive alternatives.
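To make the sampling-and-crawling pattern concrete, here is a minimal Python sketch of the idea. The helper names, the RGB color threshold, and the one-pixel sidestep tolerance are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def is_gate_color(img, x, y, target=(255, 120, 0), tol=60):
    """Crude per-pixel color test against a nominal gate color.
    A real detector would threshold in a lighting-robust space (e.g. YUV)."""
    return bool(np.all(np.abs(img[y, x].astype(int) - np.array(target)) <= tol))

def snake_vertical(img, x, y):
    """Crawl up and then down from a seed pixel along gate-colored pixels,
    sidestepping one pixel left/right whenever the color run breaks."""
    h, w = img.shape[:2]
    ends = []
    for step in (-1, 1):                      # upward pass, then downward pass
        cx, cy = x, y
        while 0 <= cy + step < h:
            for dx in (0, -1, 1):             # straight first, then sidestep
                nx = cx + dx
                if 0 <= nx < w and is_gate_color(img, nx, cy + step):
                    cx, cy = nx, cy + step
                    break
            else:
                break                         # no colored neighbor: bar ends
        ends.append((cx, cy))
    return ends                               # [top end, bottom end]

def detect_gate(img, n_samples=500, rng=None):
    """Randomly sample pixels; on a color hit, trace the vertical bar.
    A full detector also snakes horizontally from both ends, so that all
    four gate corners are recovered."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    for _ in range(n_samples):
        x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
        if is_gate_color(img, x, y):
            (tx, ty), (bx, by) = snake_vertical(img, x, y)
            if by - ty > 0.05 * h:            # reject small color blobs
                return (tx, ty), (bx, by)
    return None
```

Because each probe touches only a handful of pixels rather than the full frame, the cost scales with the number of samples instead of the image size, which is what keeps this style of detector within a small onboard CPU budget.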
Following gate detection, the system estimates the drone's pose by fusing the detected gate with attitude data from the onboard attitude and heading reference system (AHRS). The resulting pose estimates tolerate detection noise better than state-of-the-art perspective-n-point (PnP) methods.
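The benefit of a known attitude can be seen in a small sketch: once the AHRS fixes the rotation, each detected corner constrains the camera to lie on a known 3D ray, and the remaining translation can be found with a linear least-squares solve rather than a full nonlinear PnP iteration. The function and variable names below are illustrative, not the paper's code:

```python
import numpy as np

def position_from_gate(corners_w, bearings_b, R_wb):
    """Least-squares camera position given a known attitude.

    corners_w  : (4, 3) world coordinates of the gate corners (from the map)
    bearings_b : (4, 3) unit bearing vectors to the detected corners,
                 expressed in the body/camera frame
    R_wb       : (3, 3) body-to-world rotation taken from the AHRS
    """
    rows, rhs = [], []
    for P, b in zip(np.asarray(corners_w), np.asarray(bearings_b)):
        d = R_wb @ b
        d = d / np.linalg.norm(d)           # world-frame ray direction
        A = np.eye(3) - np.outer(d, d)      # projector orthogonal to the ray
        rows.append(A)                      # the ray through position t with
        rhs.append(A @ P)                   # direction d must hit corner P
    t, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return t                                # estimated camera position (world)
```

Because the AHRS attitude error is small and bounded, corner noise perturbs this linear solve far less than it perturbs the rotation estimate inside a PnP solver, which is one plausible intuition for the robustness the authors report.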
Because gates are not always within the drone's field of view, the authors developed a state prediction-based feed-forward control strategy. It lets the drone continue along the expected trajectory when direct visual feedback is unavailable, for example during high-speed turns.
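A rough sketch of the idea, under our own illustrative motion model: while no gate is detected, the state is dead-reckoned with a thrust-tilt-plus-drag model driven by the commanded attitude, and the turn is flown open loop from a precomputed attitude schedule. The gravity and drag constants and the coordinated-turn yaw-rate relation are textbook placeholders, not the paper's identified model:

```python
import numpy as np

G = 9.81       # gravity, m/s^2
K_DRAG = 0.5   # linear drag coefficient, 1/s (placeholder, not identified)

def predict_step(pos, vel, phi, theta, psi, dt):
    """Propagate 2D position/velocity one step from attitude alone:
    thrust tilt produces horizontal acceleration, linear drag opposes it."""
    a_body = np.array([-G * np.tan(theta), G * np.tan(phi)])
    c, s = np.cos(psi), np.sin(psi)
    a_world = np.array([c * a_body[0] - s * a_body[1],
                        s * a_body[0] + c * a_body[1]]) - K_DRAG * vel
    vel = vel + a_world * dt
    return pos + vel * dt, vel

# Open-loop attitude schedule for a banked half-circle: (duration s, roll, pitch).
# A ~21 deg bank at ~2.4 m/s corresponds to roughly a 1.5 m turn radius.
schedule = [(2.0, np.radians(21.0), 0.0)]

pos, vel, psi, dt = np.zeros(2), np.array([2.4, 0.0]), 0.0, 0.02
for duration, phi_cmd, theta_cmd in schedule:
    for _ in range(int(duration / dt)):
        # Coordinated-turn kinematics: psi_dot = g * tan(phi) / v
        psi += G * np.tan(phi_cmd) / max(np.linalg.norm(vel), 0.1) * dt
        pos, vel = predict_step(pos, vel, phi_cmd, theta_cmd, psi, dt)
# `pos` now holds the predicted exit state, used to re-acquire the next gate.
```

In the authors' system the predicted state plays this role until the next gate enters the field of view, at which point visual detection resumes and corrects the accumulated drift.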
Experimental Validation and Results
The system was tested first in a controlled environment and then in a more challenging setting: a showroom with dense obstacles and narrow pathways. It autonomously completed a track of 15 gates at a velocity of 1.5 m/s, surpassing the speeds of systems fielded in previous competitions, notably the 2016 and 2017 IROS Autonomous Drone Races.
Key performance metrics include the drone's ability to fly a half-circle of 1.5-meter radius in under 2 seconds while keeping the terminal error to only 30 cm. Since that arc spans roughly π × 1.5 ≈ 4.7 m, completing it in under 2 seconds implies an average speed above 2.3 m/s through the turn, illustrating the precision achievable without continuous position feedback from visual detection.
Implications and Future Directions
This work exemplifies the feasibility of executing complex navigation tasks using onboard resources for real-time processing, a critical aspect for the scalability of autonomous UAV systems in diverse applications. The results indicate potential applications where speed and computational efficiency are paramount, such as in autonomous inspections, search and rescue operations, and urban air mobility systems.
Future research may enhance the visual processing, for example by incorporating machine learning techniques for more robust feature detection across varying environments and lighting conditions. Integrating optimization-based techniques into the flight control could further improve trajectory efficiency and control accuracy, moving autonomous performance closer to, or beyond, that of human pilots on comparable tasks.
This paper underlines the growing capability of MAVs in performing high-speed, autonomous navigation, emphasizing advancements that may fuel continued research and eventual practical deployment of fully autonomous drone systems in a variety of dynamic and complex settings.