Self-Supervised Monocular Visual Drone Model Identification through Improved Occlusion Handling

Published 30 Apr 2025 in cs.RO and cs.AI | arXiv:2504.21695v1

Abstract: Ego-motion estimation is vital for drones when flying in GPS-denied environments. Vision-based methods struggle when flight speed increases and close-by objects lead to difficult visual conditions with considerable motion blur and large occlusions. To tackle this, vision is typically complemented by state estimation filters that combine a drone model with inertial measurements. However, these drone models are currently learned in a supervised manner with ground-truth data from external motion capture systems, limiting scalability to different environments and drones. In this work, we propose a self-supervised learning scheme to train a neural-network-based drone model using only onboard monocular video and flight controller data (IMU and motor feedback). We achieve this by first training a self-supervised relative pose estimation model, which then serves as a teacher for the drone model. To allow this to work at high speed close to obstacles, we propose an improved occlusion handling method for training self-supervised pose estimation models. Due to this method, the root mean squared error of resulting odometry estimates is reduced by an average of 15%. Moreover, the student neural drone model can be successfully obtained from the onboard data. It even becomes more accurate at higher speeds compared to its teacher, the self-supervised vision-based model. We demonstrate the value of the neural drone model by integrating it into a traditional filter-based VIO system (ROVIO), resulting in superior odometry accuracy on aggressive 3D racing trajectories near obstacles. Self-supervised learning of ego-motion estimation represents a significant step toward bridging the gap between flying in controlled, expensive lab environments and real-world drone applications. The fusion of vision and drone models will enable higher-speed flight and improve state estimation, on any drone in any environment.

Summary

The paper "Self-Supervised Monocular Visual Drone Model Identification through Improved Occlusion Handling" addresses a significant challenge in autonomous drone navigation, specifically ego-motion estimation in GPS-denied environments. Traditional vision-based methods often encounter difficulties under high-speed conditions due to motion blur and occlusions caused by objects close to the drone. To enhance the robustness and scalability of these methods, the authors propose a self-supervised learning approach that leverages onboard monocular video and flight controller data, thereby eliminating the dependency on external motion capture systems for supervised learning.

The methodology follows a teacher-student scheme: a relative pose estimation network is first trained in a self-supervised manner on onboard monocular video, and its pose predictions then serve as pseudo-ground-truth for a neural drone model that takes only flight controller data (IMU measurements and motor feedback) as input. Because no external motion capture system is required, the approach scales across different environments and drone types. A pivotal advancement in this work is the improved occlusion handling during pose estimation training, which markedly reduces errors in the resulting odometry estimates.
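
As a concrete illustration of this teacher-student setup, the sketch below shows one possible PyTorch formulation. It is a minimal, hypothetical sketch, not the authors' implementation: the network architectures, input dimensions, and names (PoseTeacher, DroneModel, distill_step) are illustrative assumptions.

import torch
import torch.nn as nn

# Hypothetical teacher: a frozen self-supervised relative pose network that
# maps a stacked image pair to a 6-DoF relative pose.
class PoseTeacher(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 6)  # [tx, ty, tz, rx, ry, rz]

    def forward(self, img_pair):  # img_pair: (B, 6, H, W), two stacked RGB frames
        return self.head(self.encoder(img_pair))

# Hypothetical student: a small MLP drone model that predicts the same relative
# pose from flight controller data alone (IMU channels + motor feedback).
class DroneModel(nn.Module):
    def __init__(self, in_dim=10):  # e.g. 6 IMU channels + 4 motor signals
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 6),
        )

    def forward(self, fc_data):
        return self.net(fc_data)

teacher, student = PoseTeacher().eval(), DroneModel()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

def distill_step(img_pair, fc_data):
    # One training step: the frozen vision teacher provides pseudo-ground-truth
    # relative poses that supervise the drone model.
    with torch.no_grad():
        target = teacher(img_pair)
    loss = nn.functional.mse_loss(student(fc_data), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the teacher is vision-based, no external motion capture is needed to produce the training targets, and the student sees only onboard flight controller data at inference time.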

Numerically, the enhanced occlusion handling reduces the root mean squared error (RMSE) of the resulting odometry estimates by an average of 15%. Furthermore, the learned drone model remains accurate at higher flight speeds, where it even surpasses its vision-based teacher. The authors integrate the neural drone model into ROVIO, a traditional filter-based visual-inertial odometry (VIO) system, achieving superior odometry accuracy on aggressive 3D racing trajectories flown close to obstacles.
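
The summary does not spell out how the drone model enters the filter. As a rough mental model only, a learned dynamics model can serve as (or augment) the process model in a Kalman-style prediction step; the sketch below is a hypothetical, simplified illustration under that assumption, not ROVIO's actual implementation.

import numpy as np

def predict(state, P, fc_data, drone_model, dt, Q):
    # state: [x, y, z, vx, vy, vz]; P: 6x6 covariance; Q: process noise.
    # drone_model is an illustrative stand-in for the learned neural drone
    # model: a callable mapping flight controller data (IMU + motor feedback)
    # to a world-frame acceleration estimate of shape (3,).
    pos, vel = state[:3], state[3:6]
    acc = drone_model(fc_data)
    new_pos = pos + vel * dt + 0.5 * acc * dt ** 2
    new_vel = vel + acc * dt
    F = np.eye(6)
    F[:3, 3:6] = np.eye(3) * dt  # Jacobian of the constant-acceleration step
    return np.concatenate([new_pos, new_vel]), F @ P @ F.T + Q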

The implications of this research are multifaceted. Practically, it represents a significant step toward enabling high-speed, reliable flight in diverse and unstructured environments without necessitating external reference systems. Theoretically, it underlines the efficacy of self-supervised learning approaches in compensating for inherent limitations in visual odometry, such as scale ambiguity and sensitivity to motion blur and occlusions. This work opens avenues for further exploration into self-supervised learning across other areas of autonomous navigation, emphasizing the potential for deep neural networks to enhance sensory fusion techniques.

Future developments may focus on extending the approach to dynamic environments, where the additional frame used in the paper's three-frame (3F) occlusion-handling method could also mitigate the impact of moving objects. Moreover, combining reinforcement learning with these self-supervised techniques could yield robust solutions that adaptively improve navigation performance under varying conditions.
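
For intuition on why an additional frame helps with occlusions, the sketch below shows a common multi-frame device from self-supervised depth and pose training: a per-pixel minimum reprojection error (popularized by Monodepth2). It illustrates the general idea only and is not claimed to be the paper's exact 3F formulation.

import torch

def min_reprojection_loss(target, warped_sources):
    # target:         (B, 3, H, W) reference image.
    # warped_sources: list of (B, 3, H, W) source frames warped into the
    #                 target view using predicted depth and relative pose.
    # A pixel occluded in one source frame often remains visible in another,
    # so taking the per-pixel minimum discounts occlusion artefacts.
    errors = torch.stack(
        [(target - w).abs().mean(dim=1) for w in warped_sources], dim=0
    )  # (S, B, H, W) per-source photometric errors
    per_pixel_min, _ = errors.min(dim=0)  # best-matching source per pixel
    return per_pixel_min.mean()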

In conclusion, this paper offers a promising direction in the pursuit of scalable, efficient, and robust autonomous drone navigation systems, emphasizing the pivotal role of self-supervised neural models in overcoming traditional challenges associated with monocular vision-based odometry.
