
Vision-based indoor localization of nano drones in controlled environment with its applications (2412.08757v1)

Published 11 Dec 2024 in cs.RO

Abstract: Navigating unmanned aerial vehicles in environments where GPS signals are unavailable poses a compelling and intricate challenge. This challenge is further heightened when dealing with Nano Aerial Vehicles (NAVs) due to their compact size, payload restrictions, and computational capabilities. This paper proposes an approach for localization using off-board computing, an off-board monocular camera, and modified open-source algorithms. The proposed method uses three parallel proportional-integral-derivative controllers on the off-board computer to provide velocity corrections via wireless communication, stabilizing the NAV in a custom-controlled environment. Featuring a 3.1cm localization error and a modest setup cost of 50 USD, this approach proves optimal for environments where cost considerations are paramount. It is especially well-suited for applications like teaching drone control in academic institutions, where the specified error margin is deemed acceptable. Various applications are designed to validate the proposed technique, such as landing the NAV on a moving ground vehicle, path planning in a 3D space, and localizing multi-NAVs. The created package is openly available at https://github.com/simmubhangu/eyantra_drone to foster research in this field.

Summary

  • The paper introduces a cost-effective vision-based localization using a modified WhyCon system to precisely navigate nano drones indoors.
  • Experimental results demonstrate localization accuracy within 3.1 cm and a latency of 0.341 s, outperforming traditional fiducial marker systems.
  • Integration with multi-agent PID control enables practical applications such as autonomous landing, obstacle-free path planning, and coordinated drone formations.

Vision-Based Indoor Localization of Nano Drones in Controlled Environments with Its Applications: A Comprehensive Overview

The paper presents an approach to localizing Nano Aerial Vehicles (NAVs) in indoor environments where GPS-based navigation is unavailable. The primary focus is cost-effective, efficient vision-based localization that works within the constraints of NAVs: limited payload capacity and limited onboard computational power.

Methodology and System Design

The authors propose a modified WhyCon system for visual localization, emphasizing its affordability and simplicity compared to other fiducial marker systems such as AprilTag or ArUco. WhyCon's processing speed, reported to be up to 100 times faster than these alternatives, is a significant advantage in settings where computational resources must be minimized. The localization system is coupled with a multi-agent control framework that runs parallel PID controllers to stabilize the NAVs, providing fine control over roll, pitch, and throttle and holding the drones' positions with minimal steady-state error.
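The control structure described above (one PID loop per axis, each mapping a position error to a velocity correction) can be sketched as follows. This is a minimal illustration, not the paper's released package; the gains, class names, and axis mappings are assumptions for demonstration.

```python
# Hypothetical sketch of three parallel PID loops: one per axis
# (x -> roll, y -> pitch, z -> throttle). Gains are illustrative,
# not taken from the paper's package.
from dataclasses import dataclass


@dataclass
class PID:
    kp: float
    ki: float
    kd: float
    integral: float = 0.0
    prev_error: float = 0.0

    def update(self, error: float, dt: float) -> float:
        """One control tick: accumulate integral, differentiate error."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# One controller per axis; each converts a position error (m) into a
# velocity correction sent to the NAV over the wireless link.
roll_pid = PID(kp=0.8, ki=0.01, kd=0.3)
pitch_pid = PID(kp=0.8, ki=0.01, kd=0.3)
throttle_pid = PID(kp=1.2, ki=0.02, kd=0.4)


def velocity_corrections(target, measured, dt):
    """Return (roll, pitch, throttle) corrections for one control tick."""
    ex, ey, ez = (t - m for t, m in zip(target, measured))
    return (roll_pid.update(ex, dt),
            pitch_pid.update(ey, dt),
            throttle_pid.update(ez, dt))
```

In the paper's setup these corrections are computed off-board and streamed to the NAV wirelessly, which is what makes the reported 0.341 s loop latency a relevant design constraint.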

Experimental Setup and Results

The experimental setup comprises an overhead camera and customized control algorithms that track the NAVs via WhyCon markers, providing real-time feedback for navigation. The results demonstrate localization accuracy within 3.1 cm and a system latency of approximately 0.341 s. This precision is sufficient for academic teaching, educational testing, and other scenarios where cost constraints are critical.
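The core geometric idea behind marker-based overhead localization can be shown with a simple pinhole-camera sketch. WhyCon itself estimates distance from the apparent size of its circular pattern; the function below mirrors that idea under assumed intrinsics and marker size (all numeric values here are placeholders, not the paper's calibration).

```python
# Hedged sketch: recovering a marker's 3D position in the camera frame
# from an overhead pinhole camera. Intrinsics (fx, fy, cx, cy) and the
# marker diameter are illustrative assumptions.
def marker_position(u, v, pixel_diameter,
                    fx=600.0, fy=600.0, cx=320.0, cy=240.0,
                    marker_diameter_m=0.122):
    """Map a marker's pixel centre (u, v) and apparent pixel diameter
    to an (x, y, z) position in metres, camera frame."""
    # Depth from apparent size: a marker of known physical diameter
    # projects to fewer pixels the farther away it is.
    z = fx * marker_diameter_m / pixel_diameter
    # Back-project the pixel centre through the pinhole model.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z
```

The position estimate from each frame feeds the PID loops as the "measured" input, closing the vision-based control loop.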

The authors validate the system through several practical applications:

  • Autonomous Landing on Moving Platforms: The overhead camera's fixed field of view keeps both the NAV and the moving ground vehicle in frame, improving landing precision on mobile targets.
  • Path Planning and Navigation: Implementations utilize the Open Motion Planning Library (OMPL) with algorithms like RRT* to navigate through obstacles, illustrating the system’s capability to deliver collision-free paths in a 3D space.
  • Multi-Drone Control: Demonstrations of coordinated drone movements, including circular and square formations, further highlight the system's potential for orchestrated drone swarm applications.
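The paper's path planning relies on OMPL's RRT*; as a simplified, self-contained stand-in, the sketch below grows a basic RRT in a 3D box (no rewiring step, so plain RRT rather than RRT*, and no obstacle checking). All bounds, step sizes, and tolerances are illustrative assumptions.

```python
# Minimal RRT in a 3D box: grow a tree by stepping toward random
# samples until a new node lands within goal_tol of the goal.
# Simplified stand-in for OMPL's RRT*; omits rewiring and obstacles.
import math
import random


def rrt(start, goal, bounds=((0, 2), (0, 2), (0, 2)),
        step=0.1, goal_tol=0.15, max_iters=5000, seed=0):
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        sample = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        # Extend from the tree node nearest to the sample.
        i = min(range(len(nodes)),
                key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d <= step:
            new = sample
        else:
            new = tuple(n + step * (s - n) / d
                        for n, s in zip(near, sample))
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk parent pointers back to the start node.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no path found within the iteration budget
```

In the actual system, waypoints from the planner would be fed one at a time as setpoints to the PID stabilization loop.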

Implications and Future Directions

The integration of robust visual localization with low-cost components such as NAVs and WhyCon markers provides a scalable and accessible solution for academic and research settings. The research showcases practical implementations that can democratize access to advanced drone control techniques, especially in educational environments where budgetary constraints may otherwise limit exploratory and teaching opportunities.

Given the successful implementation in controlled environments, future research could explore the scalability of this system with multiple monocular cameras to cover larger areas, thereby extending the range and applicability of this solution. Advancements could also include more sophisticated algorithms for real-time adjustments to the system's parameters, potentially leading to enhanced robustness and adaptability in diverse and dynamic indoor environments.

The research lays the groundwork for further development of indoor NAV systems, setting the stage for applications such as warehouse automation, search and rescue operations, and intricate indoor navigation tasks where traditional GPS-based systems fall short.