
Distilling Tiny and Ultra-fast Deep Neural Networks for Autonomous Navigation on Nano-UAVs (2407.12675v1)

Published 17 Jul 2024 in eess.IV, cs.SY, and eess.SY

Abstract: Nano-sized unmanned aerial vehicles (UAVs) are ideal candidates for flying Internet-of-Things smart sensors to collect information in narrow spaces. This requires ultra-fast navigation under very tight memory/computation constraints. The PULP-Dronet convolutional neural network (CNN) enables autonomous navigation running aboard a nano-UAV at 19 frame/s, at the cost of a large memory footprint of 320 kB -- and with drone control in complex scenarios hindered by the disjoint training of collision avoidance and steering capabilities. In this work, we distill a novel family of CNNs with better capabilities than PULP-Dronet, but memory footprint reduced by up to 168x (down to 2.9 kB), achieving an inference rate of up to 139 frame/s; we collect a new open-source unified collision/steering 66 k images dataset for more robust navigation; and we perform a thorough in-field analysis of both PULP-Dronet and our tiny CNNs running on a commercially available nano-UAV. Our tiniest CNN, called Tiny-PULP-Dronet v3, navigates with a 100% success rate a challenging and never-seen-before path, composed of a narrow obstacle-populated corridor and a 180° turn, at a maximum target speed of 0.5 m/s. In the same scenario, the SoA PULP-Dronet consistently fails despite having 168x more parameters.


Summary

  • The paper introduces efficient CNN models achieving 139 fps and up to 168× memory reduction on nano-UAV hardware.
  • It employs 8-bit quantization and advanced CNN modules to enable ultra-low-power, real-time autonomous navigation.
  • Field tests confirm robust collision avoidance with 100% success on challenging U-shaped paths.

Overview of the Paper on Tiny and Ultra-fast Deep Neural Networks for Autonomous Navigation on Nano-UAVs

The paper, "Distilling Tiny and Ultra-fast Deep Neural Networks for Autonomous Navigation on Nano-UAVs" by L. Lamberti et al., presents significant advancements in the application of convolutional neural networks (CNNs) for autonomous navigation on nano-sized unmanned aerial vehicles (UAVs). The following provides a detailed overview of the methodologies, results, and implications of this work.

Introduction and Context

The authors address the pressing need for efficient, real-time navigation solutions for nano-UAVs, which operate under stringent memory and computational constraints. The primary objective is to develop CNNs that are both compact and capable of high frame rates, suitable for deployment on limited-resource platforms like the Greenwaves Technologies (GWT) GAP8 System-on-Chip (SoC).

Contributions and Methodology

Key contributions of this paper include:

  1. Development of a New Dataset: The authors generated a novel dataset comprising 66,000 images with unified labels for collision avoidance and steering, specifically tailored for training CNNs in autonomous navigation tasks on nano-UAVs.
  2. Design of Efficient CNN Architectures: The proposed CNNs demonstrate significant reductions in memory footprint and computational complexity. The authors explored various architecture options, including residual blocks (RB), depthwise and pointwise (D+P) convolutions, and inverted residuals with linear bottlenecks (IRLB), inspired by MobileNet v1 and v2.
  3. Ultra-low-power Implementation: With the aid of advanced quantization techniques, they converted CNNs to 8-bit fixed-point representations and employed deployment tools like DORY to optimize inference on the GAP8 SoC.
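The 8-bit conversion in step 3 can be illustrated with a minimal symmetric post-training quantization sketch. This is not the paper's actual pipeline (which uses dedicated tooling such as DORY targeting the GAP8); the weight values and the per-tensor scheme here are illustrative assumptions:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = np.abs(w).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 values."""
    return q.astype(np.float32) * scale

# Illustrative weight tensor (not from the paper)
w = np.array([-0.51, -0.02, 0.0, 0.13, 0.49], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# int8 storage is 4x smaller than float32, at a bounded rounding cost
assert q.dtype == np.int8
assert np.max(np.abs(w - w_hat)) <= s / 2  # per-weight error bound
```

The 4x storage saving from int8 weights compounds with the architectural shrinkage described above to reach the reported kilobyte-scale footprints.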

Results and Performance Evaluation

Numerical Results

The experimental results highlight substantial improvements over the baseline PULP-Dronet v2:

  • Memory Efficiency: The distilled CNNs reduce memory footprint by up to 168×, with the smallest model, Tiny-PULP-Dronet v3, requiring only 2.9 kB of memory.
  • Inference Speed: The Tiny-PULP-Dronet v3 achieves a maximum inference rate of 139 fps, a 7.3× increase compared to the 19 fps of PULP-Dronet v2.
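Much of this memory reduction comes from swapping standard convolutions for the depthwise + pointwise (D+P) blocks mentioned earlier. A back-of-the-envelope parameter count (with hypothetical layer sizes, not the paper's exact ones) shows why:

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dw_pw_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a depthwise k x k conv followed by a 1x1 pointwise conv."""
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 32 input channels, 32 output channels, 3x3 kernel
std = conv_params(32, 32, 3)   # 9216 weights
dwp = dw_pw_params(32, 32, 3)  # 1312 weights
print(std, dwp, round(std / dwp, 1))  # roughly 7x fewer parameters
```

Stacking such blocks, shrinking channel widths, and quantizing to 8 bits together account for reductions on the order of the reported 168×.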

In-Field Testing

The field tests conducted in a controlled environment demonstrate the efficacy of the proposed models:

  • Navigation Success Rate: Tiny-PULP-Dronet v3 achieved a 100% success rate in navigating a challenging U-shaped path with static obstacles at a maximum target speed of 0.5 m/s, outperforming PULP-Dronet v2, which consistently failed in the same scenario.
  • Dynamic Obstacle Avoidance: In scenarios involving dynamic obstacles, PULP-Dronet v3 demonstrated a 60% success rate at 1.5 m/s, indicating robust dynamic obstacle avoidance capabilities.

Implications and Future Directions

Practical Implications

The proposed CNNs offer significant benefits for real-time autonomous navigation:

  • Enhanced Performance: The considerable reduction in memory and computational requirements opens avenues for deploying additional AI tasks concurrently on nano-UAVs.
  • Energy Efficiency: With energy consumption as low as 0.4 mJ per inference, the solutions are highly suitable for ultra-low-power applications, extending the operational lifetime of battery-powered UAVs.

Theoretical Implications

From a theoretical standpoint, the paper brings advanced deep learning techniques to bear on extreme resource constraints:

  • Modularity and Scalability: The modular approach to CNN architecture design, using blocks from MobileNet v1 and v2, provides a versatile framework for other resource-constrained applications.
  • Dataset Contributions: The new dataset tailored for nano-UAV navigation fosters further research and development in this domain, addressing the gaps left by previous datasets.

Conclusion

The research by L. Lamberti et al. presents a significant advancement in the field of autonomous nano-UAV navigation. The introduction of highly compact and efficient CNN architectures, combined with a robust dataset, sets a new benchmark in this area. Future work could explore further quantization methods and more sophisticated architectures to push the boundaries of autonomous navigation on resource-constrained devices. This paper offers a comprehensive foundation for both practical deployments and future theoretical explorations in autonomous UAV systems.
