
HybridNets: End-to-End Perception Network (2203.09035v1)

Published 17 Mar 2022 in cs.CV and cs.LG

Abstract: End-to-end Network has become increasingly important in multi-tasking. One prominent example of this is the growing significance of a driving perception system in autonomous driving. This paper systematically studies an end-to-end perception network for multi-tasking and proposes several key optimizations to improve accuracy. First, the paper proposes efficient segmentation head and box/class prediction networks based on weighted bidirectional feature network. Second, the paper proposes automatically customized anchor for each level in the weighted bidirectional feature network. Third, the paper proposes an efficient training loss function and training strategy to balance and optimize network. Based on these optimizations, we have developed an end-to-end perception network to perform multi-tasking, including traffic object detection, drivable area segmentation and lane detection simultaneously, called HybridNets, which achieves better accuracy than prior art. In particular, HybridNets achieves 77.3 mean Average Precision on Berkeley DeepDrive Dataset, outperforms lane detection with 31.6 mean Intersection Over Union with 12.83 million parameters and 15.6 billion floating-point operations. In addition, it can perform visual perception tasks in real-time and thus is a practical and accurate solution to the multi-tasking problem. Code is available at https://github.com/datvuthanh/HybridNets.

Citations (72)

Summary

  • The paper introduces an efficient end-to-end network that unifies traffic object detection, drivable area segmentation, and lane detection.
  • It employs an EfficientNet-B3 backbone paired with a Bidirectional Feature Pyramid Network to boost multi-scale feature fusion and improve detection accuracy.
  • The optimized training strategy, featuring automated anchor customization and combined loss functions, achieves notable mAP and mIoU results on BDD100K.

HybridNets: End-to-End Perception Network in Multi-Tasking

HybridNets presents a sophisticated end-to-end network architecture designed for multi-tasking in autonomous driving. Recognizing the computational constraints inherent in embedded systems, this paper explores the unification of multiple perception tasks—traffic object detection, drivable area segmentation, and lane detection—into a singular, efficient network. The proposed architecture exemplifies the use of advanced components including an EfficientNet backbone and Bidirectional Feature Pyramid Network (BiFPN) to achieve superior performance.
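The BiFPN referenced here fuses features from multiple pyramid levels using learnable, normalized weights rather than a plain sum. The paper builds on the "fast normalized fusion" introduced with EfficientDet; a minimal NumPy sketch of that fusion step (shapes and the `eps` value are illustrative, not taken from the paper):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """EfficientDet-style weighted fusion: relu(w_i) / (sum relu(w) + eps).

    features: list of arrays with identical shape (one per input level)
    weights:  array of learnable scalars, one per input feature
    """
    w = np.maximum(weights, 0.0)      # ReLU keeps each weight non-negative
    w = w / (w.sum() + eps)           # normalize without softmax's exp cost
    return sum(wi * f for wi, f in zip(w, features))

# Example: fuse two feature maps with equal learned weights
p_in = [np.ones((2, 2)), np.zeros((2, 2))]
fused = fast_normalized_fusion(p_in, np.array([1.0, 1.0]))
```

Compared with a softmax over the weights, this normalization is cheaper to compute while still bounding each weight's contribution to [0, 1], which is why EfficientDet (and networks built on it) favor it for repeated fusion nodes.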

Key Contributions

This work advances the field by introducing several technical innovations:

  1. Efficient Segmentation and Prediction Networks: Using a weighted bidirectional feature network, the authors develop an efficient segmentation head and box/class prediction networks.
  2. Automated Anchor Customization: A unique contribution is the automatically customized anchor generation for each level within the weighted bidirectional feature network, improving detection accuracy across diverse datasets.
  3. Optimized Training Approach: The authors propose a new training loss function and strategy that optimizes the network for multi-tasking scenarios, balancing the performance across diverse tasks.
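On point 2, the paper does not spell out its anchor-customization procedure in this summary; a common approach, used since YOLOv2, is to cluster ground-truth box sizes with k-means under a 1 − IoU distance and assign the resulting anchors to pyramid levels by scale. The sketch below illustrates that generic technique (function names and parameters are my own, not the paper's):

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between (w, h) pairs, treating boxes as sharing a top-left corner."""
    w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
        + (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster (w, h) pairs with 1 - IoU as the distance metric."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Nearest centroid = highest IoU
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)
        new = np.array([
            boxes[assign == i].mean(axis=0) if np.any(assign == i) else centroids[i]
            for i in range(k)
        ])
        if np.allclose(new, centroids):
            break
        centroids = new
    # Sort by area so anchors map to pyramid levels small -> large
    return centroids[np.argsort(centroids[:, 0] * centroids[:, 1])]
```

Sorting the anchors by area makes it straightforward to distribute them across the feature levels of the bidirectional feature network, smallest anchors to the highest-resolution level.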

Experimental Results

HybridNets shows outstanding results on the Berkeley DeepDrive Dataset (BDD100K). The network achieves a mean Average Precision (mAP) of 77.3% for traffic object detection, and a mean Intersection Over Union (mIoU) of 31.6% for lane detection with 12.83 million parameters and 15.6 billion FLOPs. By merging object detection, drivable area segmentation, and lane detection, HybridNets demonstrates superior capabilities in handling real-time visual perception tasks compared to existing multi-task networks like YOLOP, offering improvements in recall and mAP metrics.

Methodology

The architecture of HybridNets employs a shared encoder with EfficientNet-B3 as the backbone and a BiFPN to facilitate multi-scale feature fusion. This pairing enhances the effective utilization of spatial features across tasks. The detection head incorporates k-means initialized anchors, which improve the precision of bounding-box predictions. The segmentation head leverages both Focal and Tversky losses to balance pixel classification, addressing the class imbalance common to small-object segmentation.
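The exact loss weighting used in the paper is not reproduced in this summary; a minimal NumPy sketch of the two segmentation-loss components and their combination follows (the `alpha`, `beta`, and `gamma` values shown are common defaults from the Tversky- and Focal-loss literature, not confirmed values from HybridNets):

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss: generalizes Dice by weighting FPs (alpha) vs FNs (beta).

    beta > alpha penalizes false negatives more, which helps thin
    structures such as lane markings.
    """
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1 - target))
    fn = np.sum((1 - pred) * target)
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    """Focal loss: down-weights easy pixels to focus training on hard ones."""
    pred = np.clip(pred, eps, 1 - eps)
    pt = np.where(target == 1, pred, 1 - pred)  # prob of the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def seg_loss(pred, target, w_focal=1.0, w_tversky=1.0):
    """Weighted combination of the two terms (weights are illustrative)."""
    return w_focal * focal_loss(pred, target) + w_tversky * tversky_loss(pred, target)
```

Focal loss addresses the easy/hard pixel imbalance while Tversky loss addresses the foreground/background area imbalance, so the two are complementary for dense prediction on scenes dominated by background.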

Implications and Future Work

The implications of HybridNets are substantial for autonomous driving, where real-time processing on constrained hardware is critical. The architecture's reduced FLOPs while maintaining high accuracy positions it as a robust solution for embedded systems demanding real-time analysis.

Future directions may include extending HybridNets to tackle more complex perception challenges such as 3-D object detection and integrating additional perceptual tasks while maintaining or improving computational efficiency. Such expansions would enhance the utility of HybridNets in increasingly complex driving scenarios, further pushing the boundaries of what can be achieved with unified perception networks.

This research underscores the importance of architectural optimization in developing efficient, scalable, and practically deployable AI systems in the domain of autonomous driving.
