- The paper introduces an efficient end-to-end network that unifies traffic object detection, drivable area segmentation, and lane detection.
- It employs an EfficientNet-B3 backbone paired with a Bidirectional Feature Pyramid Network to boost multi-scale feature fusion and improve detection accuracy.
- The optimized training strategy, featuring automated anchor customization and combined loss functions, achieves notable mAP and mIoU results on BDD100K.
HybridNets: End-to-End Perception Network
HybridNets is an end-to-end network architecture for multi-task visual perception in autonomous driving. Recognizing the computational constraints of embedded systems, the paper unifies multiple perception tasks (traffic object detection, drivable area segmentation, and lane detection) into a single, efficient network. The architecture pairs an EfficientNet backbone with a Bidirectional Feature Pyramid Network (BiFPN) to deliver strong accuracy at low computational cost.
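To make the shared-encoder design concrete, here is a minimal PyTorch sketch of the layout described above: one backbone feeding a fusion neck, which in turn feeds parallel detection and segmentation heads. Every module is a toy placeholder standing in for its real counterpart (EfficientNet-B3, BiFPN, and the paper's heads); nothing here is the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiTaskPerceptionNet(nn.Module):
    """Structural sketch of the HybridNets layout: one shared encoder,
    a fusion neck, and parallel detection / segmentation heads.
    All modules are tiny placeholders, not the authors' code."""

    def __init__(self, num_classes: int = 10, num_seg_classes: int = 3):
        super().__init__()
        # Placeholder for the EfficientNet-B3 feature extractor (stride 4 here).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Placeholder for the BiFPN weighted bidirectional fusion neck.
        self.neck = nn.Conv2d(128, 128, 3, padding=1)
        # Detection head: per-location box regression (4) plus class scores.
        self.det_head = nn.Conv2d(128, 4 + num_classes, 1)
        # Segmentation head: per-pixel logits, upsampled back to input size.
        self.seg_head = nn.Sequential(
            nn.Conv2d(128, num_seg_classes, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, images: torch.Tensor):
        feats = self.encoder(images)   # shared features for every task
        fused = self.neck(feats)       # stands in for multi-scale BiFPN fusion
        return self.det_head(fused), self.seg_head(fused)

# Usage: a single forward pass serves both tasks.
det_out, seg_out = MultiTaskPerceptionNet()(torch.randn(1, 3, 256, 256))
```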
Key Contributions
This work advances the field by introducing several technical innovations:
- Efficient Segmentation and Prediction Networks: Building on a weighted bidirectional feature network, the authors develop an efficient segmentation head and box/class prediction networks.
- Automated Anchor Customization: A distinctive contribution is the automatic generation of customized anchors for each level of the weighted bidirectional feature network, improving detection accuracy across diverse datasets; a sketch of the underlying clustering recipe follows this list.
- Optimized Training Approach: The authors propose a new loss formulation and training strategy that balances the network's performance across its multiple tasks.
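The anchors are derived by clustering ground-truth box shapes rather than hand-tuning them. The sketch below shows the standard IoU-based k-means recipe commonly used for this purpose in anchor-based detectors; the function names and hyperparameters are illustrative, and the authors' exact routine may differ.

```python
import numpy as np

def iou_wh(boxes: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between (N,2) box widths/heights and (K,2) anchor widths/heights,
    with all boxes aligned at a shared origin."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None] +
             (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
    return inter / union

def kmeans_anchors(boxes: np.ndarray, k: int = 9, iters: int = 100,
                   seed: int = 0) -> np.ndarray:
    """Cluster ground-truth box sizes into k anchors using IoU as similarity."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the anchor it overlaps most.
        assign = iou_wh(boxes, anchors).argmax(axis=1)
        # Recompute each anchor as the mean shape of its assigned boxes.
        new = np.array([boxes[assign == i].mean(axis=0)
                        if np.any(assign == i) else anchors[i]
                        for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort by area
```

The resulting anchor shapes can then be distributed across the pyramid levels by scale, so each level of the feature network receives anchors matched to the object sizes it is responsible for.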
Experimental Results
HybridNets reports strong results on the Berkeley DeepDrive dataset (BDD100K): a mean Average Precision (mAP) of 77.3% for traffic object detection and a mean Intersection over Union (mIoU) of 31.6% for lane detection, using only 12.83 million parameters and 15.6 billion FLOPs. By merging object detection, drivable area segmentation, and lane detection in a single network, HybridNets outperforms existing multi-task networks such as YOLOP on recall and mAP while remaining suitable for real-time visual perception.
Methodology
The architecture of HybridNets employs a shared encoder with EfficientNet-B3 as the backbone and a BiFPN neck for multi-scale feature fusion, letting all tasks draw on the same spatial features. The detection head uses anchors initialized by k-means clustering, which improves the precision of bounding box predictions. The segmentation head combines Focal and Tversky losses to balance pixel classification, addressing the class imbalance common in small-object segmentation.
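Below is a minimal PyTorch sketch of such a hybrid segmentation loss. The Focal/Tversky combination follows the paper's description, but the binary formulation, the alpha/beta split, and the loss weights are assumptions for illustration; the authors' multi-class implementation may differ.

```python
import torch
import torch.nn.functional as F

def tversky_loss(probs, targets, alpha=0.7, beta=0.3, eps=1e-6):
    """Tversky loss generalizes Dice: alpha weights false negatives and
    beta weights false positives (these values are illustrative)."""
    tp = (probs * targets).sum()
    fn = ((1.0 - probs) * targets).sum()
    fp = (probs * (1.0 - targets)).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def focal_loss(logits, targets, gamma=2.0):
    """Binary focal loss: down-weights pixels the model already classifies well."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    pt = torch.exp(-bce)  # model's probability for the true label
    return ((1.0 - pt) ** gamma * bce).mean()

def hybrid_seg_loss(logits, targets, w_focal=1.0, w_tversky=1.0):
    """Weighted sum of the two terms, as one way to combine them."""
    probs = torch.sigmoid(logits)
    return (w_focal * focal_loss(logits, targets) +
            w_tversky * tversky_loss(probs, targets))
```

Pairing the two terms is a common remedy for imbalance: the Tversky term trades off false negatives against false positives at the region level, while the focal term keeps the many easy background pixels from dominating the gradient.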
Implications and Future Work
The implications of HybridNets are substantial for autonomous driving, where real-time processing on constrained hardware is critical. Its low FLOP count, achieved without sacrificing accuracy, makes it a strong candidate for embedded systems that demand real-time analysis.
Future directions may include extending HybridNets to more complex perception challenges such as 3D object detection, and integrating additional perceptual tasks while maintaining or improving computational efficiency. Such extensions would broaden the utility of HybridNets in increasingly complex driving scenarios and push unified perception networks further.
This research underscores the importance of architectural optimization in developing efficient, scalable, and practically deployable AI systems in the domain of autonomous driving.