
Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather (1902.08913v3)

Published 24 Feb 2019 in cs.CV

Abstract: The fusion of multimodal sensor streams, such as camera, lidar, and radar measurements, plays a critical role in object detection for autonomous vehicles, which base their decision making on these inputs. While existing methods exploit redundant information in good environmental conditions, they fail in adverse weather where the sensory streams can be asymmetrically distorted. These rare "edge-case" scenarios are not represented in available datasets, and existing fusion architectures are not designed to handle them. To address this challenge we present a novel multimodal dataset acquired in over 10,000km of driving in northern Europe. Although this dataset is the first large multimodal dataset in adverse weather, with 100k labels for lidar, camera, radar, and gated NIR sensors, it does not facilitate training as extreme weather is rare. To this end, we present a deep fusion network for robust fusion without a large corpus of labeled training data covering all asymmetric distortions. Departing from proposal-level fusion, we propose a single-shot model that adaptively fuses features, driven by measurement entropy. We validate the proposed method, trained on clean data, on our extensive validation dataset. Code and data are available here https://github.com/princeton-computational-imaging/SeeingThroughFog.

Deep Multimodal Sensor Fusion for Autonomous Vehicles in Adverse Weather

The fusion of multimodal sensor streams, including camera, lidar, and radar measurements, is crucial for enhancing the object detection capabilities of autonomous vehicles. This paper introduces a deep multimodal sensor fusion method that addresses the challenge of adverse weather conditions, which often lead to asymmetrically distorted sensor streams. The researchers propose a novel deep fusion network that adapts to these distortions by leveraging measurement entropy, thus allowing robust fusion in the absence of a comprehensive labeled training dataset for extreme weather scenarios.
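The core idea can be illustrated with a minimal sketch, not the authors' implementation: the local Shannon entropy of each projected sensor measurement is computed and used to scale that sensor's feature maps before the streams are concatenated, so that weather-degraded, low-entropy streams contribute less to the fused representation. All shapes, function names, and the patch-histogram entropy estimator below are illustrative assumptions.

```python
import math

import torch
import torch.nn.functional as F


def local_entropy(measurement, patch=8, bins=16, eps=1e-8):
    """Per-patch Shannon entropy of a projected sensor measurement (illustrative).

    measurement: (B, 1, H, W) tensor with values normalized to [0, 1];
    H and W are assumed divisible by `patch`.
    Returns an entropy map of shape (B, 1, H // patch, W // patch), scaled to [0, 1].
    """
    B, _, H, W = measurement.shape
    # Split the image into non-overlapping patch x patch blocks.
    blocks = measurement.unfold(2, patch, patch).unfold(3, patch, patch)
    blocks = blocks.contiguous().view(B, 1, H // patch, W // patch, -1)
    # Hard histogram over `bins` intensity levels per block.
    idx = (blocks * (bins - 1)).round().long().clamp(0, bins - 1)
    hist = F.one_hot(idx, bins).float().mean(dim=-2)            # (B, 1, h, w, bins)
    p = hist / (hist.sum(dim=-1, keepdim=True) + eps)
    entropy = -(p * (p + eps).log()).sum(dim=-1)                # (B, 1, h, w)
    return entropy / math.log(bins)                             # normalize to [0, 1]


def entropy_steered_fusion(features, measurements):
    """Scale each sensor's feature map by its resized entropy map, then concatenate.

    features:     list of (B, C_i, H_f, W_f) per-sensor feature maps (same H_f, W_f).
    measurements: list of (B, 1, H, W) raw projected measurements, in the same order.
    """
    fused = []
    for feat, meas in zip(features, measurements):
        ent = local_entropy(meas)
        ent = F.interpolate(ent, size=feat.shape[-2:], mode="bilinear",
                            align_corners=False)
        fused.append(feat * ent)   # low-entropy (weather-degraded) streams are attenuated
    return torch.cat(fused, dim=1)
```

In this reading, fog or snow that washes out a lidar or camera measurement lowers its local entropy, which in turn suppresses that stream's features wherever the measurement carries little information.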

Key Contributions

  1. Introduction of a Novel Dataset: The research introduces a unique multimodal dataset, obtained from over 10,000 km of driving in northern Europe, specifically designed to represent adverse weather conditions. This dataset is notable for its inclusion of 100k labels covering lidar, camera, radar, and gated NIR sensors, which is a significant advancement over existing datasets biased towards good weather conditions.
  2. Deep Fusion Network: A key innovation is the proposed deep fusion network, which performs adaptive, single-shot feature fusion steered by measurement entropy. This approach departs from traditional proposal-level fusion and handles asymmetric sensor distortions robustly by adapting to the varying reliability of each sensor stream (an illustrative architectural sketch follows this list).
  3. Performance Validation: The proposed method is validated on a comprehensive validation set, demonstrating a performance improvement of over 8% AP across light fog, dense fog, snow, and clear conditions. Crucially, the network achieves this while being trained only on clean data, without requiring training data that covers all possible asymmetric distortions.
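To make the contrast with proposal-level fusion concrete, the sketch below shows what a single-shot, feature-level fusion detector could look like: one lightweight encoder per sensor, entropy-gated feature maps concatenated once, and a single shared detection head. The architecture, channel counts, and anchor/class numbers are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SingleShotFusionDetector(nn.Module):
    """Illustrative single-shot, feature-level fusion detector (not the paper's
    exact architecture). Each sensor stream gets its own encoder; the fused,
    entropy-gated feature map feeds one shared detection head, rather than
    merging boxes from independent per-sensor detectors (proposal-level fusion)."""

    def __init__(self, in_channels=(3, 1, 1), feat_channels=64,
                 num_anchors=6, num_classes=4):
        super().__init__()
        # One lightweight encoder per sensor stream (e.g. camera, lidar, radar).
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(c, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
            )
            for c in in_channels
        ])
        fused_channels = feat_channels * len(in_channels)
        # Shared single-shot detection head operating on the fused feature map.
        self.cls_head = nn.Conv2d(fused_channels, num_anchors * num_classes, 3, padding=1)
        self.box_head = nn.Conv2d(fused_channels, num_anchors * 4, 3, padding=1)

    def forward(self, streams, entropy_maps):
        feats = []
        for enc, x, ent in zip(self.encoders, streams, entropy_maps):
            f = enc(x)
            # Entropy gating: attenuate streams whose measurements carry little information.
            ent = F.interpolate(ent, size=f.shape[-2:], mode="bilinear",
                                align_corners=False)
            feats.append(f * ent)
        fused = torch.cat(feats, dim=1)
        return self.cls_head(fused), self.box_head(fused)
```

Because the gating happens at the feature level and in a single shot, a stream that is unreliable in one region of the scene can still contribute in another, which is harder to express when fusion only merges whole detection proposals.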

Implications and Future Directions

This work has significant theoretical and practical implications for the field of autonomous systems. The adaptive fusion strategy is particularly noteworthy because it allows an autonomous vehicle to maintain robust detection capabilities in conditions that are not well represented in training datasets. This adaptability is essential for achieving reliable autonomy in diverse geographical regions and weather conditions, potentially reducing the risk of accidents caused by sensor failure or misinterpretation.

In future research, it would be valuable to explore end-to-end models that integrate both fusion and decision-making processes, possibly incorporating failure detection mechanisms and adaptive sensor control strategies. Such developments could further enhance the safety and robustness of autonomous vehicles by dynamically adjusting sensor parameters based on the real-time analysis of weather-induced measurement uncertainties.

Conclusion

The paper presents a rigorous approach to enhancing sensor fusion for autonomous vehicles under adverse weather conditions. By focusing on the practical challenges posed by asymmetric sensor distortions and addressing these with a deep entropy-driven fusion network, the researchers deliver a robust solution that could significantly improve the reliability of autonomous systems. This research not only contributes to the development of multimodal sensor fusion techniques but also sets the stage for future exploration into more adaptive and intelligent autonomous vehicle technologies.

Authors (7)
  1. Mario Bijelic (24 papers)
  2. Tobias Gruber (9 papers)
  3. Fahim Mannan (11 papers)
  4. Florian Kraus (11 papers)
  5. Werner Ritter (15 papers)
  6. Klaus Dietmayer (106 papers)
  7. Felix Heide (72 papers)
Citations (38)