
NTIRE 2025 Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results (2504.12711v2)

Published 17 Apr 2025 in cs.CV, cs.AI, and eess.IV

Abstract: This paper reviews the NTIRE 2025 Challenge on Day and Night Raindrop Removal for Dual-Focused Images. The challenge received a wide range of impressive solutions, developed and evaluated on our collected real-world Raindrop Clarity dataset. Unlike existing deraining datasets, Raindrop Clarity is more diverse and challenging in its degradation types and contents, covering day raindrop-focused, day background-focused, night raindrop-focused, and night background-focused degradations. The dataset is divided into three subsets for the competition: 14,139 images for training, 240 images for validation, and 731 images for testing. The primary objective of the challenge is to establish a new and powerful benchmark for removing raindrops under varying lighting and focus conditions. A total of 361 participants registered for the competition, and 32 teams submitted valid solutions and fact sheets for the final testing phase. These submissions achieved state-of-the-art (SOTA) performance on the Raindrop Clarity dataset. The project can be found at https://lixinustc.github.io/CVPR-NTIRE2025-RainDrop-Competition.github.io/.

Summary

A Comprehensive Analysis of the NTIRE 2025 Challenge on Day and Night Raindrop Removal for Dual-Focused Images

The NTIRE 2025 Challenge addresses day and night raindrop removal for dual-focused images, an important image restoration task aimed at improving image clarity for applications such as autonomous driving and video surveillance. The challenge advances image deraining by confronting two coupled difficulties: raindrop artifacts and focus-related degradations under diverse lighting conditions. Participants developed and evaluated their methods on the new Raindrop Clarity dataset, which establishes a fresh benchmark for this complex restoration problem.

Dataset and Benchmarking

The Raindrop Clarity dataset, developed specifically for this competition, covers a wide range of degradation scenarios: day raindrop-focused, day background-focused, night raindrop-focused, and night background-focused conditions. It comprises 14,139 images for training, 240 for validation, and 731 for testing, providing a diverse and robust foundation for developing and evaluating deraining algorithms. Model performance was measured with PSNR, SSIM, and LPIPS, jointly assessing restoration fidelity, structural similarity, and perceptual quality.
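As a concrete reference for the fidelity metric, a minimal PSNR implementation is sketched below. This follows the standard definition (10·log10 of peak power over mean squared error); it is illustrative only and not the challenge's exact evaluation script, which may differ in color space or cropping conventions.

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two images, in dB (higher is better)."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM additionally compares local luminance, contrast, and structure statistics, while LPIPS measures distance in a learned deep-feature space; both are typically computed with established library implementations rather than from scratch.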

Methods and Approaches

The research community’s response to the challenge was strong: 361 participants developed a variety of approaches that advanced the state of the art in raindrop removal for dual-focused images, and 32 teams submitted valid solutions for final evaluation. The strategies fall broadly into single-model architectures, such as enhanced variants of Restormer and diffusion-based models, and hybrid approaches that combine restoration backbones with adaptive attention mechanisms. Notable methodologies include:

  • STRRNet by Miracle Team: Introduced a semantics-guided two-stage framework aimed at optimizing raindrop removal performance across diverse lighting conditions by leveraging a text-embedded guidance module.
  • Restormer-based Methods: Several teams utilized variations of the Restormer model, emphasizing global-local feature aggregation to enhance clarity while preserving delicate scene details.
  • Transformer-Based Techniques: Models like Histoformer and MSDT leveraged transformer architectures to capture long-range dependencies effectively, addressing both raindrop occlusions and defocus-induced blurring.
  • Data Augmentation and Ensemble Methods: Techniques such as multi-scale data augmentation and ensemble learning were applied to improve robustness and generalization, highlighting the importance of diverse data processing strategies.
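Ensembling in restoration challenges is often realized as geometric self-ensembling (x8 test-time augmentation): run the model on flipped and rotated copies of the input, invert each transform on the output, and average. The sketch below is illustrative only and not any specific team's code; `model` is a placeholder for a trained restoration network.

```python
import numpy as np

def self_ensemble(model, img):
    """x8 geometric self-ensemble: average predictions over rotations and flips."""
    outs = []
    for k in range(4):              # the four 90-degree rotations
        for flip in (False, True):  # with and without a horizontal flip
            x = np.rot90(img, k)
            if flip:
                x = np.fliplr(x)
            y = model(x)
            if flip:
                y = np.fliplr(y)    # undo the flip on the output
            outs.append(np.rot90(y, -k))  # undo the rotation
    return np.mean(outs, axis=0)
```

Because every transform is inverted before averaging, an identity model recovers the input exactly; for a real network, the averaging smooths transform-dependent errors at the cost of eight forward passes.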

These methodologies collectively addressed the multi-dimensional challenges of this task, offering insights into both specific architectural innovations and broader strategies for improving model performance under varied environmental conditions.

Results and Performance

The challenge's results underscored the effectiveness of advanced deep learning models on these dual degradation scenarios. The top performers improved markedly over prior techniques, setting new benchmarks in PSNR and SSIM, indicative of enhanced restoration fidelity and structural similarity. Key results from the challenge include:

  • Miracle Team achieved the highest overall score by implementing a two-stage architecture that successfully incorporated semantic guidance to adaptively handle various image conditions, thereby setting a new standard in the domain.
  • The EntroVision and IIRLab teams also performed strongly, integrating transformer-based models with specially designed data augmentation and training strategies to improve both the quality and the robustness of raindrop removal.

Implications and Future Directions

The NTIRE 2025 Challenge's outcomes have profound implications for the future of real-world image restoration tasks. The solutions developed underscore the potential of modern deep learning architectures to effectively disentangle complex image degradation scenarios. Furthermore, they highlight the continued importance of benchmark datasets in driving innovation and standard-setting within the field.

Looking forward, the challenge highlights several areas for future work: refining attention mechanisms to better capture and correct localized degradations, improving the interpretability and robustness of models across larger datasets, and integrating multi-modal data for stronger contextual understanding and adaptation to dynamic weather conditions. The focus on dual-focused images reflects a broader trend toward more complex, real-world scenarios, paving the way for advances at the intersection of computer vision and autonomous systems.
