
Blind Video Deflickering by Neural Filtering with a Flawed Atlas (2303.08120v1)

Published 14 Mar 2023 in cs.CV

Abstract: Many videos contain flickering artifacts. Common causes of flicker include video processing algorithms, video generation algorithms, and capturing videos under specific situations. Prior work usually requires specific guidance such as the flickering frequency, manual annotations, or extra consistent videos to remove the flicker. In this work, we propose a general flicker removal framework that only receives a single flickering video as input without additional guidance. Since it is blind to a specific flickering type or guidance, we name this "blind deflickering." The core of our approach is utilizing the neural atlas in cooperation with a neural filtering strategy. The neural atlas is a unified representation for all frames in a video that provides temporal consistency guidance but is flawed in many cases. To this end, a neural network is trained to mimic a filter to learn the consistent features (e.g., color, brightness) and avoid introducing the artifacts in the atlas. To validate our method, we construct a dataset that contains diverse real-world flickering videos. Extensive experiments show that our method achieves satisfying deflickering performance and even outperforms baselines that use extra guidance on a public benchmark.


Summary

  • The paper presents a novel blind deflickering method that uses a neural atlas to achieve temporal consistency from a single flickering video input.
  • The paper leverages a neural filtering network to refine a flawed atlas, effectively reducing temporal inconsistencies with flow-based regularization.
  • The paper validates its approach on diverse real and synthetic datasets, demonstrating superior flicker reduction compared to baseline methods.

Blind Video Deflickering by Neural Filtering with a Flawed Atlas

The paper presents a novel approach to video deflickering, termed "blind deflickering," which removes flickering artifacts without relying on knowledge of specific flickering patterns or on external guidance. The authors pair a neural atlas with a neural filtering strategy to process input videos, creating a general-purpose solution for deflickering tasks.

Core Contributions

  1. Blind Deflickering Framework: The authors introduce a method that requires only a single flickering video, with no additional input or guidance about the flicker type. The approach builds on a neural atlas, a unified video representation that provides temporal-consistency guidance across all frames.
  2. Neural Atlas Utilization: The neural atlas facilitates long-term consistency by mapping all video pixels to a shared space, allowing for consistent frame sampling. This atlas, however, is flawed due to limitations in accurately capturing dynamic objects or large motions.
  3. Neural Filtering Strategy: To counteract the inaccuracies inherent in the flawed atlas, a neural network filter is trained. The network learns to preserve consistent features (e.g., color, brightness) while rejecting atlas artifacts, enhancing the output's temporal coherence (see the sketch after this list).
  4. Construction of a Comprehensive Dataset: The authors provide a dataset featuring diverse real-world and synthetic flickering videos. This resource serves to evaluate and validate the blind deflickering performance of their approach against various flicker types.
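
The interaction between the flawed atlas and the learned filter can be made concrete. Below is a minimal sketch, assuming a small convolutional network whose layer count and widths are illustrative, not the authors' exact architecture: the filter receives the flickering input frame together with the atlas-reconstructed frame and predicts a deflickered frame, learning to take temporal cues from the atlas and spatial detail from the input.

```python
# Minimal sketch of the neural filtering idea. The architecture below is an
# assumption for illustration, not the paper's exact network.
import torch
import torch.nn as nn

class NeuralFilter(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        # 6 input channels: flickering frame (3) + atlas-reconstructed frame (3)
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, frame: torch.Tensor, atlas_frame: torch.Tensor) -> torch.Tensor:
        # Concatenate along channels; training teaches the network which cues
        # to trust: temporal consistency from the atlas, fine detail from the frame.
        return self.net(torch.cat([frame, atlas_frame], dim=1))

# frame and atlas_frame are (B, 3, H, W) tensors in [0, 1]
out = NeuralFilter()(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
```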

Key Methodological Insights

The methodology hinges on two primary components: the generation of a flawed atlas and its subsequent refinement. The neural atlas acts as a shared video representation, obtained by jointly training a mapping network, which assigns each video pixel a position in atlas space, and an atlas network, which stores the color at each atlas position. A neural filtering network then refines the atlas-reconstructed frames, preserving their temporal consistency while discarding the atlas's artifacts.
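
To make the atlas construction concrete, here is a minimal sketch in the spirit of coordinate-MLP neural atlases: one MLP maps a space-time coordinate to a 2D atlas position, and a second MLP stores the color at that position. The widths, depths, and activations are assumptions for illustration.

```python
# Sketch of a coordinate-based neural atlas; sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, d_in: int, d_out: int, width: int = 256, depth: int = 4):
        super().__init__()
        layers, d = [], d_in
        for _ in range(depth - 1):
            layers += [nn.Linear(d, width), nn.ReLU(inplace=True)]
            d = width
        layers.append(nn.Linear(d, d_out))
        self.f = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.f(x)

mapping = MLP(3, 2)  # (x, y, t) -> (u, v): position in the shared atlas
atlas = MLP(2, 3)    # (u, v) -> RGB: color stored in the atlas

def reconstruct(xyt: torch.Tensor) -> torch.Tensor:
    # Pixels showing the same scene point in different frames should map to
    # the same (u, v), so sampling the atlas gives them a consistent color.
    uv = torch.tanh(mapping(xyt))    # keep atlas coordinates bounded
    return torch.sigmoid(atlas(uv))  # RGB in [0, 1]

xyt = torch.rand(1024, 3) * 2 - 1  # normalized (x, y, t) samples
rgb = reconstruct(xyt)             # (1024, 3)
```

Because every frame samples colors from the same atlas, per-frame flicker is averaged out; the cost is that dynamic objects and large motions are hard to map faithfully, which is exactly why the reconstructed frames are "flawed" and need filtering.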

For local refinement, a secondary network addresses remaining inconsistencies through a flow-based regularization approach, ensuring that results are not only globally but also locally coherent.
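
A sketch of what such flow-based regularization typically looks like: the previous output frame is warped toward the current one with optical flow, and the difference is penalized where the flow is reliable. The flow and occlusion mask are assumed to come from an external estimator with forward-backward consistency checking; these are common choices, not details spelled out in this summary.

```python
# Sketch of a flow-based temporal consistency loss; flow and mask are assumed
# inputs from an off-the-shelf optical flow estimator.
import torch
import torch.nn.functional as F

def backward_warp(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    # Warp img (B, 3, H, W) using flow (B, 2, H, W) given in pixels.
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=img.dtype), torch.arange(W, dtype=img.dtype),
        indexing="ij")
    coords = torch.stack((xs, ys)).unsqueeze(0).to(img) + flow
    # grid_sample expects coordinates normalized to [-1, 1]
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)

def temporal_loss(out_t, out_prev, flow, mask):
    # Penalize disagreement with the flow-warped previous output,
    # restricted to pixels where the flow is reliable (mask = 1).
    return (mask * (out_t - backward_warp(out_prev, flow)).abs()).mean()

loss = temporal_loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64),
                     torch.zeros(1, 2, 64, 64), torch.ones(1, 1, 64, 64))
```

In practice the mask zeroes out occluded or unreliably tracked pixels, so the regularizer does not force consistency where the flow itself is wrong.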

Experimental Evaluation

The paper evaluates the proposed model across several datasets, demonstrating superior performance compared to baseline methods. Quantitative assessments use the warping error metric, showing significant reductions in temporal inconsistency. Additionally, a comparison against videos processed by human experts highlights the competitive performance of the automated deflickering method.
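
For reference, warping error is commonly computed by flow-warping each output frame onto the next and averaging the masked per-pixel difference. The sketch below, which reuses backward_warp from the earlier sketch, shows this common form of the metric; the paper's exact evaluation protocol may differ.

```python
# Sketch of the warping error metric in its common form (an assumption about
# the evaluation protocol). Requires backward_warp from the sketch above.
import torch

def warping_error(frames, flows, masks):
    # frames: list of (1, 3, H, W) outputs; flows[t] and masks[t] relate
    # frame t to frame t+1.
    errs = []
    for t in range(len(frames) - 1):
        warped = backward_warp(frames[t], flows[t])
        errs.append((masks[t] * (frames[t + 1] - warped).abs()).mean())
    return torch.stack(errs).mean()
```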

Implications and Future Directions

Practically, the system's utility spans various video categories, including old films and computationally generated content. Theoretically, this research broadens the understanding and application of neural filtering in video consistency problems. Future developments could explore further optimization of neural atlas methods and expand the applicability of deflickering techniques to more complex video generation tasks or other areas plagued by temporal inconsistencies.

Overall, the paper provides a detailed account of how neural networks can be effectively employed to address video flickering, offering a robust solution without requiring detailed flickering-specific inputs. The authors present a significant step forward in the field of video processing and computational photography, aiming to improve video quality and user experience in diverse applications.