Bringing Old Photos Back to Life

Published 20 Apr 2020 in cs.CV, cs.GR, and eess.IV | (2004.09484v1)

Abstract: We propose to restore old photos that suffer from severe degradation through a deep learning approach. Unlike conventional restoration tasks that can be solved through supervised learning, the degradation in real photos is complex, and the domain gap between synthetic images and real old photos makes the network fail to generalize. Therefore, we propose a novel triplet domain translation network by leveraging real photos along with massive synthetic image pairs. Specifically, we train two variational autoencoders (VAEs) to respectively transform old photos and clean photos into two latent spaces. The translation between these two latent spaces is learned with synthetic paired data. This translation generalizes well to real photos because the domain gap is closed in the compact latent space. Besides, to address multiple degradations mixed in one old photo, we design a global branch with a partial nonlocal block targeting the structured defects, such as scratches and dust spots, and a local branch targeting the unstructured defects, such as noise and blurriness. The two branches are fused in the latent space, leading to improved capability to restore old photos from multiple defects. The proposed method outperforms state-of-the-art methods in terms of visual quality for old photo restoration.

Citations (191)

Summary

  • The paper introduces a triplet domain translation network that leverages VAEs to align synthetic and real domains for effective photo restoration.
  • It utilizes a dual-branch network to address mixed degradation by targeting both unstructured and structured defects.
  • Quantitative metrics like PSNR, SSIM, LPIPS, and FID, along with user studies, confirm the method’s superior restoration performance.

Overview of "Bringing Old Photos Back to Life"

The paper "Bringing Old Photos Back to Life" presents an innovative approach to the restoration of heavily degraded historical photographs using deep learning techniques. The authors address the intrinsic challenges posed by the diverse types of degradation found in old photographs, such as scratches, blurriness, noise, and discoloration. To overcome these issues, they introduce a triplet domain translation network design that effectively narrows the domain gap between synthetic and real photos, ensuring a high-quality restoration.

Methodology

Old photo restoration is formulated as a complex triplet domain translation problem involving real photos, synthetic images, and corresponding ground truth clean images. The approach involves transforming images from these three domains into latent spaces using variational autoencoders (VAEs). A key innovation lies in aligning the latent spaces of real and synthetic images, allowing the learned mappings from the synthetic to the clean domain to generalize well to real photographs.
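The pipeline described above can be sketched in miniature. This is a deliberately simplified toy, assuming linear maps in place of the paper's convolutional VAEs; the names (`enc_old`, `dec_clean`, `latent_map`) are hypothetical stand-ins, but the data flow (encode degraded photo into the shared latent space, translate to the clean latent space via a mapping learned on synthetic pairs, then decode) mirrors the method's structure.

```python
import numpy as np

# Toy sketch of the triplet domain translation pipeline. Illustrative only:
# the paper uses convolutional VAEs, not the random linear maps used here.
rng = np.random.default_rng(0)

IMG_DIM, LATENT_DIM = 64, 8

# Hypothetical stand-ins for the trained networks.
enc_old = rng.standard_normal((LATENT_DIM, IMG_DIM)) * 0.1    # VAE1 encoder: degraded photo -> z_R
dec_clean = rng.standard_normal((IMG_DIM, LATENT_DIM)) * 0.1  # VAE2 decoder: z_Y -> clean photo
latent_map = rng.standard_normal((LATENT_DIM, LATENT_DIM))    # mapping z_R -> z_Y, learned on synthetic pairs

def restore(x_old: np.ndarray) -> np.ndarray:
    """Encode a degraded photo, translate in latent space, decode as clean."""
    z_r = enc_old @ x_old    # shared latent space for real and synthetic degraded photos
    z_y = latent_map @ z_r   # translation learned from synthetic paired data
    return dec_clean @ z_y   # clean-domain decoder produces the restoration

x = rng.standard_normal(IMG_DIM)   # a flattened "degraded photo"
y = restore(x)
print(y.shape)  # (64,)
```

Because real and synthetic degraded photos share one latent space, the mapping trained only on synthetic pairs applies unchanged to real inputs, which is the core of the generalization argument.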

  1. Domain Translation via Latent Space:
    • Two VAEs are trained independently: one for old and synthetic photos (aligned into a shared latent space) and one for clean images.
    • The learned translation in the latent space closes domain gaps and facilitates restoration of real photos by leveraging mappings from synthetic to ground truth domains.
  2. Handling Mixed Degradation:
    • Recognizing different restoration strategies for various defects, the authors design a dual-branch network.
    • A local branch addresses unstructured defects (e.g., noise, blur), while a global branch with a partial nonlocal block handles structured defects (e.g., scratches and dust spots).
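The dual-branch idea can be illustrated with a toy sketch. This is not the paper's implementation: the "local branch" is a placeholder for residual convolution blocks, and the partial nonlocal step is reduced to a crude mean over intact pixels; the mask and all function names are assumptions for illustration. The point it shows is the division of labor: local processing everywhere, plus a global step that fills masked (structured-defect) positions using context from unmasked ones, with the two results fused.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, C = 16, 16, 4
feat = rng.standard_normal((H, W, C))        # a hypothetical latent feature map

scratch_mask = np.zeros((H, W), dtype=bool)  # hypothetical detected scratch region
scratch_mask[4:6, :] = True

def local_branch(f: np.ndarray) -> np.ndarray:
    # Stand-in for residual conv blocks that handle noise/blur everywhere.
    return f * 0.9

def partial_nonlocal(f: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # Attend only from masked (defect) positions to unmasked (intact) ones.
    flat = f.reshape(-1, C)
    m = mask.reshape(-1)
    context = flat[~m].mean(axis=0)  # crude global context from intact pixels
    out = flat.copy()
    out[m] = context                 # inpaint structured defects from that context
    return out.reshape(H, W, C)

# Fuse the two branches (here, by simple addition in the latent space).
fused = local_branch(feat) + partial_nonlocal(feat, scratch_mask)
print(fused.shape)  # (16, 16, 4)
```

In the actual network the nonlocal step computes attention weights rather than a mean, but the restriction of attention to intact regions, which is what makes the block "partial", is the same.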

Numerical and Qualitative Results

The proposed model demonstrates superior performance over state-of-the-art methods. Performance is quantified with PSNR, SSIM, LPIPS, and FID on synthetic test images and validated qualitatively on real old photos. The model recovers fine details while also enhancing overall image quality, as evidenced by positive results in user studies.
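For reference, the simplest of these metrics, PSNR, is a direct formula over pixel values. A minimal sketch (the test images here are synthetic noise, purely for illustration):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
clean = rng.random((32, 32))
noisy = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)
print(f"PSNR: {psnr(clean, noisy):.1f} dB")
```

SSIM, LPIPS, and FID are progressively more perceptual: SSIM compares local structure, while LPIPS and FID compare deep-network feature statistics, which is why the paper reports all four alongside user studies.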

Implications and Future Directions

This work advances AI-driven restoration of historical photographs, offering effective handling of complex mixed degradation, a task where traditional single-purpose methods fall short. It implicitly points to future work on extending such VAE-based designs to other image-processing problems where a domain gap impedes the direct application of learned models. Furthermore, more sophisticated degradation models and larger real-world datasets could address remaining limitations, potentially supporting wider adoption in commercial and research settings.

As AI continues to advance, this work enriches our understanding of combining deep learning and digital restoration to preserve historical visual artifacts. It invites further research into adaptive learning systems capable of seamlessly transitioning between synthetic and real-world data to address diverse degradation types robustly.
