
Learning Sparse Masks for Diffusion-based Image Inpainting (2110.02636v4)

Published 6 Oct 2021 in eess.IV, cs.CV, and cs.LG

Abstract: Diffusion-based inpainting is a powerful tool for the reconstruction of images from sparse data. Its quality strongly depends on the choice of known data. Optimising their spatial location -- the inpainting mask -- is challenging. Commonly used tools for this task are stochastic optimisation strategies. However, they are slow as they compute multiple inpainting results. We provide a remedy in terms of a learned mask generation model. By emulating the complete inpainting pipeline with two networks for mask generation and neural surrogate inpainting, we obtain a model for highly efficient adaptive mask generation. Experiments indicate that our model can achieve competitive quality with an acceleration by as much as four orders of magnitude. Our findings serve as a basis for making diffusion-based inpainting more attractive for applications such as image compression, where fast encoding is highly desirable.
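The abstract describes coupling a mask-generation network with a neural surrogate for diffusion-based inpainting so that both can be trained together. The sketch below illustrates that two-network idea only; it is a minimal PyTorch mock-up, and the class names (MaskNet, SurrogateInpaintingNet), architectures, loss, and the top-k binarisation in sparsify are assumptions for illustration, not the paper's actual design.

```python
# Minimal sketch of a two-network mask-generation + surrogate-inpainting pipeline.
# All architectures and the binarisation scheme are illustrative assumptions.
import torch
import torch.nn as nn


class MaskNet(nn.Module):
    """Predicts a per-pixel confidence map; higher values mean 'keep this pixel'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image)


class SurrogateInpaintingNet(nn.Module):
    """Differentiable stand-in for diffusion-based inpainting from sparse data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, sparse_image, mask):
        return self.net(torch.cat([sparse_image, mask], dim=1))


def sparsify(confidence, density=0.1):
    """Keep the top `density` fraction of pixels as a binary mask (assumed scheme)."""
    k = int(density * confidence[0].numel())
    thresh = confidence.flatten(1).topk(k, dim=1).values[:, -1].view(-1, 1, 1, 1)
    return (confidence >= thresh).float()


mask_net, inpaint_net = MaskNet(), SurrogateInpaintingNet()
image = torch.rand(1, 1, 64, 64)                       # grayscale training image

# Training: use the soft confidence map so gradients reach the mask network.
mask = mask_net(image)
reconstruction = inpaint_net(image * mask, mask)
loss = nn.functional.mse_loss(reconstruction, image)   # reconstruction objective

# At inference, the confidence map would be binarised to an actual sparse mask
# (here via simple top-k selection; the paper's procedure may differ).
binary_mask = sparsify(mask.detach(), density=0.1)
```

Because the surrogate network is differentiable, the reconstruction loss can be backpropagated through it into the mask network in a single forward/backward pass, which is the source of the speed-up over stochastic mask optimisation that repeatedly runs the full inpainting solver.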

Authors (3)
  1. Tobias Alt (13 papers)
  2. Pascal Peter (24 papers)
  3. Joachim Weickert (48 papers)
Citations (13)
