Context-Aware Image Inpainting with Learned Semantic Priors (2106.07220v1)

Published 14 Jun 2021 in cs.CV

Abstract: Recent advances in image inpainting have shown impressive results for generating plausible visual details on rather simple backgrounds. However, for complex scenes, it is still challenging to restore reasonable contents as the contextual information within the missing regions tends to be ambiguous. To tackle this problem, we introduce pretext tasks that are semantically meaningful for estimating the missing contents. In particular, we perform knowledge distillation on pretext models and adapt the features to image inpainting. The learned semantic priors ought to be partially invariant between the high-level pretext task and low-level image inpainting, which not only helps to understand the global context but also provides structural guidance for the restoration of local textures. Based on the semantic priors, we further propose a context-aware image inpainting model, which adaptively integrates global semantics and local features in a unified image generator. The semantic learner and the image generator are trained in an end-to-end manner. We name the model SPL to highlight its ability to learn and leverage semantic priors. It achieves the state of the art on Places2, CelebA, and Paris StreetView datasets.
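The abstract describes distilling knowledge from a pretext model into a semantic prior learner by matching features. The paper does not provide code here, so the following is a minimal illustrative sketch: a simple L2 feature-matching loss between a (hypothetical) student learner's features and a frozen pretext teacher's features, with toy numpy arrays standing in for the real feature maps.

```python
import numpy as np

def distillation_loss(student_feats, teacher_feats):
    """L2 feature-matching objective, a common form of knowledge
    distillation: the student is pushed toward the (frozen) teacher's
    pretext features. This is an illustrative stand-in, not the
    paper's exact loss."""
    return float(np.mean((student_feats - teacher_feats) ** 2))

# Toy example: 16-channel 8x8 feature maps.
rng = np.random.default_rng(0)
teacher = rng.standard_normal((16, 8, 8))          # frozen pretext features
student = teacher + 0.1 * rng.standard_normal((16, 8, 8))  # learner's features

loss = distillation_loss(student, teacher)  # small, since student ~ teacher
```

In the actual model this term would be combined with the inpainting reconstruction and adversarial losses and minimized end-to-end, with gradients flowing only into the semantic learner and generator, not the pretext teacher.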

Authors (8)
  1. Wendong Zhang (21 papers)
  2. Junwei Zhu (20 papers)
  3. Ying Tai (88 papers)
  4. Yunbo Wang (43 papers)
  5. Wenqing Chu (16 papers)
  6. Bingbing Ni (95 papers)
  7. Chengjie Wang (178 papers)
  8. Xiaokang Yang (207 papers)
Citations (39)
