Improving generative adversarial network inversion via fine-tuning GAN encoders (2108.10201v4)

Published 23 Aug 2021 in cs.CV and eess.IV

Abstract: Generative adversarial networks (GANs) can synthesize high-quality (HQ) images, and GAN inversion is a technique that discovers how to invert given images back to the latent space. While existing methods perform inversion on StyleGAN, their performance is limited and they do not generalize to other GANs. To address these issues, we propose a self-supervised method to pre-train and fine-tune GAN encoders. First, we design an adaptive block to fit different encoder architectures for inverting diverse GANs. Then we pre-train GAN encoders on synthesized images, emphasizing local regions by cropping the images. Finally, we fine-tune the pre-trained GAN encoder to invert real images. Compared with state-of-the-art methods, our method reconstructs higher-quality images across mainstream GANs. Our code and pre-trained models are available at: https://github.com/disanda/Deep-GAN-Encoders.
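
The abstract outlines a two-stage training scheme: self-supervised pre-training of an encoder on images synthesized by a frozen generator (with cropping to emphasize local regions), followed by fine-tuning on real images. The PyTorch sketch below illustrates that pipeline under stated assumptions; the generator `G`, encoder `E`, crop size, and loss weighting are hypothetical placeholders, not the paper's exact configuration.

```python
# Minimal sketch of the two-stage scheme described in the abstract (assumed PyTorch).
# G is a frozen pre-trained generator; E is the GAN encoder being trained.
import torch
import torch.nn.functional as F
from torchvision.transforms import RandomCrop

def pretrain_step(E, G, optimizer, batch_size=8, z_dim=512, crop=128):
    """One self-supervised pre-training step on synthesized images (G frozen)."""
    z = torch.randn(batch_size, z_dim)            # sample latent codes
    with torch.no_grad():
        x = G(z)                                  # synthesize images as pseudo ground truth
    z_hat = E(x)                                  # invert images back to latent space
    x_hat = G(z_hat)                              # re-synthesize from predicted latents
    # Global reconstruction and latent regression losses (illustrative choice)
    loss = F.mse_loss(x_hat, x) + F.mse_loss(z_hat, z)
    # Emphasize local regions with a shared random crop (hypothetical weighting)
    i, j, h, w = RandomCrop.get_params(x, (crop, crop))
    loss = loss + F.mse_loss(x_hat[..., i:i + h, j:j + w], x[..., i:i + h, j:j + w])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def finetune_step(E, G, optimizer, real_images):
    """One fine-tuning step on real images (no ground-truth latents available)."""
    z_hat = E(real_images)
    x_hat = G(z_hat)
    loss = F.mse_loss(x_hat, real_images)         # image-space reconstruction only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the pre-training stage can supervise both the image and the latent code because the latents are sampled, whereas fine-tuning on real images falls back to image-space reconstruction alone.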

Authors (3)
  1. Cheng Yu (62 papers)
  2. Wenmin Wang (9 papers)
  3. Roberto Bugiolacchi (2 papers)
Citations (4)