MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network (2010.01424v1)
Abstract: We present Mask-guided Generative Adversarial Network (MagGAN) for high-resolution face attribute editing, in which semantic facial masks from a pre-trained face parser are used to guide the fine-grained image editing process. With the introduction of a mask-guided reconstruction loss, MagGAN learns to edit only the facial parts that are relevant to the desired attribute changes, while preserving the attribute-irrelevant regions (e.g., a hat or scarf under the `To Bald' modification). Further, a novel mask-guided conditioning strategy is introduced to incorporate the influence region of each attribute change into the generator. In addition, a multi-level patch-wise discriminator structure is proposed to scale our model for high-resolution ($1024 \times 1024$) face editing. Experiments on the CelebA benchmark show that the proposed method significantly outperforms prior state-of-the-art approaches in terms of both image quality and editing performance.
- Yi Wei (60 papers)
- Zhe Gan (135 papers)
- Wenbo Li (115 papers)
- Siwei Lyu (125 papers)
- Ming-Ching Chang (45 papers)
- Lei Zhang (1689 papers)
- Jianfeng Gao (344 papers)
- Pengchuan Zhang (58 papers)
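The mask-guided reconstruction loss mentioned in the abstract can be illustrated with a minimal sketch: an L1 reconstruction penalty that is weighted by a binary mask marking attribute-irrelevant regions, so that those regions (e.g., a hat during a `To Bald' edit) must be preserved. The function name, NumPy formulation, and normalization below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def mask_guided_recon_loss(x, x_rec, irrelevant_mask):
    """Illustrative sketch (not the paper's exact loss): L1 reconstruction
    error restricted to attribute-irrelevant pixels.

    x, x_rec:        (H, W, C) input image and edited/reconstructed image
    irrelevant_mask: (H, W) binary mask, 1 where the edit must NOT change pixels
    """
    # Broadcast the spatial mask over the channel axis and keep only
    # the reconstruction error inside attribute-irrelevant regions.
    masked_diff = np.abs(x - x_rec) * irrelevant_mask[..., None]
    # Normalize by the number of masked pixel values (guard against empty mask).
    denom = max(irrelevant_mask.sum() * x.shape[-1], 1)
    return masked_diff.sum() / denom
```

Intuitively, changes inside attribute-relevant regions incur no penalty, while any deviation in the masked (irrelevant) regions is penalized, which is what lets MagGAN leave hats and scarves untouched.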