
XingGAN for Person Image Generation (2007.09278v1)

Published 17 Jul 2020 in cs.CV, cs.LG, and eess.IV

Abstract: We propose a novel Generative Adversarial Network (XingGAN or CrossingGAN) for person image generation tasks, i.e., translating the pose of a given person to a desired one. The proposed Xing generator consists of two generation branches that model the person's appearance and shape information, respectively. Moreover, we propose two novel blocks to effectively transfer and update the person's shape and appearance embeddings in a crossing way to mutually improve each other, which has not been considered by any other existing GAN-based image generation work. Extensive experiments on two challenging datasets, i.e., Market-1501 and DeepFashion, demonstrate that the proposed XingGAN advances the state-of-the-art performance both in terms of objective quantitative scores and subjective visual realness. The source code and trained models are available at https://github.com/Ha0Tang/XingGAN.

Citations (163)

Summary

  • The paper introduces a dual-branch GAN architecture that separately models shape and appearance for person image transformation.
  • The study leverages a co-attention fusion module for enhanced feature integration, improving image quality and pose accuracy.
  • Experiments on Market-1501 and DeepFashion demonstrate superior performance with improved SSIM, Inception Score, and PCKh metrics.

XingGAN for Person Image Generation: An Expert Overview

The paper presents a novel generative model, XingGAN (CrossingGAN), designed for person image generation: transforming a given person from their original pose into a target pose. This task has notable applications in person image and video generation as well as in person re-identification. XingGAN's architecture comprises two complementary generation branches that separately model the person's appearance and shape information.

At a high level, the Xing generator is structured into two key branches: Shape-guided Appearance-based generation (SA) and Appearance-guided Shape-based generation (AS). These branches capture and progressively refine the shape and appearance attributes of the image. Unique to this work are the interdependent SA and AS blocks, which transfer and update information between the shape and appearance representations in a crossing fashion, so that each stream enriches the other at every stage. This two-way exchange distinguishes the approach from previous models, which treated these elements separately or allowed only a one-way interaction.
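
To make the crossing idea concrete, the following is a minimal PyTorch sketch of how shape and appearance embeddings might update each other. It is an illustration under assumed layer shapes and gating choices, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class CrossingBlock(nn.Module):
    """One crossing step: the guiding stream produces a per-pixel gate
    that modulates and updates the guided stream. Layer sizes and the
    sigmoid gating are illustrative assumptions."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),  # gate derived from the guiding stream
        )
        self.update = nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1)

    def forward(self, guided, guiding):
        gate = self.attn(guiding)                        # attention from guiding stream
        fused = torch.cat([guided * gate, guiding], 1)   # gated features + guidance
        return guided + self.update(fused)               # residual update of guided stream

class XingStage(nn.Module):
    """One stage of the two-branch generator: appearance and shape
    embeddings update each other in a crossing fashion."""
    def __init__(self, channels):
        super().__init__()
        self.sa = CrossingBlock(channels)   # shape-guided appearance update
        self.as_ = CrossingBlock(channels)  # appearance-guided shape update

    def forward(self, app, shape):
        app_new = self.sa(app, shape)
        shape_new = self.as_(shape, app_new)
        return app_new, shape_new
```

Stacking several such stages lets the two embeddings mutually improve over the course of generation, which is the core intuition behind the SA/AS cascade.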

In evaluating XingGAN, the paper reports a comprehensive set of experiments on two publicly available datasets, Market-1501 and DeepFashion, chosen for their challenging, realistic pose variations. The studies show that XingGAN outperforms prior methods on several key metrics, including SSIM, Inception Score, and PCKh, reflecting improvements both in the fidelity of generated images and in their alignment with the intended poses.
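
As an illustration of one of these metrics, SSIM between generated and ground-truth images can be computed with scikit-image. The helper below is a hypothetical convenience wrapper for paired image arrays, not part of the paper's evaluation code.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mean_ssim(generated, targets):
    """Average SSIM over paired uint8 RGB images of shape (H, W, 3)."""
    scores = [
        structural_similarity(g, t, channel_axis=-1, data_range=255)
        for g, t in zip(generated, targets)
    ]
    return float(np.mean(scores))
```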

The model's architecture incorporates two discriminators aimed at ensuring the quality and realism of the images. The appearance-guided discriminator scrutinizes the generated images for their adherence to the appearance of the input data, whereas the shape-guided discriminator enforces pose consistency. This dual-discriminator strategy acts as a robust mechanism for strengthening the generator within the adversarial framework, a technique congruent with modern GAN architectures.
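
A hedged sketch of such a dual-discriminator setup is shown below, using small PatchGAN-style critics. The channel counts (3-channel RGB inputs, 18 pose-keypoint heatmap channels) are assumptions for illustration rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

def patch_discriminator(in_channels):
    """A small PatchGAN-style critic; depths and kernel sizes are illustrative."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(64, 128, 4, stride=2, padding=1),
        nn.InstanceNorm2d(128),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
    )

# Appearance-guided: judges (source image, generated image) pairs.
d_app = patch_discriminator(in_channels=3 + 3)
# Shape-guided: judges (target pose heatmaps, generated image) pairs,
# assuming 18 keypoint channels as in common pose representations.
d_shape = patch_discriminator(in_channels=18 + 3)

def discriminate(source_img, target_pose, generated):
    app_in = torch.cat([source_img, generated], dim=1)
    shape_in = torch.cat([target_pose, generated], dim=1)
    return d_app(app_in), d_shape(shape_in)
```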

The authors support their approach with both qualitative and quantitative evidence. Visual inspection shows that XingGAN produces realistic images with fewer artifacts than existing methods, and a human evaluation concurs, with participants rating XingGAN's outputs as more authentic than those of baseline methods. Quantitatively, the model improves on metrics that measure image quality and accuracy with respect to ground-truth data.

One notable methodological innovation is the co-attention fusion module. This module selectively assimilates elements from intermediate generation results and the input images, enriching the final output with salient features from different stages of the generation process. It is central to the model's consolidation strategy, providing a selective attention mechanism that improves output realism and consistency.
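
The snippet below sketches one plausible reading of such a fusion module: per-pixel softmax attention maps weight a set of intermediate candidate images into a single output. Layer choices and tensor shapes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    """Fuses K intermediate generation results into one output image by
    predicting per-pixel softmax attention maps over the candidates."""
    def __init__(self, feat_channels, num_candidates):
        super().__init__()
        self.num_candidates = num_candidates
        # One attention channel per candidate image.
        self.attn = nn.Conv2d(feat_channels, num_candidates, kernel_size=3, padding=1)

    def forward(self, features, candidates):
        # candidates: list of K tensors, each of shape (B, 3, H, W)
        assert len(candidates) == self.num_candidates
        maps = torch.softmax(self.attn(features), dim=1)  # (B, K, H, W)
        stacked = torch.stack(candidates, dim=1)          # (B, K, 3, H, W)
        weights = maps.unsqueeze(2)                       # (B, K, 1, H, W)
        return (weights * stacked).sum(dim=1)             # (B, 3, H, W)
```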

Looking ahead, the paper opens avenues for further exploration in generative modeling. In particular, the crossing blocks and co-attention mechanism offer novel pathways for integrating multiple domain representations, holding promise for synthesis tasks beyond person image generation, such as multi-view or scene generation.

In summary, XingGAN represents a significant step forward in GAN-based person image generation. By integrating innovative crossing blocks and co-attention elements, the authors provide a practical solution to the limitations of prior methods, with performance substantiated by experimental results on challenging datasets. The work extends the frontier of GAN-based generation while suggesting alternative methodologies and architectures for future research in the domain.
