Biphasic Face Photo-Sketch Synthesis via Semantic-Driven Generative Adversarial Network with Graph Representation Learning (2201.01592v2)

Published 5 Jan 2022 in cs.CV

Abstract: Biphasic face photo-sketch synthesis has significant practical value in wide-ranging fields such as digital entertainment and law enforcement. Previous approaches directly generate the photo-sketch in a global view; they often suffer from the low quality of sketches and complex photo variations, leading to unnatural and low-fidelity results. In this paper, we propose a novel Semantic-Driven Generative Adversarial Network, cooperating with Graph Representation Learning, to address these issues. Considering that human faces have distinct spatial structures, we first inject class-wise semantic layouts into the generator to provide style-based spatial information for the synthesized face photos and sketches. Additionally, to enhance the authenticity of details in the generated faces, we construct two types of representational graphs from the semantic parsing maps of the input faces, dubbed the IntrA-class Semantic Graph (IASG) and the InteR-class Structure Graph (IRSG). Specifically, the IASG models the intra-class semantic correlations within each facial semantic component, thus producing realistic facial details. To keep the generated faces structurally coordinated, the IRSG models inter-class structural relations among the facial components via graph representation learning. To further enhance the perceptual quality of the synthesized images, we present a biphasic interactive cycle training strategy that fully exploits the multi-level feature consistency between the photo and sketch domains. Extensive experiments demonstrate that our method outperforms state-of-the-art competitors on the CUFS and CUFSF datasets.
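To make the graph-based idea concrete, below is a minimal, hypothetical PyTorch sketch of how per-region node features might be pooled from a semantic parsing map and then updated with a single graph-convolution step over facial components. The function and class names (`region_pool`, `InterClassGraphLayer`) and the uniform adjacency are illustrative assumptions only; the paper's actual IASG/IRSG construction and learned inter-class relations are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def region_pool(features, parsing, num_classes):
    """Average-pool CNN features within each semantic region.

    features: (B, C, H, W) feature map
    parsing:  (B, H, W) integer semantic labels (e.g., skin, eyes, mouth, hair)
    Returns:  (B, num_classes, C) per-region node descriptors.
    """
    onehot = F.one_hot(parsing, num_classes).permute(0, 3, 1, 2).float()  # (B, K, H, W)
    area = onehot.sum(dim=(2, 3)).clamp(min=1.0)                          # (B, K)
    pooled = torch.einsum("bchw,bkhw->bkc", features, onehot) / area.unsqueeze(-1)
    return pooled


class InterClassGraphLayer(nn.Module):
    """One graph-convolution step over per-region node features.

    A uniform, fully connected adjacency is used here as a stand-in for
    the inter-class structural relations that the paper models with its IRSG.
    """

    def __init__(self, dim, num_classes):
        super().__init__()
        self.weight = nn.Linear(dim, dim)
        adj = torch.ones(num_classes, num_classes) / num_classes  # hypothetical uniform adjacency
        self.register_buffer("adj", adj)

    def forward(self, nodes):                                # nodes: (B, K, C)
        agg = torch.einsum("kl,blc->bkc", self.adj, nodes)   # aggregate neighboring regions
        return F.relu(self.weight(agg) + nodes)              # residual node update


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    parsing = torch.randint(0, 5, (2, 32, 32))
    nodes = region_pool(feats, parsing, num_classes=5)
    print(InterClassGraphLayer(64, 5)(nodes).shape)          # torch.Size([2, 5, 64])
```

In a full model, such region nodes would feed back into the generator alongside the class-wise semantic layouts, and the biphasic cycle training would enforce multi-level feature consistency between the photo and sketch branches.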

Authors (8)
  1. Xingqun Qi (21 papers)
  2. Muyi Sun (21 papers)
  3. Zijian Wang (99 papers)
  4. Jiaming Liu (156 papers)
  5. Qi Li (354 papers)
  6. Fang Zhao (44 papers)
  7. Shanghang Zhang (173 papers)
  8. Caifeng Shan (27 papers)
Citations (5)
