
Deep Generation of Face Images from Sketches (2006.01047v2)

Published 1 Jun 2020 in cs.GR and cs.CV

Abstract: Recent deep image-to-image translation techniques allow fast generation of face images from freehand sketches. However, existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input. To address this issue, our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch. We take a local-to-global approach. We first learn feature embeddings of key face components, and push corresponding parts of input sketches towards underlying component manifolds defined by the feature vectors of face component samples. We also propose another deep neural network to learn the mapping from the embedded component features to realistic images with multi-channel feature maps as intermediate results to improve the information flow. Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches. Our tool is easy to use even for non-artists, while still supporting fine-grained control of shape details. Both qualitative and quantitative evaluations show the superior generation ability of our system to existing and alternative solutions. The usability and expressiveness of our system are confirmed by a user study.

Authors (5)
  1. Shu-Yu Chen (6 papers)
  2. Wanchao Su (9 papers)
  3. Lin Gao (119 papers)
  4. Shihong Xia (21 papers)
  5. Hongbo Fu (67 papers)
Citations (79)

Summary

DeepFaceDrawing: Deep Generation of Face Images from Sketches

The paper "DeepFaceDrawing: Deep Generation of Face Images from Sketches" addresses a significant challenge in the field of sketch-to-image synthesis: generating high-quality face images from rough, freehand sketches. The authors propose a novel framework that leverages deep learning techniques to overcome the limitations of existing methods, which often require detailed and precise input sketches.

Methodology

The approach introduced by the authors is a multi-stage pipeline that transforms sketches into realistic face images by implicitly modeling the shape space of plausible face images. This is achieved through a local-to-global framework involving three key modules:

  1. Component Embedding (CE): The system begins by decomposing a face sketch into key components such as eyes, nose, and mouth. Each component is processed by a dedicated auto-encoder, which learns feature embeddings. These embeddings represent the underlying component manifolds, offering a structured way to interpret and refine sketch inputs.
  2. Feature Mapping (FM): The component feature vectors obtained from the CE module are mapped to multi-channel feature maps. These maps improve information flow and allow for more nuanced image synthesis by providing richer data representations than simple sketches.
  3. Image Synthesis (IS): The feature maps are combined and fed to a conditional generative adversarial network (GAN), which outputs high-resolution face images. Conditioning the GAN on the mapped feature maps encourages generated images that are both realistic and consistent with the component embeddings.

The authors emphasize the flexibility and robustness of their method, which treats input sketches as soft constraints rather than hard ones, allowing for generation despite potential inaccuracies or incompleteness in sketch input.
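The "soft constraint" idea amounts to projecting each sketched component's feature vector toward the manifold spanned by feature vectors of real face-component samples, so rough input is pulled toward plausible shapes. A minimal illustration of such a projection, using an inverse-distance-weighted combination of k nearest neighbors (a hypothetical numpy sketch, not the authors' implementation):

```python
import numpy as np

def project_to_manifold(query, samples, k=5):
    """Pull a query feature vector toward the manifold spanned by sample
    feature vectors, using an inverse-distance-weighted blend of its k
    nearest neighbors. If the query coincides with a sample, return that
    sample exactly."""
    dists = np.linalg.norm(samples - query, axis=1)
    idx = np.argsort(dists)[:k]          # indices of the k nearest samples
    d = dists[idx]
    if np.any(d == 0):                   # exact hit: snap to that sample
        return samples[idx[np.argmin(d)]]
    w = 1.0 / d                          # closer samples get larger weight
    w /= w.sum()
    return w @ samples[idx]              # convex combination of neighbors

# Toy "eye component" manifold: 100 sample feature vectors of dimension 8.
rng = np.random.default_rng(0)
samples = rng.normal(size=(100, 8))
query = rng.normal(size=8)               # feature of a rough input sketch
refined = project_to_manifold(query, samples)
```

Because the output is a convex combination of real component samples, a rough or implausible sketch feature is replaced by a nearby plausible one, which is why the system tolerates inaccurate or incomplete input.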

Results and User Study

Both quantitative and qualitative evaluations demonstrate the superiority of the proposed DeepFaceDrawing system compared to existing sketch-to-image translation solutions. The system is shown to produce visually pleasing images with high fidelity to the original sketch's intent. A user study corroborates these findings, with participants, including non-artists, reporting high usability and expressiveness.

Implications and Future Work

The implications of this research are far-reaching, particularly in applications such as criminal investigation and digital character design, where quick and reliable face synthesis from sketches is valuable. Theoretically, the paper advances understanding in local-to-global image synthesis frameworks, offering insights into the structured handling of component-level sketch data in generative models.

Potential future developments could explore further enhancing the realism of synthesized images by integrating additional contextual information, such as color or texture. Additionally, expanding the dataset or leveraging unsupervised learning could improve the accuracy and applicability of the technique to a broader range of sketch inputs, including non-human faces or stylistically diverse sketches.

In conclusion, "DeepFaceDrawing" contributes a sophisticated and efficient solution to the sketch-to-image translation challenge, enhancing both the theoretical landscape and practical capabilities in AI-driven image synthesis.
