
On the Reconstruction of Face Images from Deep Face Templates

Published 2 Mar 2017 in cs.CV | (1703.00832v4)

Abstract: State-of-the-art face recognition systems are based on deep (convolutional) neural networks. Therefore, it is imperative to determine to what extent face templates derived from deep networks can be inverted to obtain the original face image. In this paper, we study the vulnerability of a state-of-the-art face recognition system to template reconstruction attacks. We propose a neighborly de-convolutional neural network (\textit{NbNet}) to reconstruct face images from their deep templates. In our experiments, we assumed that no knowledge of the target subject or the deep network is available. To train the \textit{NbNet} reconstruction models, we augmented two benchmark face datasets (VGG-Face and Multi-PIE) with a large collection of images synthesized using a face generator. The proposed reconstruction method was evaluated using type-I (comparing the reconstructed images against the original face images used to generate the deep templates) and type-II (comparing the reconstructed images against a different face image of the same subject) attacks. Given the images reconstructed from \textit{NbNets}, we show that for verification, we achieve TAR of 95.20\% (58.05\%) on LFW under type-I (type-II) attacks @ FAR of 0.1\%. In addition, 96.58\% (92.84\%) of the images reconstructed from templates of partition \textit{fa} (\textit{fb}) can be identified from partition \textit{fa} in color FERET. Our study demonstrates the need to secure deep templates in face recognition systems.

Citations (169)

Summary

  • The paper introduces NbNet, a novel de-convolutional neural network capable of successfully reconstructing face images from deep face templates.
  • Experiments demonstrate that NbNet achieves high success rates in both verification and identification attacks, revealing significant privacy risks in current systems.
  • The research underscores critical security flaws in deep template-based face recognition and highlights the urgent need for enhanced template protection and anti-spoofing measures.

Analyzing Vulnerabilities in Deep Face Recognition Systems via Template Reconstruction

The paper "On the Reconstruction of Face Images from Deep Face Templates" presents an incisive exploration of potential vulnerabilities in state-of-the-art face recognition systems. The research is pivotal in examining the risks associated with template reconstruction attacks, where the adversary attempts to reverse-engineer the original face images from the stored deep templates of a face recognition system.

Core Contributions

This study's primary contribution is the development of the Neighborly De-convolutional Neural Network (NbNet), a novel approach to reconstructing face images from deep templates. Unlike conventional decoders built from standard de-convolution blocks, NbNet uses neighborly de-convolution blocks (NbBlocks), in which each channel is generated with reference to the channels already produced within the same block. Learning from these neighboring channels suppresses noisy and duplicated channels and improves the detail quality of the reconstructions.
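The core idea of an NbBlock can be sketched as follows. This is a minimal numpy illustration, not the paper's trained architecture: the 2x upsampling stands in for a stride-2 de-convolution, and the 1x1 mixing weights over neighboring channels are random placeholders rather than learned parameters.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map,
    # standing in for the stride-2 de-convolution in the paper.
    return np.kron(x, np.ones((1, 2, 2)))

def nb_block(x, n_new, rng):
    """Sketch of a neighbourly de-convolution block (NbBlock).

    Instead of generating all output channels independently, each new
    channel is computed from the channels generated so far (its
    "neighbours" within the same block), which is the mechanism the
    paper credits with reducing noisy and duplicated channels. The
    mixing weights here are random stand-ins, not trained parameters.
    """
    feats = list(upsample2x(x))               # up-sampled input channels
    for _ in range(n_new):
        w = rng.standard_normal(len(feats))   # hypothetical 1x1 mix over neighbours
        new = sum(wi * f for wi, f in zip(w, feats))
        feats.append(np.maximum(new, 0.0))    # ReLU-style activation
    return np.stack(feats[-n_new:])           # the block emits the new channels

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))            # toy (C, H, W) template feature map
y = nb_block(x, n_new=6, rng=rng)
print(y.shape)  # (6, 16, 16)
```

Stacking several such blocks doubles spatial resolution at each stage until a face-sized image is produced; in the actual NbNet the mixing operations are learned convolutions trained end-to-end.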

Key innovations also include a comprehensive training strategy for NbNet: the training data are augmented with a large collection of face images synthesized by a deep convolutional generative adversarial network (DCGAN) trained on public-domain datasets. This augmentation allows the reconstruction model to generalize well without requiring subject-specific data from the target system.

Experimental Scope and Results

The study's empirical evaluation focuses on both verification and identification scenarios, assessing the robustness of deep template-based systems under template reconstruction attacks.

  • Verification Attacks: Two types were analyzed: Type-I, where reconstructed images are compared to the original image used for template creation, and Type-II, which compares reconstructed images against different images of the same subject. The experimental results demonstrated that NbNet significantly outperforms traditional models, such as the RBF regression-based method, in launching successful reconstruction attacks. For instance, the proposed models achieved True Accept Rate (TAR) values of 95.20% under Type-I attacks on LFW at a False Accept Rate (FAR) of 0.1%, underscoring the potential threat posed by template reconstruction.
  • Identification Task: Utilizing the color FERET dataset, the study highlighted severe privacy risks, where the NbNets achieved a rank-one identification rate of 96.58% in a Type-I attack scenario, with partition fa serving as both the gallery and probe. Furthermore, the performance in Type-II attacks also significantly surpassed that of models trained on un-augmented datasets.
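The two evaluation metrics above can be computed as follows. This is an illustrative sketch with synthetic scores, not the paper's face matcher or its data: TAR is measured at a threshold fixed by the impostor-score quantile, and rank-one identification checks whether a probe's top-scoring gallery entry shares its identity.

```python
import numpy as np

def tar_at_far(genuine, impostor, far=0.001):
    # Accept threshold = (1 - far) quantile of impostor scores, so at
    # most `far` of impostor comparisons pass; TAR is the fraction of
    # genuine comparisons that still pass at that threshold.
    thr = np.quantile(impostor, 1.0 - far)
    return float(np.mean(genuine >= thr))

def rank1_rate(similarity, probe_ids, gallery_ids):
    # A probe counts as identified when its highest-scoring gallery
    # entry has the same identity (rank-one retrieval).
    best = np.asarray(gallery_ids)[np.argmax(similarity, axis=1)]
    return float(np.mean(best == np.asarray(probe_ids)))

rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 5000)     # toy match-score distributions,
impostor = rng.normal(0.1, 0.1, 50000)   # not results from the paper
print(f"TAR @ FAR=0.1%: {tar_at_far(genuine, impostor):.2%}")

sim = rng.normal(0.0, 1.0, (20, 50))     # toy probe-vs-gallery similarities
gallery_ids = np.arange(50)
probe_ids = np.arange(20)
sim[np.arange(20), probe_ids] += 10.0    # make each true match score highest
print(f"rank-1 rate: {rank1_rate(sim, probe_ids, gallery_ids):.0%}")  # 100%
```

In the paper's setting, the genuine scores are comparisons between reconstructed images and the target subject, so a high TAR or rank-one rate means the reconstruction attack succeeds.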

Implications and Future Directions

The findings illustrate critical security and privacy flaws in contemporary deep template-based face recognition systems, emphasizing the urgent need for robust template protection methods. Future developments could involve enhancing template protection by integrating user-specific randomness into deep networks or improving anti-spoofing measures to further safeguard against reconstruction attacks.
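One flavor of the user-specific-randomness idea can be sketched as a cancelable random projection. This is an illustrative scheme in the spirit of cancelable biometrics, not a method from the paper: the stored vector depends on a per-user seed, so an attacker who obtains it cannot run a reconstruction model against the raw template, and a compromised template can be revoked by issuing a new seed.

```python
import numpy as np

def protect_template(template, user_seed, out_dim=128):
    # Project the deep template through a user-specific random matrix.
    # Without the seed, the stored vector does not reveal the raw
    # template; reissuing a seed "cancels" a compromised template.
    rng = np.random.default_rng(user_seed)
    P = rng.standard_normal((out_dim, template.shape[0]))
    P /= np.linalg.norm(P, axis=1, keepdims=True)  # row-normalized projection
    return P @ template

template = np.random.default_rng(42).standard_normal(512)  # toy 512-d deep template
a = protect_template(template, user_seed=7)
b = protect_template(template, user_seed=7)   # same seed -> same protected vector
c = protect_template(template, user_seed=8)   # new seed -> unlinkable vector
print(np.allclose(a, b), np.allclose(a, c))   # True False
```

Random projection approximately preserves distances between templates under the same seed, so matching can still be performed in the protected domain.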

Potential avenues for advancing reconstruction techniques include incorporating holistic content understanding and constructing more efficient network architectures under the NbNet framework, which remains ripe for exploration.

In summary, this research is a crucial step in probing the security landscape of modern biometric systems, particularly those leveraging deep learning, and offers valuable insights into addressing inherent vulnerabilities in template security.
