
Detecting Photoshopped Faces by Scripting Photoshop

Published 13 Jun 2019 in cs.CV (arXiv:1906.05856v2)

Abstract: Most malicious photo manipulations are created using standard image editing tools, such as Adobe Photoshop. We present a method for detecting one very popular Photoshop manipulation -- image warping applied to human faces -- using a model trained entirely using fake images that were automatically generated by scripting Photoshop itself. We show that our model outperforms humans at the task of recognizing manipulated images, can predict the specific location of edits, and in some cases can be used to "undo" a manipulation to reconstruct the original, unedited image. We demonstrate that the system can be successfully applied to real, artist-created image manipulations.

Citations (127)

Summary

  • The paper demonstrates that combining a global CNN classifier with a local warp prediction network achieves up to 97.1% accuracy in detecting manipulated facial images.
  • The methodology leverages automated data generation by scripting Photoshop’s Face-Aware Liquify tool to create a large, diverse dataset of altered faces.
  • The approach generalizes to edits from alternative tools, offering a robust forensic method to counter image-based misinformation in media.

Detecting Photoshopped Faces by Scripting Photoshop: An Expert Overview

The paper "Detecting Photoshopped Faces by Scripting Photoshop" addresses the pervasive challenge of detecting subtle face manipulations in digital imagery, particularly those performed with Adobe Photoshop's image warping tools. The problem is significant because such manipulations can misrepresent people in the media, distort public perception, and cause socio-cultural harm. The authors tackle it with deep learning models that both detect and reverse facial warps, providing a potential countermeasure against image-based misinformation.

Methodological Approach

Central to this study is a novel approach to generating a comprehensive dataset of manipulated images: scripting Adobe Photoshop to automatically apply its Face-Aware Liquify (FAL) tool to a diverse set of facial images. These scripted modifications yield a vast corpus of training data without the labor-intensive manual curation such datasets typically require. By focusing on image warping, a popular manipulation for adjusting facial symmetry and expression, the authors isolate a class of edits that often eludes human detection.
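The paper's generator drives Photoshop itself, and its FAL parameter sampling is not reproduced here. As a minimal, hedged stand-in, the sketch below builds analogous training triples (original image, warped fake, ground-truth displacement field) from a random smooth warp; all function names and parameter choices are illustrative, not taken from the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_smooth_flow(h, w, strength=3.0, smooth=12.0, rng=None):
    """Random, smoothly varying displacement field (dy, dx) in pixels.
    A crude stand-in for the parametric warps Face-Aware Liquify produces."""
    rng = np.random.default_rng() if rng is None else rng
    flow = rng.standard_normal((2, h, w))
    flow = gaussian_filter(flow, sigma=(0, smooth, smooth))  # spatial smoothing only
    flow *= strength / (np.abs(flow).max() + 1e-8)           # cap peak displacement
    return flow

def warp(image, flow):
    """Backward-warp a grayscale image by the displacement field."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([yy + flow[0], xx + flow[1]])
    return map_coordinates(image, coords, order=1, mode='nearest')

# One training triple: (original, warped fake, ground-truth flow).
rng = np.random.default_rng(0)
original = rng.random((64, 64))
flow = random_smooth_flow(64, 64, rng=rng)
fake = warp(original, flow)
```

Repeating this loop over a large face dataset gives both binary labels (real vs. warped) for the global classifier and dense flow supervision for the local network.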

The core of the method involves two convolutional neural network (CNN) models: a global classifier and a local warp prediction network. The global classifier distinguishes manipulated from unmanipulated images and significantly outperforms human subjects, with validation accuracy reaching 97.1%. The local network provides a more granular analysis, predicting the specific alterations to facial geometry and even attempting to reverse them to approximate the original face.
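The global/local split can be sketched as a toy two-headed network. This is a hedged illustration only: the layer sizes and the normalized flow convention are assumptions, and the paper's actual flow predictor uses a larger backbone. The "undo" step is simply resampling the image along the predicted flow with `grid_sample`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpDetector(nn.Module):
    """Toy sketch: shared features feed a global real/fake head and a
    local per-pixel flow head (sizes are illustrative, not the paper's)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.cls_head = nn.Linear(16, 1)                 # global: manipulated or not
        self.flow_head = nn.Conv2d(16, 2, 3, padding=1)  # local: (dx, dy) per pixel

    def forward(self, x):
        f = self.features(x)
        logit = self.cls_head(f.mean(dim=(2, 3)))  # global average pool -> score
        flow = self.flow_head(f)                   # N x 2 x H x W
        return logit, flow

def unwarp(image, flow):
    """Resample `image` along the predicted flow to approximate the original.
    Assumes flow is expressed in normalized [-1, 1] grid coordinates."""
    n, _, h, w = image.shape
    gy, gx = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing='ij')
    grid = torch.stack([gx, gy], dim=-1).expand(n, h, w, 2)
    grid = grid + flow.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)

net = WarpDetector()
img = torch.randn(1, 3, 32, 32)
logit, flow = net(img)
restored = unwarp(img, flow)
```

In training, the classification head would see real/fake labels while the flow head is supervised with the ground-truth displacement fields recorded during data generation.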

Experimental Results and Analysis

The results indicate the efficacy of the proposed approach in detecting facial manipulations, even surpassing human performance, which hovered slightly above chance level (53.5%). The method's robustness extends beyond controlled datasets to images edited by professional artists, underscoring its applicability in real-world scenarios. Additionally, the models showed resilience against various image perturbations, though robustness to substantial post-processing, such as compression or physical print-to-digital transformations, remains a challenge.

Interestingly, while the models were trained solely on scripted Photoshop data, they exhibited some ability to generalize, responding to warps generated by alternative tools like Facetune and Snapchat Lens Studio. This suggests a potential for broader applicability in detecting face manipulations across different platforms and editing tools.

Implications and Future Directions

The implications of this research are substantial both practically and theoretically. Practically, the study advances tools that can be integrated into media verification workflows to safeguard against the dissemination of manipulated images. The work indirectly contributes to ongoing discussions about digital ethics, privacy, and misinformation.

Theoretically, the use of automatic data generation through scripting standard editing tools opens new avenues in the study of digital forensics. By simulating realistic editing conditions, researchers can better understand the nuances of various manipulation techniques and their detectability.

Going forward, extending the approach to additional types of manipulation, such as synthetic skin smoothing or color adjustments, could be worthwhile. Improving the models' robustness to post-processing transformations would further enhance their practicality in diverse operational contexts.

In conclusion, the paper provides not just a technical contribution to digital forensics but also a methodological framework that could steer future efforts in detecting and interpreting image manipulations in increasingly sophisticated media landscapes.
