
Beyond Deepfake Images: Detecting AI-Generated Videos (2404.15955v1)

Published 24 Apr 2024 in cs.CV

Abstract: Recent advances in generative AI have led to the development of techniques to generate visually realistic synthetic video. While a number of techniques have been developed to detect AI-generated synthetic images, in this paper we show that synthetic image detectors are unable to detect synthetic videos. We demonstrate that this is because synthetic video generators introduce substantially different traces than those left by image generators. Despite this, we show that synthetic video traces can be learned, and used to perform reliable synthetic video detection or generator source attribution even after H.264 re-compression. Furthermore, we demonstrate that while detecting videos from new generators through zero-shot transferability is challenging, accurate detection of videos from a new generator can be achieved through few-shot learning.

The paper "Beyond Deepfake Images: Detecting AI-Generated Videos" examines the challenge of detecting videos created by generative AI, moving beyond deepfake image detection. The authors show that existing techniques for detecting AI-generated images fall short when applied to synthetic videos, because the forensic traces left by video generators differ substantially from those left by image generators.

Key findings and contributions of the paper include:

  1. Difference in Traces: The paper highlights that the artifacts and traces embedded within AI-generated videos vary from those in still images. This necessitates the development of specialized detection methodologies tailored for video content.
  2. Detection and Attribution: The authors propose an approach to learn and recognize these distinctive video traces, enabling reliable detection of synthetic videos. This capability extends to attributing a video to its source generator, even after the video has undergone H.264 re-compression, a codec commonly used to reduce video file sizes.
  3. Challenges with Zero-Shot Transferability: The paper reveals difficulties in detecting synthetic videos from unseen generators through zero-shot transfer techniques. This indicates that models trained on one set of generative tools do not easily adapt to unknown or new video generators.
  4. Few-Shot Learning for New Generators: Although zero-shot transferability is challenging, the paper demonstrates success in training models to accurately detect videos from new generators using few-shot learning. This approach involves training the model using a limited set of examples from the new generator, which significantly enhances its detection capability.
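
The few-shot adaptation described in item 4 can be sketched as follows. This is a toy illustration, not the paper's actual method: the forensic "trace" feature vectors are simulated as Gaussian vectors with a generator-specific offset, and a simple logistic-regression classifier stands in for the authors' detection network. All function names, dimensions, and parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # dimensionality of the hypothetical trace features

def make_features(n, shift_dims=(), shift=0.0):
    """Simulate trace features: real videos ~ N(0, I); a generator adds a
    characteristic offset on a few feature dimensions (its 'trace')."""
    x = rng.normal(size=(n, DIM))
    x[:, list(shift_dims)] += shift
    return x

def train_logreg(x, y, w=None, b=0.0, lr=0.1, steps=500):
    """Full-batch gradient descent on binary cross-entropy."""
    w = np.zeros(x.shape[1]) if w is None else w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        w -= lr * (x.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, x, y):
    return np.mean(((x @ w + b) > 0) == y)

# Train a detector on a known generator A, whose traces sit on dims 0-3.
x_train = np.vstack([make_features(200), make_features(200, range(0, 4), 2.5)])
y_train = np.concatenate([np.zeros(200), np.ones(200)])
w, b = train_logreg(x_train, y_train)

# A new generator B leaves different traces (dims 8-11), so zero-shot
# transfer of the detector trained on A performs poorly.
x_test = np.vstack([make_features(200), make_features(200, range(8, 12), 2.5)])
y_test = np.concatenate([np.zeros(200), np.ones(200)])
base_acc = accuracy(w, b, x_test, y_test)

# Few-shot adaptation: fine-tune on 16 labelled examples per class from B.
x_few = np.vstack([make_features(16), make_features(16, range(8, 12), 2.5)])
y_few = np.concatenate([np.zeros(16), np.ones(16)])
w2, b2 = train_logreg(x_few, y_few, w=w, b=b, steps=300)
adapted_acc = accuracy(w2, b2, x_test, y_test)

print(f"zero-shot accuracy on generator B: {base_acc:.2f}")
print(f"few-shot accuracy on generator B:  {adapted_acc:.2f}")
```

In this simulation, as in the paper's findings, the detector trained only on generator A transfers poorly to generator B, while a handful of labelled examples from B is enough to recover strong detection accuracy.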

Overall, this research provides critical insights into the complexities of AI-generated video detection and suggests promising methodologies for improving detection accuracy in the face of evolving generative technologies.

Authors (4)
  1. Danial Samadi Vahdati (1 paper)
  2. Tai D. Nguyen (15 papers)
  3. Aref Azizpour (4 papers)
  4. Matthew C. Stamm (17 papers)
Citations (4)