The paper "Beyond Deepfake Images: Detecting AI-Generated Videos" explores the challenges and methods involved in detecting videos created by generative AI, moving beyond the well-studied problem of deepfake images. The authors show that existing techniques for detecting AI-generated images fall short when applied to synthetic videos. This gap arises because the forensic traces left by video generators differ significantly from those left by image generators.
Key findings and contributions of the paper include:
- Difference in Traces: The paper highlights that the artifacts and traces embedded in AI-generated videos differ from those found in AI-generated still images, which necessitates detection methodologies developed specifically for video content.
- Detection and Attribution: The authors propose a novel approach to learn and recognize these unique video traces, enabling reliable detection of synthetic videos. This capability extends to attributing a video to the generator that produced it, even after the video has undergone H.264 compression, a codec commonly used to reduce video file sizes.
- Challenges with Zero-Shot Transferability: The paper reveals difficulties in detecting synthetic videos from unseen generators through zero-shot transfer techniques. This indicates that models trained on one set of generative tools do not easily adapt to unknown or new video generators.
- Few-Shot Learning for New Generators: Although zero-shot transferability is challenging, the paper demonstrates that detectors can be adapted to new generators through few-shot learning: fine-tuning the model on a small number of labeled examples from the new generator substantially improves its detection accuracy.
Overall, this research provides critical insights into the complexities of AI-generated video detection and suggests promising methodologies for improving detection accuracy in the face of evolving generative technologies.