Official-NV: An LLM-Generated News Video Dataset for Multimodal Fake News Detection (2407.19493v3)

Published 28 Jul 2024 in cs.CV, cs.AI, and cs.MM

Abstract: News media, especially video news media, have penetrated into every aspect of daily life, which also brings the risk of fake news. Therefore, multimodal fake news detection has recently garnered increased attention. However, the existing datasets are comprised of user-uploaded videos and contain an excess amount of superfluous data, which introduces noise into the model training process. To address this issue, we construct a dataset named Official-NV, comprising officially published news videos. The crawled officially published videos are augmented through LLM-based generation and manual verification, thereby expanding the dataset. We also propose a new baseline model called OFNVD, which captures key information from multimodal features through a GLU attention mechanism and performs feature enhancement and modal aggregation via a cross-modal Transformer. Benchmarking the dataset and baselines demonstrates the effectiveness of our model in multimodal fake news detection.

Summary

  • The paper presents Official-NV, a novel dataset constructed from 10,000 Xinhua news videos with balanced real and fake labels for multimodal detection.
  • It employs a combination of LLM-generated modifications and manual curation, achieving 77.5% accuracy by integrating titles, video frames, and speech text.
  • The study highlights the impact of modality consistency, with title data alone yielding 67.9% accuracy, underscoring its importance in fake news detection.

An Examination of the Official-NV Dataset for Multimodal Fake News Detection

The paper "OFFICIAL-NV: A NEWS VIDEO DATASET FOR MULTIMODAL FAKE NEWS DETECTION" introduces a novel dataset aimed at addressing the challenges in the detection of multimodal fake news. With the prevalence of multimodal content combining text, images, and videos on digital platforms, the necessity for robust detection mechanisms is paramount, and this paper presents a significant effort in this domain.

Overview of the Official-NV Dataset

The Official-NV dataset is constructed from officially published news videos on the Xinhua media platform, diverging from previous datasets that are predominantly composed of user-generated content. The authors argue that officially sourced videos offer more reliable quality and authenticity than typical user-uploaded videos, which often suffer from inconsistencies and irrelevant content.

Dataset Composition:

  • The dataset comprises 10,000 videos with an equal distribution of 5,000 real videos and 5,000 fake videos.
  • Each video in the dataset includes three modalities: titles, video frames, and speech text.
  • Fake news entries are created by altering one modality so that it becomes inconsistent with the others, thereby simulating the nature of multimodal misinformation (a minimal sketch of this construction follows the list).
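
To make the record layout and the fake-sample construction concrete, here is a minimal Python sketch assuming a hypothetical schema; the field names and the `rewrite_with_llm` helper are illustrative stand-ins, not the authors' actual data format or generation pipeline, and only the textual modalities are perturbed here for simplicity.

```python
from dataclasses import dataclass, replace
import random

# Hypothetical record layout; the real Official-NV schema may differ.
@dataclass
class NewsVideoSample:
    title: str            # headline text
    frames: list          # sampled video frames (e.g., file paths or arrays)
    speech_text: str      # transcript of the spoken audio
    label: int            # 0 = real, 1 = fake

def make_fake(sample: NewsVideoSample, rewrite_with_llm) -> NewsVideoSample:
    """Create a fake sample by rewriting one textual modality so that it no
    longer matches the others, mirroring the paper's construction idea.
    `rewrite_with_llm` stands in for the LLM-based rewriting step, which the
    paper pairs with manual verification."""
    target = random.choice(["title", "speech_text"])
    if target == "title":
        return replace(sample, title=rewrite_with_llm(sample.title), label=1)
    return replace(sample, speech_text=rewrite_with_llm(sample.speech_text), label=1)
```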

As described, the Official-NV dataset uniquely positions itself by maintaining high video quality and official source credibility, a departure from datasets relying on platforms such as TikTok, Twitter, and YouTube.

Methodology and Experimentation

The authors employed a combination of LLM-based generation and manual modification to extend and curate the dataset. The experimental framework uses BART for feature extraction and the subsequent classification task.
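
As a concrete, hedged illustration of BART-based text feature extraction, the sketch below uses the Hugging Face `transformers` library with the `facebook/bart-base` checkpoint and mean pooling over the encoder outputs; the paper does not specify these choices, so they are assumptions.

```python
import torch
from transformers import BartTokenizer, BartModel

# Assumed checkpoint; the paper does not name the exact BART variant used.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartModel.from_pretrained("facebook/bart-base")
model.eval()

def encode_text(texts):
    """Return one pooled feature vector per input string (mean over encoder tokens)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        enc = model.get_encoder()(input_ids=batch["input_ids"],
                                  attention_mask=batch["attention_mask"])
    hidden = enc.last_hidden_state                # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1)  # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)   # (B, 768)

title_feats = encode_text(["Flood relief teams reach the affected region"])
print(title_feats.shape)  # torch.Size([1, 768])
```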

Key Findings from the Experimentation:

  • Multimodal integration significantly enhances detection efficacy: with all three modalities, the model reaches 77.5% accuracy, higher than any single-modality configuration.
  • Title information was found to be especially crucial, yielding a noteworthy accuracy of 67.9% when used in isolation.

These results underline the value of synthesizing diverse data modalities to improve the robustness of detection systems. The authors' emphasis on modality consistency highlights a critical facet of the model’s performance in distinguishing real from fake content.
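
The abstract's description of OFNVD (GLU-style attention over multimodal features followed by cross-modal Transformer aggregation) can be roughly illustrated with the PyTorch sketch below; the dimensions, layer counts, gating form, and sequence-concatenation fusion are assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class GLUGate(nn.Module):
    """GLU-style gating: keep the parts of a feature sequence the gate deems salient."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, 2 * dim)

    def forward(self, x):                           # x: (B, T, dim)
        a, b = self.proj(x).chunk(2, dim=-1)
        return a * torch.sigmoid(b)

class CrossModalFusion(nn.Module):
    """Gate each modality, concatenate along the sequence axis, and let a
    Transformer encoder mix information across modalities before classifying."""
    def __init__(self, dim=768, heads=8, layers=2, num_classes=2):
        super().__init__()
        self.gates = nn.ModuleDict({m: GLUGate(dim) for m in ("title", "frames", "speech")})
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, title, frames, speech):       # each: (B, T_m, dim)
        gated = [self.gates[m](x) for m, x in
                 (("title", title), ("frames", frames), ("speech", speech))]
        fused = self.encoder(torch.cat(gated, dim=1))  # (B, T_total, dim)
        return self.classifier(fused.mean(dim=1))      # (B, num_classes)

# Toy usage with random tensors standing in for BART/frame embeddings.
model = CrossModalFusion()
logits = model(torch.randn(2, 16, 768), torch.randn(2, 8, 768), torch.randn(2, 32, 768))
print(logits.shape)  # torch.Size([2, 2])
```

Concatenating gated per-modality tokens and letting self-attention mix them is one simple way to realize "modal aggregation"; the actual OFNVD model may use a different fusion scheme.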

Implications and Future Work

The Official-NV dataset paves the way for advanced research in the field of fake news detection by providing a high-quality base for machine learning training and evaluation. Its reliance on officially published videos addresses the oft-cited issue of data validity, making it a potentially indispensable resource for future computational journalism efforts.

Speculative Future Directions:

  • Further exploration into integrating additional data sources could enrich the dataset's diversity, catering to more comprehensive detection frameworks.
  • Development and testing of more sophisticated algorithms that can capitalize on the detailed features of multifaceted datasets like Official-NV may yield superior detection capabilities.

The paper's contribution is particularly noteworthy in the context of the growing multimedia landscape, presenting opportunities for developing scalable solutions that can effectively manage the complexity of modern misinformation. The official provenance and structured composition of Official-NV mark an important step toward enhancing the integrity and accuracy of fake news detection methodologies.
