A Coarse-to-fine Cascaded Evidence-Distillation Neural Network for Explainable Fake News Detection (2209.14642v1)

Published 29 Sep 2022 in cs.CL

Abstract: Existing fake news detection methods aim to classify a piece of news as true or false and to provide veracity explanations, achieving remarkable performance. However, they often tailor automated solutions to manually fact-checked reports, suffering from limited news coverage and debunking delays. When a piece of news has not yet been fact-checked or debunked, relevant raw reports are usually disseminated across various media outlets; these contain the wisdom of crowds needed to verify the news claim and explain its verdict. In this paper, we propose a novel Coarse-to-fine Cascaded Evidence-Distillation (CofCED) neural network for explainable fake news detection based on such raw reports, alleviating the dependency on fact-checked ones. Specifically, we first use a hierarchical encoder for web text representation, and then develop two cascaded selectors that pick the most explainable sentences for verdicts on top of the selected top-K reports in a coarse-to-fine manner. In addition, we construct two explainable fake news datasets, which are publicly available. Experimental results demonstrate that our model significantly outperforms state-of-the-art baselines and generates high-quality explanations from diverse evaluation perspectives.
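The abstract describes a hierarchical encoder followed by two cascaded selectors: a coarse stage that keeps the top-K raw reports and a fine stage that picks the most explainable sentences from them. The sketch below illustrates that coarse-to-fine flow in PyTorch; all module choices, dimensions, and scoring heads are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of coarse-to-fine cascaded evidence selection, assuming a
# PyTorch setting. Module names, pooling choices, and hyperparameters are
# assumptions made for illustration only.
import torch
import torch.nn as nn


class CofCEDSketch(nn.Module):
    def __init__(self, hidden: int = 256, top_k_reports: int = 5, top_m_sents: int = 3):
        super().__init__()
        # Hierarchical encoder: sentence-level then report-level BiGRUs (assumed).
        self.sent_enc = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.report_enc = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        # Coarse selector scores whole reports; fine selector scores sentences.
        self.report_scorer = nn.Linear(2 * hidden, 1)
        self.sent_scorer = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, 2)  # true / false verdict
        self.top_k = top_k_reports
        self.top_m = top_m_sents

    def forward(self, sent_embs: torch.Tensor):
        # sent_embs: (reports, sentences, hidden) pre-embedded sentences of raw reports.
        sent_states, _ = self.sent_enc(sent_embs)                  # (R, S, 2H)
        report_vecs = sent_states.mean(dim=1)                      # (R, 2H)
        report_states, _ = self.report_enc(report_vecs.unsqueeze(0))  # (1, R, 2H)

        # Coarse stage: keep the top-K most relevant reports.
        report_scores = self.report_scorer(report_states.squeeze(0)).squeeze(-1)  # (R,)
        k = min(self.top_k, report_scores.size(0))
        top_reports = report_scores.topk(k).indices

        # Fine stage: within the kept reports, pick the most explainable sentences.
        kept_sents = sent_states[top_reports]                      # (k, S, 2H)
        sent_scores = self.sent_scorer(kept_sents).squeeze(-1)     # (k, S)
        m = min(self.top_m, sent_scores.size(1))
        top_sents = sent_scores.topk(m, dim=1).indices             # explanation indices

        # Verdict from the pooled evidence of the selected reports' sentences.
        evidence = kept_sents.mean(dim=(0, 1))                     # (2H,)
        return self.classifier(evidence), top_reports, top_sents
```

In this reading, the selected sentence indices double as the veracity explanation, while the verdict head consumes only evidence that survived both selection stages, which is the "evidence distillation" idea the title refers to.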

Authors (6)
  1. Zhiwei Yang (43 papers)
  2. Jing Ma (136 papers)
  3. Hechang Chen (26 papers)
  4. Hongzhan Lin (33 papers)
  5. Ziyang Luo (35 papers)
  6. Yi Chang (150 papers)
Citations (9)