It Takes Nine to Smell a Rat: Neural Multi-Task Learning for Check-Worthiness Prediction (1908.07912v1)

Published 19 Aug 2019 in cs.CL and cs.AI

Abstract: We propose a multi-task deep-learning approach for estimating the check-worthiness of claims in political debates. Given a political debate, such as the 2016 US Presidential and Vice-Presidential ones, the task is to predict which statements in the debate should be prioritized for fact-checking. While different fact-checking organizations would naturally make different choices when analyzing the same debate, we show that it pays to learn from multiple sources simultaneously (PolitiFact, FactCheck, ABC, CNN, NPR, NYT, Chicago Tribune, The Guardian, and Washington Post) in a multi-task learning setup, even when a particular source is chosen as a target to imitate. Our evaluation shows state-of-the-art results on a standard dataset for the task of check-worthiness prediction.
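The abstract describes a multi-task setup in which a shared model learns check-worthiness labels from nine fact-checking sources at once. The sketch below illustrates one common way to structure such a model: a shared sentence encoder with one classification head per source, training on the summed per-source losses. The LSTM encoder, layer sizes, and all names are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal multi-task sketch (illustrative only): a shared sentence encoder with
# one binary check-worthiness head per fact-checking source. The LSTM encoder,
# layer sizes, and names are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

SOURCES = ["PolitiFact", "FactCheck", "ABC", "CNN", "NPR",
           "NYT", "Chicago Tribune", "The Guardian", "Washington Post"]

class MultiTaskCheckWorthiness(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # One binary classification head per annotation source (task).
        self.heads = nn.ModuleDict({
            src.replace(" ", "_"): nn.Linear(2 * hidden_dim, 2)
            for src in SOURCES
        })

    def forward(self, token_ids):
        emb = self.embedding(token_ids)             # (batch, seq, emb_dim)
        _, (h_n, _) = self.encoder(emb)             # h_n: (2, batch, hidden_dim)
        sent = torch.cat([h_n[0], h_n[1]], dim=-1)  # shared sentence representation
        # Each head predicts check-worthiness as labeled by its own source.
        return {name: head(sent) for name, head in self.heads.items()}

# Usage: sum the per-source losses so the shared encoder learns from all
# sources simultaneously, then evaluate on whichever source is the target.
model = MultiTaskCheckWorthiness()
batch = torch.randint(1, 30000, (4, 20))            # dummy token ids
logits = model(batch)
labels = {name: torch.randint(0, 2, (4,)) for name in logits}
loss = sum(nn.functional.cross_entropy(logits[n], labels[n]) for n in logits)
loss.backward()
```

The key design point mirrored here is that all tasks share the encoder, so supervision from every source shapes the sentence representation, while each source keeps its own output head to reflect its particular selection choices.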

Authors (5)
  1. Slavena Vasileva (1 paper)
  2. Pepa Atanasova (27 papers)
  3. Lluís Màrquez (31 papers)
  4. Preslav Nakov (253 papers)
  5. Alberto Barrón-Cedeño (25 papers)
Citations (47)