
LAMBRETTA: Learning to Rank for Twitter Soft Moderation (2212.05926v1)

Published 12 Dec 2022 in cs.CR, cs.CY, and cs.SI

Abstract: To curb the problem of false information, social media platforms like Twitter started adding warning labels to content discussing debunked narratives, with the goal of providing more context to their audiences. Unfortunately, these labels are not applied uniformly and leave large amounts of false content unmoderated. This paper presents LAMBRETTA, a system that automatically identifies tweets that are candidates for soft moderation using Learning To Rank (LTR). We run LAMBRETTA on Twitter data to moderate false claims related to the 2020 US Election and find that it flags over 20 times more tweets than Twitter, with only 3.93% false positives and 18.81% false negatives, outperforming alternative state-of-the-art methods based on keyword extraction and semantic search. Overall, LAMBRETTA assists human moderators in identifying and flagging false information on social media.
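The abstract describes ranking tweets as candidates for soft moderation rather than classifying each tweet in isolation. As a rough illustration of the idea (not the paper's actual LTR model or features), the sketch below scores tweets against a debunked claim with a simple bag-of-words cosine similarity and returns them in descending order, so a human moderator would review the most claim-relevant tweets first. All function names and the toy data are hypothetical.

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words term counts: a crude stand-in for real LTR features.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(claim, tweets):
    # Score each tweet against the debunked claim and sort descending,
    # mimicking the "rank candidates for review" setup described above.
    claim_vec = vectorize(claim)
    scored = [(cosine(claim_vec, vectorize(t)), t) for t in tweets]
    return sorted(scored, key=lambda s: s[0], reverse=True)

claim = "mail-in ballots were counted twice in the election"
tweets = [
    "great weather for a walk today",
    "reports say ballots were counted twice",
    "the election results are being audited",
]
for score, tweet in rank_candidates(claim, tweets):
    print(f"{score:.2f}  {tweet}")
```

A real learning-to-rank system would replace the cosine score with a trained ranking model over richer features, but the ranking-then-review workflow is the same.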

Authors (5)
  1. Pujan Paudel (9 papers)
  2. Jeremy Blackburn (76 papers)
  3. Emiliano De Cristofaro (117 papers)
  4. Savvas Zannettou (55 papers)
  5. Gianluca Stringhini (77 papers)
Citations (7)