Label Noise-Resistant Mean Teaching for Weakly Supervised Fake News Detection (2206.12260v1)

Published 10 Jun 2022 in cs.CL

Abstract: Fake news spreads at an unprecedented speed, reaches global audiences and poses huge risks to users and communities. Most existing fake news detection algorithms focus on building supervised models trained on large amounts of manually labeled data, which is expensive to acquire or often unavailable. In this work, we propose a novel label noise-resistant mean teaching approach (LNMT) for weakly supervised fake news detection. LNMT leverages unlabeled news and users' feedback comments to enlarge the amount of training data, and facilitates model training by generating refined labels as weak supervision. Specifically, LNMT automatically assigns initial weak labels to unlabeled samples based on the semantic correlation and emotional association between news content and comments. Moreover, to suppress the noise in weak labels, LNMT establishes a mean teacher framework equipped with label propagation and label reliability estimation. The framework measures a weak label similarity matrix between the teacher and student networks and propagates valuable weak-label information between them to refine the weak labels. Meanwhile, it exploits the consistency between the output class likelihood vectors of the two networks to evaluate the reliability of the weak labels, and incorporates this reliability into model optimization to alleviate the negative effect of noisy weak labels. Extensive experiments show the superior performance of LNMT.
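The mean-teacher machinery the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the EMA decay `alpha`, the agreement measure used for reliability, and the loss weighting are all assumptions made here for clarity.

```python
import numpy as np

def softmax(z):
    # Convert logits to class likelihood vectors (rows sum to 1).
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ema_update(teacher_w, student_w, alpha=0.99):
    # Mean teacher: the teacher's weights are an exponential moving
    # average of the student's weights after each training step.
    return alpha * teacher_w + (1.0 - alpha) * student_w

def reliability(p_teacher, p_student):
    # Weak-label reliability, estimated (as an assumption here) by the
    # agreement between the two networks' class likelihood vectors:
    # 1 minus half the L1 distance, yielding a score in [0, 1].
    return 1.0 - 0.5 * np.abs(p_teacher - p_student).sum(axis=1)

def weighted_loss(p_student, weak_labels, rel):
    # Reliability-weighted cross-entropy: samples whose weak labels the
    # two networks disagree about contribute less to the optimization.
    ce = -np.log(p_student[np.arange(len(weak_labels)), weak_labels] + 1e-12)
    return (rel * ce).mean()
```

In use, the student is trained by gradient descent on `weighted_loss`, the teacher is refreshed with `ema_update` after every step, and the reliability scores are recomputed from the two networks' predictions on each batch.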

Authors (3)
  1. Jingyi Xie (17 papers)
  2. Jiawei Liu (156 papers)
  3. Zheng-Jun Zha (143 papers)
Citations (3)