
User Preference-aware Fake News Detection (2104.12259v1)

Published 25 Apr 2021 in cs.SI and cs.CL

Abstract: Disinformation and fake news have posed detrimental effects on individuals and society in recent years, attracting broad attention to fake news detection. The majority of existing fake news detection algorithms focus on mining news content and/or the surrounding exogenous context for discovering deceptive signals; while the endogenous preference of a user when he/she decides to spread a piece of fake news or not is ignored. The confirmation bias theory has indicated that a user is more likely to spread a piece of fake news when it confirms his/her existing beliefs/preferences. Users' historical, social engagements such as posts provide rich information about users' preferences toward news and have great potential to advance fake news detection. However, the work on exploring user preference for fake news detection is somewhat limited. Therefore, in this paper, we study the novel problem of exploiting user preference for fake news detection. We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling. Experimental results on real-world datasets demonstrate the effectiveness of the proposed framework. We release our code and data as a benchmark for GNN-based fake news detection: https://github.com/safe-graph/GNN-FakeNews.

User Preference-aware Fake News Detection

The research paper "User Preference-aware Fake News Detection" presents a novel approach to enhancing fake news detection by incorporating user preferences into the analysis framework. This approach is particularly significant given the increasing challenge of disinformation on social media platforms. The authors propose a framework called UPFD (User Preference-aware Fake News Detection) that captures user preferences through users' historical social media engagements and combines these signals with traditional content- and graph-based techniques to improve detection accuracy.

Key Contributions

  1. Integration of User Preferences: The paper introduces the idea of exploiting user preferences as endogenous signals for fake news detection. This idea is grounded in confirmation bias theory, which suggests that users are more likely to share information that aligns with their pre-existing beliefs. This is a departure from traditional methods that focus primarily on exogenous features such as the content of the news itself or its propagation patterns.
  2. UPFD Framework: The framework leverages both endogenous (user preferences) and exogenous (news propagation) signals. It essentially models a user's past engagement with social media posts to extract implicit preferences, which are then used to inform fake news detection alongside traditional text and graph analysis.
  3. Numerical Results: The paper reports experiments on real-world datasets drawn from FakeNewsNet (PolitiFact and GossipCop) and shows substantial improvements in fake news detection performance. Modeling user preferences from users' historical social media activity yields notable gains in accuracy and F1 score over baseline models.
  4. Benchmark Contribution: The authors release the UPFD framework and related data as a benchmark for further development of GNN (Graph Neural Network)-based fake news detection approaches. This is an important contribution to the research community, providing a common platform for testing and refining similar models; a minimal loading sketch follows this list.
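The released benchmark ships with the GNN-FakeNews repository linked above; the same graphs are also packaged in PyTorch Geometric as the UPFD dataset class. The sketch below assumes a recent torch_geometric installation and uses the PolitiFact split with BERT-encoded node features; the directory path and batch size are arbitrary choices.

```python
# Minimal loading sketch for the UPFD benchmark via PyTorch Geometric.
# Assumes a recent torch_geometric install; paths and batch size are arbitrary.
from torch_geometric.datasets import UPFD
from torch_geometric.loader import DataLoader

# name: 'politifact' or 'gossipcop' (the FakeNewsNet source);
# feature: 'bert', 'spacy', 'profile', or 'content' (node feature type).
train_set = UPFD(root="data/UPFD", name="politifact", feature="bert", split="train")
test_set  = UPFD(root="data/UPFD", name="politifact", feature="bert", split="test")

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader  = DataLoader(test_set, batch_size=32)

# Each sample is one propagation graph: the root node is the news item and
# the remaining nodes are users who engaged with it.
print(train_set[0])
```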

Methodology

The UPFD framework operates by constructing a user preference representation from the historical posts of users who have engaged with a particular news item. Textual information is encoded with pretrained text encoders such as BERT, and the resulting representations are integrated with a Graph Neural Network (GNN) that captures the propagation pattern of the news on social media.
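As an illustration of the encoding step (not the authors' exact pipeline), a user preference embedding can be derived by running a user's historical posts through a pretrained BERT model and pooling the resulting vectors. The model name, pooling strategy, and helper function below are illustrative assumptions.

```python
# Illustrative sketch: build a user preference embedding from historical posts
# with a pretrained BERT encoder. Mean-pooling over [CLS] vectors is one
# simple choice; it is not necessarily the authors' exact procedure.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def user_preference_embedding(historical_posts):
    """Encode a user's past posts and pool them into a single vector."""
    batch = tokenizer(historical_posts, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    cls_vectors = out.last_hidden_state[:, 0, :]   # (num_posts, 768)
    return cls_vectors.mean(dim=0)                 # (768,)

vec = user_preference_embedding(["Example historical post one.",
                                 "Example historical post two."])
print(vec.shape)  # torch.Size([768])
```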

The GNN serves to fuse the endogenous and exogenous information: the learned embeddings are used as node features in a propagation graph. This graph, which reflects how the news spreads across the social network, captures nuanced dissemination patterns that are indicative of fake news.
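A minimal sketch of this fusion idea, written with PyTorch Geometric, is shown below: user preference embeddings serve as node features, a GraphSAGE encoder aggregates them along the propagation graph, and the pooled graph embedding is concatenated with the news (root node) representation before classification. The layer sizes, the choice of GraphSAGE, and the max-pooling readout are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch of endogenous/exogenous fusion over a propagation graph.
# Node features = user preference embeddings (endogenous signal);
# graph structure = news propagation (exogenous signal);
# the root node of each graph carries the news content representation.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv, global_max_pool

class UPFDNet(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=128, num_classes=2):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)
        self.conv2 = SAGEConv(hidden_dim, hidden_dim)
        self.news_lin = torch.nn.Linear(in_dim, hidden_dim)
        self.classifier = torch.nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        # Aggregate user preference features along the propagation graph.
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        graph_emb = global_max_pool(h, batch)            # one vector per graph

        # The first node of each graph in the batch is the news (root) node.
        is_root = torch.cat([batch.new_ones(1, dtype=torch.bool),
                             batch[1:] != batch[:-1]])
        news_emb = F.relu(self.news_lin(x[is_root]))

        fused = torch.cat([graph_emb, news_emb], dim=-1)
        return F.log_softmax(self.classifier(fused), dim=-1)
```

Trained with the loaders from the earlier sketch (calling model(data.x, data.edge_index, data.batch) inside a standard classification loop), this combines what the engaging users tend to believe with how the story spreads.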

Implications and Future Directions

The implications of this research are multi-faceted:

  • Theoretical: It advances the understanding of sociological and psychological factors, such as user bias in information consumption, within computational models. This enriches the theoretical framework for analyzing disinformation online.
  • Practical: The implementation of user preference modeling in fake news detection systems promises more robust and context-aware filtering mechanisms on social media platforms. This can ultimately enhance the reliability of information being propagated to users.
  • Future Developments: The integration of user preferences could be further explored by considering more fine-grained behavioral data or expanding the types of content interactions analyzed. Furthermore, extending this research to incorporate other signaling mechanisms like multimedia content or cross-platform interactions could provide a more holistic view of disinformation dynamics.

Overall, the paper by Dou et al. marks an important step toward more sophisticated fake news detection systems. By acknowledging and leveraging user biases, it sets the stage for developing richer models capable of addressing the multifaceted nature of modern disinformation challenges.

Authors (5)
  1. Yingtong Dou (19 papers)
  2. Kai Shu (88 papers)
  3. Congying Xia (32 papers)
  4. Philip S. Yu (592 papers)
  5. Lichao Sun (186 papers)
Citations (214)