
Interpretable Rumor Detection in Microblogs by Attending to User Interactions

Published 29 Jan 2020 in cs.CL, cs.IR, and cs.SI | (2001.10667v1)

Abstract: We address rumor detection by learning to differentiate between the community's response to real and fake claims in microblogs. Existing state-of-the-art models are based on tree models that model conversational trees. However, in social media, a user posting a reply might be replying to the entire thread rather than to a specific user. We propose a post-level attention model (PLAN) to model long distance interactions between tweets with the multi-head attention mechanism in a transformer network. We investigated variants of this model: (1) a structure aware self-attention model (StA-PLAN) that incorporates tree structure information in the transformer network, and (2) a hierarchical token and post-level attention model (StA-HiTPLAN) that learns a sentence representation with token-level self-attention. To the best of our knowledge, we are the first to evaluate our models on two rumor detection data sets: the PHEME data set as well as the Twitter15 and Twitter16 data sets. We show that our best models outperform current state-of-the-art models for both data sets. Moreover, the attention mechanism allows us to explain rumor detection predictions at both token-level and post-level.

Citations (175)

Summary

  • The paper introduces the Post-Level Attention Model (PLAN), a transformer-based approach leveraging multi-head attention to capture complex user interactions for improved rumor detection in microblogs.
  • Variants like StA-PLAN and StA-HiTPLAN integrate structural awareness and hierarchical token-level attention to refine the model's ability to process conversation threads and subtle linguistic cues.
  • Evaluated on PHEME, Twitter15, and Twitter16 datasets, the models outperformed existing state-of-the-art methods, with the best-performing variant achieving 85.2% accuracy on the Twitter15 dataset.

Overview

The paper "Interpretable Rumor Detection in Microblogs by Attending to User Interactions" by Ling Min Serena Khoo, Hai Leong Chieu, Zhong Qian, and Jing Jiang addresses the challenge of detecting rumors in microblogs by leveraging the community's response to claims. Existing state-of-the-art methodologies primarily employ tree models, which structure conversational threads hierarchically. However, the authors argue that these models are suboptimal for social media environments, where replies are often directed at the entire conversation thread rather than at a specific user.

To overcome these constraints, the paper introduces the Post-Level Attention Model (PLAN), which employs a transformer network's multi-head attention mechanism to capture long-distance interactions between tweets. Two variants of this model are explored:

  1. Structure Aware Self-Attention Model (StA-PLAN): This variant incorporates tree structure information into the transformer network.
  2. Hierarchical Token and Post-Level Attention Model (StA-HiTPLAN): This model learns sentence representations with token-level self-attention, providing more nuanced detection capabilities.
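The core idea behind PLAN, self-attention applied across all posts in a thread, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the projection matrices are random stand-ins for learned parameters, and the dimensions are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def post_level_self_attention(posts, n_heads=4):
    """Scaled dot-product self-attention over post embeddings.

    posts: (n_posts, d_model) array, one embedding per tweet in the thread.
    Every post attends to every other post, so a reply can interact with
    tweets far away in the conversation tree, not just its parent.
    """
    n, d = posts.shape
    d_head = d // n_heads
    rng = np.random.default_rng(0)
    # Random projections standing in for the learned Q/K/V weights.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = posts @ Wq, posts @ Wk, posts @ Wv
    out = np.empty_like(posts)
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_head)  # (n, n) pairwise interactions
        out[:, s] = softmax(scores, axis=-1) @ V[:, s]
    return out

thread = np.random.default_rng(1).standard_normal((6, 32))  # 6 posts, d_model=32
attended = post_level_self_attention(thread)
print(attended.shape)  # (6, 32)
```

Because the attention matrix is dense over all post pairs, long-distance interactions are modeled directly rather than being mediated by a chain of parent-child edges.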

The models were evaluated on the PHEME, Twitter15, and Twitter16 datasets, where they outperformed existing state-of-the-art models, suggesting that PLAN effectively captures the detailed interactions and cascading implications of user comments.

Key Contributions

  • Post-Level Interaction Modeling:

PLAN facilitates comprehensive modeling of tweet interactions beyond parent-child nodes, allowing for a more holistic evaluation of a thread's sentiment and credibility.

  • Structural Awareness:

StA-PLAN incorporates structural awareness to refine the attention mechanism, retaining the strengths of tree models while benefiting from the flexibility of flat, sequence-based models.
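One way to inject tree structure into otherwise-flat attention is an additive, learned bias on the attention logits, keyed by how two posts relate in the conversation tree. The sketch below is a simplified illustration of that idea, not the paper's exact formulation: the four-relation vocabulary and the scalar bias values are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical relation vocabulary: how post i relates to post j in the tree.
PARENT, CHILD, SELF, OTHER = 0, 1, 2, 3

def relation_matrix(parents):
    """parents[i] = index of post i's parent (-1 for the source tweet)."""
    n = len(parents)
    rel = np.full((n, n), OTHER)
    for i in range(n):
        rel[i, i] = SELF
        if parents[i] >= 0:
            rel[i, parents[i]] = PARENT
            rel[parents[i], i] = CHILD
    return rel

def structure_aware_attention(posts, parents, rel_bias):
    """Self-attention whose logits carry an additive bias per structural
    relation, so the tree shape steers otherwise content-only attention."""
    n, d = posts.shape
    scores = posts @ posts.T / np.sqrt(d)                 # content-based logits
    scores = scores + rel_bias[relation_matrix(parents)]  # structural bias
    return softmax(scores, axis=-1) @ posts

posts = np.random.default_rng(0).standard_normal((4, 16))
rel_bias = np.array([0.5, 0.5, 1.0, 0.0])  # hypothetical learned scalars
out = structure_aware_attention(posts, [-1, 0, 0, 1], rel_bias)
print(out.shape)  # (4, 16)
```

The bias terms let direct parent-child pairs attend more strongly by default, while still allowing content similarity to surface long-distance interactions.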

  • Hierarchical Token-Level Attention:

StA-HiTPLAN utilizes token-level attention to build fine-grained sentence representations, enabling the model to identify subtle linguistic cues indicative of rumors.
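The hierarchy can be sketched as two stages: token-level attention pools each post's word vectors into a post vector, then post-level self-attention lets those vectors interact across the thread. Again a minimal sketch under simplifying assumptions; the learned query vector and dimensions are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(tokens, query):
    """Weighted average of token vectors; the weights are the token-level
    attention scores that later make predictions inspectable."""
    weights = softmax(tokens @ query)  # (n_tokens,)
    return weights @ tokens, weights

def hitplan_sketch(thread_tokens, d=16, seed=0):
    """Token-level attention builds one vector per post, then post-level
    self-attention lets those post vectors interact across the thread."""
    rng = np.random.default_rng(seed)
    query = rng.standard_normal(d)  # stand-in for a learned query vector
    post_vecs = np.stack([attention_pool(t, query)[0] for t in thread_tokens])
    scores = post_vecs @ post_vecs.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ post_vecs

rng = np.random.default_rng(1)
thread = [rng.standard_normal((n_tok, 16)) for n_tok in (5, 8, 3)]  # 3 posts
out = hitplan_sketch(thread)
print(out.shape)  # (3, 16)
```

Keeping both levels of attention weights around is what makes the model's evidence inspectable at both token and post granularity.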

  • Empirical Validation:

The models demonstrated their effectiveness through testing on multiple datasets, improving on prior state-of-the-art results for rumor detection. In particular, the best-performing variant raised accuracy on the Twitter15 dataset to 85.2%, with gains on Twitter16 as well, illustrating the capability of PLAN and its variants.

Implications and Future Directions

This research significantly advances the field of automated rumor detection on social media by demonstrating the effectiveness of transformer networks in modeling complex user interaction dynamics. The attention mechanism provides interpretability, allowing researchers and practitioners to understand the model's prediction processes on both token-level and post-level.
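In practice, such explanations can be read off the attention weights themselves, for example by ranking the replies a claim attends to most strongly. A minimal sketch of that idea, not the paper's exact procedure:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def explain_prediction(posts, claim_idx=0, top_k=2):
    """Rank replies by how strongly the claim post attends to them; the
    highest-weighted posts serve as a post-level explanation."""
    d = posts.shape[1]
    scores = posts @ posts.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)[claim_idx]  # the claim's attention row
    ranked = np.argsort(weights)[::-1]
    return [(int(i), float(weights[i])) for i in ranked[:top_k]]

posts = np.random.default_rng(0).standard_normal((5, 8))
explanation = explain_prediction(posts)  # [(post index, attention weight), ...]
```

The same trick applies one level down: token-level weights highlight which words within a highly attended reply drove the decision.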

However, the study also reveals that while the improved models significantly outperform existing methodologies, challenges remain, especially in cross-domain applications, as evidenced by performance variability on the PHEME dataset. Future studies could explore integration with user credibility information and multi-modal verification against external trustworthy sources.

The proposed models serve as a foundational advancement in understanding social media dynamics and their implications for rumor propagation, highlighting potential paths for further exploration in enhancing automated verification systems in the field of AI. Continued research could focus on optimizing event-agnostic features to handle diverse datasets, enriching the capability for broad application across different social media contexts.
