
AggregHate: An Efficient Aggregative Approach for the Detection of Hatemongers on Social Platforms (2409.14464v1)

Published 22 Sep 2024 in cs.CL and cs.SI

Abstract: Automatic detection of online hate speech serves as a crucial step in the detoxification of the online discourse. Moreover, accurate classification can promote a better understanding of the proliferation of hate as a social phenomenon. While most prior work focuses on the detection of hateful utterances, we argue that focusing on the user level is as important, albeit challenging. In this paper we consider a multimodal aggregative approach for the detection of hate-mongers, taking into account the potentially hateful texts, user activity, and the user network. We evaluate our methods on three unique datasets: X (Twitter), Gab, and Parler, showing that processing a user's texts in her social context significantly improves the detection of hate mongers, compared to previously used text and graph-based methods. Our method can then be used to improve the classification of coded messages, dog-whistling, and racial gas-lighting, as well as inform intervention measures. Moreover, our approach is highly efficient even for very large datasets and networks.

Summary

  • The paper introduces a multimodal aggregative method that shifts analysis from individual posts to user-level behavior for more precise hate speech detection.
  • It demonstrates that combining textual, relational, and distributional insights significantly boosts detection accuracy, outperforming traditional models.
  • The approach offers practical benefits for real-time moderation and targeted interventions, enhancing efforts to curb online hate on varied social platforms.

AggregHate: An Efficient Aggregative Approach for the Detection of Hatemongers on Social Platforms

The paper, "AggregHate: An Efficient Aggregative Approach for the Detection of Hatemongers on Social Platforms," by Tom Marzea, Abraham Israeli, and Oren Tsur, presents novel methodologies for detecting malicious actors on social media. By shifting from post-level to user-level analysis, the authors argue, detection of hate speech can be significantly improved. This transition considers user activity within their social networks, enabling robust identification of patterns related to hate speech. This paper is contextualized within a growing body of research on automated hate speech detection prompted by the rising instances of online hatred targeting minorities.

Methodology

The authors introduce a multimodal aggregative approach that leverages multiple data sources, namely the textual content of user posts, user activity, and their network connections. These methods are designed to address several challenges inherent to post-level classification, such as context loss and difficulty in identifying nuanced forms of hate speech like coded language and racial gas-lighting. The three aggregative methods proposed are:

  1. Naive Aggregation with a Fixed Threshold: A user is flagged as a hatemonger when the number of their posts classified as hate speech exceeds a fixed threshold.
  2. Relational Aggregation: It considers the user's social network context by aggregating information from the user’s followers and followees.
  3. Distributional Aggregation: This approach uses both bin-based and quantile-based representations of the distribution of hate scores across the user's posts.
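The naive and distributional aggregations above can be sketched roughly as follows. This is an illustrative reading of the methods, not the paper's exact formulation: `hate_scores` is assumed to be a list of per-post classifier probabilities for one user, and the thresholds and bin edges are placeholder values.

```python
from bisect import bisect_right
from statistics import quantiles

def naive_aggregate(hate_scores, score_thresh=0.5, count_thresh=3):
    """Flag a user when the number of posts scoring above
    score_thresh reaches a fixed count threshold."""
    hateful_posts = sum(1 for s in hate_scores if s > score_thresh)
    return hateful_posts >= count_thresh

def bin_aggregate(hate_scores, edges=(0.2, 0.4, 0.6, 0.8)):
    """Bin-based representation: a normalized histogram of the
    user's per-post hate scores over fixed bin edges, yielding a
    fixed-length user-level feature vector."""
    counts = [0] * (len(edges) + 1)
    for s in hate_scores:
        counts[bisect_right(edges, s)] += 1
    total = len(hate_scores) or 1
    return [c / total for c in counts]

def quantile_aggregate(hate_scores, n=4):
    """Quantile-based representation: cut points of the user's
    score distribution (n - 1 values)."""
    return quantiles(hate_scores, n=n)
```

For example, a user with scores `[0.05, 0.1, 0.7, 0.9, 0.85]` has three posts above 0.5, so `naive_aggregate` flags them, while `bin_aggregate` summarizes the same posts as the vector `[0.4, 0.0, 0.0, 0.2, 0.4]`.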

These aggregative methods are combined into a multimodal classification model that aims to improve detection accuracy while maintaining computational efficiency.
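One way to picture the relational step and the final combination is the minimal sketch below: a user's own mean hate score is mixed with the mean score of their network neighbors, and the pieces are concatenated into a single user-level feature vector. The dictionary-based graph, the simple mean, and the mixing weight `alpha` are all assumptions made for illustration, not the paper's method.

```python
from statistics import mean

def relational_aggregate(user, user_scores, graph, alpha=0.5):
    """Mix a user's own mean hate score with the mean score of
    their neighbors (followers/followees). `graph` maps each user
    id to a list of neighbor ids; `user_scores` maps each user id
    to that user's per-post hate scores."""
    own = mean(user_scores[user]) if user_scores[user] else 0.0
    neighbor_means = [mean(user_scores[n])
                      for n in graph.get(user, [])
                      if user_scores.get(n)]
    social = mean(neighbor_means) if neighbor_means else own
    return alpha * own + (1 - alpha) * social

def user_feature_vector(user, user_scores, graph, dist_features):
    """Concatenate textual (own mean score), relational, and
    distributional signals into one user-level feature vector
    for a downstream classifier."""
    own = mean(user_scores[user]) if user_scores[user] else 0.0
    relational = relational_aggregate(user, user_scores, graph)
    return [own, relational, *dist_features]
```

A downstream classifier (e.g. logistic regression over these vectors) would then make the final hatemonger prediction; the design keeps each modality cheap to compute per user, which is consistent with the paper's emphasis on efficiency at scale.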

Experiments and Evaluation

The paper evaluates these methods using data from three distinct social platforms (Twitter, Gab, and Parler) that each present unique challenges. The results consistently demonstrate that the multimodal aggregative approach outperforms traditional methods such as DeGroot's Diffusion and various Graph Neural Networks (GNNs) including GCN, GAT, GraphSAGE, and AGNN.

Key Findings:

  • The multimodal aggregative approach achieved significantly higher F1 scores across all datasets compared to the baseline models.
  • Relational aggregation was particularly effective in the Parler dataset, suggesting the importance of social context in identifying hate speech within specific platforms.
  • The distributional aggregation (both bin-based and quantile-based) performed well, highlighting the utility of a nuanced analysis of the distribution of hate scores across a user's posts.

Detailed results show that the proposed methods better handle the subtleties of hate speech by aggregating weak signals across multiple posts, signals that post-level methods often miss. For example, even ambiguous or coded hate speech, when viewed in aggregate, reveals behavioral patterns more clearly, leading to higher detection rates.

Implications and Future Directions

The paper's findings have substantial implications for both practical applications and theoretical development in AI and social computing. By efficiently identifying hate-mongers, these methods have direct utility in real-time moderation, user management, and intervention strategies within social platforms. For instance, the ability to detect users who subtly spread hate speech can lead to more effective moderation policies and the crafting of tailored interventions.

Theoretically, the results foster a deeper understanding of how hate speech spreads within online communities. Shifting focus from isolated posts to user behavior and network context provides a more holistic perspective on digital hate phenomena.

Future Research Directions:

  1. Integration and Optimization: Further work is needed to refine the integration of the multiple aggregation methods for different social platforms, as their unique characteristics may influence model performance.
  2. Contextual Nuances: Enhancing the model's ability to interpret nuanced and emerging forms of hate speech is crucial; the training data must be continuously updated to cover newly coined coded language and trolling tactics.
  3. Robustness and Scalability: Exploring the robustness of these methods against adversarial attacks and ensuring scalability to monitor larger networks in real-time is another critical area of future research.

Conclusion

The work presented in “AggregHate: An Efficient Aggregative Approach for the Detection of Hatemongers on Social Platforms” offers significant enhancements over traditional text-based and graph-based methods. By combining textual analysis with relational and distributional aggregative methods, the authors provide a more comprehensive approach to detect and understand hate speech online. The application of these methods in real-world scenarios can greatly aid in creating safer online environments, fostering inclusive and respectful digital spaces.

Overall, this paper advances the field of computational social science and AI by introducing efficient, scalable, and contextually aware methods for tackling the pervasive problem of online hate speech. This work sets a foundation for new approaches in multimodal hate speech detection and opens several avenues for future research to build upon.
