Characterizing and Detecting Hateful Users on Twitter (1803.08977v1)

Published 23 Mar 2018 in cs.CY and cs.SI

Abstract: Most current approaches to characterize and detect hate speech focus on content posted in Online Social Networks. They face shortcomings in collecting and annotating hateful speech due to the incompleteness and noisiness of OSN text and the subjectivity of hate speech. These limitations are often circumvented with constraints that oversimplify the problem, such as considering only tweets containing hate-related words. In this work we partially address these issues by shifting the focus towards users. We develop and employ a robust methodology to collect and annotate hateful users which does not depend directly on lexicon and where the users are annotated given their entire profile. This results in a sample of Twitter's retweet graph containing 100,386 users, out of which 4,972 were annotated. We also collect the users who were banned in the three months that followed the data collection. We show that hateful users differ from normal ones in terms of their activity patterns, word usage, and network structure. We obtain similar results comparing the neighbors of hateful vs. neighbors of normal users and also suspended users vs. active users, increasing the robustness of our analysis. We observe that hateful users are densely connected, and thus formulate the hate speech detection problem as a task of semi-supervised learning over a graph, exploiting the network of connections on Twitter. We find that a node embedding algorithm, which exploits the graph structure, outperforms content-based approaches for the detection of both hateful (95% AUC vs 88% AUC) and suspended users (93% AUC vs 88% AUC). Altogether, we present a user-centric view of hate speech, paving the way for better detection and understanding of this relevant and challenging issue.

Citations (256)

Summary

  • The paper introduces a novel user-centric methodology that analyzes profile metadata and retweet networks to detect hateful users.
  • It reveals that hateful accounts are newer, more active, and form densely connected clusters, achieving a 95% AUC in detection performance.
  • The study demonstrates that network analysis outperforms traditional content-based methods, offering a robust approach for automated moderation.

Characterizing and Detecting Hateful Users on Twitter: An Analytical Approach

Researchers Manoel Horta Ribeiro et al. explore the characteristics and detection of hateful users on Twitter. Their work shifts the focus from the commonly examined content-level analysis of hate speech to a user-centric approach, aiming to fill a recognized gap in hate speech detection methods by examining user behaviors and connections rather than textual content alone.

Methodology and Data Collection

The researchers address traditional challenges in detecting hate speech, such as data incompleteness and the subjectivity of what constitutes hate speech, by concentrating on user profiles instead of isolated tweets. The study samples 100,386 users from Twitter's retweet graph, of which 4,972 were manually annotated as either hateful or non-hateful; annotators applied Twitter's guidelines on hateful conduct and labeled each user based on their entire profile. The researchers also recorded which users were suspended in the three months following data collection. This approach yielded a rich dataset capturing diverse user behaviors and the social connections relevant to hate speech.
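To make the graph-sampling step concrete, the sketch below builds a directed retweet graph and draws a connected node sample from it via a generic random walk. This is only an illustration of the kind of pipeline involved: the synthetic edge list, the `random_walk_sample` helper, and the walk-with-restart strategy are assumptions for this example, not the paper's exact crawling or sampling procedure.

```python
import random
import networkx as nx

# Toy stand-in for crawled data: each tuple means "user u retweeted user v".
# In the paper's setting these edges would come from the Twitter API.
random.seed(42)
users = range(1000)
retweets = [(random.choice(users), random.choice(users)) for _ in range(5000)]

# Build a directed retweet graph.
G = nx.DiGraph()
G.add_edges_from((u, v) for u, v in retweets if u != v)

def random_walk_sample(graph, seed, steps=2000):
    """Collect a node sample by walking the undirected view of the graph.

    A generic random-walk sampler with restarts at dead ends; the paper's
    actual sampling procedure involves additional details.
    """
    undirected = graph.to_undirected()
    visited = {seed}
    current = seed
    for _ in range(steps):
        neighbors = list(undirected.neighbors(current))
        if not neighbors:  # dead end: restart from the seed
            current = seed
            continue
        current = random.choice(neighbors)
        visited.add(current)
    return graph.subgraph(visited).copy()

seed = max(G.nodes, key=G.degree)  # start from a well-connected user
sample = random_walk_sample(G, seed)
print(f"sampled {sample.number_of_nodes()} users, "
      f"{sample.number_of_edges()} retweet edges")
```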

Results and Analysis

A substantive contribution of this paper is its comprehensive characterization of hateful users. Key findings include that hateful users tend to have newer accounts, are more active, and do not exhibit typical spam behaviors such as excessive hashtag or URL use. They use fewer words from hate-related lexicons but show distinct vocabulary patterns, often related to emotion and politics. Additionally, hateful users are densely interconnected within the retweet network, contradicting the stereotype of the "lone wolf" and instead indicating that they often operate within tightly knit clusters.
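A minimal sketch of the kind of per-user activity features such a comparison rests on (account age, posting rate, hashtag and URL usage) is shown below. The data frame and column names are hypothetical; the paper derives comparable features from profile metadata and each user's recent tweets.

```python
import pandas as pd

# Hypothetical per-user table, purely illustrative.
df = pd.DataFrame({
    "user_id":       [1, 2, 3],
    "is_hateful":    [True, False, False],
    "account_age_d": [120, 2300, 900],    # days since account creation
    "tweets":        [9000, 4000, 1200],  # total statuses posted
    "hashtags":      [300, 900, 150],     # hashtags used across tweets
    "urls":          [120, 700, 80],      # URLs shared across tweets
})

# Normalized rates, comparable across users with different activity levels.
df["tweets_per_day"]     = df["tweets"] / df["account_age_d"]
df["hashtags_per_tweet"] = df["hashtags"] / df["tweets"]
df["urls_per_tweet"]     = df["urls"] / df["tweets"]

# Group-level comparison mirroring the paper's finding: hateful accounts
# tend to be newer and more active, without spam-like hashtag/URL rates.
print(df.groupby("is_hateful")[["account_age_d", "tweets_per_day",
                                "hashtags_per_tweet", "urls_per_tweet"]].mean())
```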

Quantitatively, the researchers employ a node embedding algorithm to exploit the structure of the retweet network. This approach outperforms traditional content-based classifiers in detecting both hateful users (95% AUC vs. 88% AUC) and suspended users (93% AUC vs. 88% AUC). The results suggest that network structure and user activity metrics provide a robust basis for detecting potentially harmful users, beyond what can be achieved by analyzing content alone.
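The sketch below illustrates the overall pipeline of embedding nodes and scoring users, substituting a simpler DeepWalk-style embedding (random walks fed to Word2Vec) and logistic regression for the paper's node embedding method. The planted-partition toy graph, walk parameters, and classifier are all assumptions for illustration, chosen to echo the finding that hateful users form dense clusters.

```python
import random
import networkx as nx
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

random.seed(0)

# Toy graph with one dense block standing in for the densely connected
# hateful cluster and one sparse block for normal users.
G = nx.planted_partition_graph(2, 100, p_in=0.15, p_out=0.01, seed=0)
labels = {n: int(n < 100) for n in G.nodes}  # first block = "hateful" role

def walks(graph, num_walks=10, length=20):
    """DeepWalk-style corpus: each walk is a 'sentence' of node ids."""
    corpus = []
    for _ in range(num_walks):
        for node in graph.nodes:
            walk, cur = [str(node)], node
            for _ in range(length - 1):
                nbrs = list(graph.neighbors(cur))
                if not nbrs:
                    break
                cur = random.choice(nbrs)
                walk.append(str(cur))
            corpus.append(walk)
    return corpus

# Learn 32-dimensional node embeddings from the walk corpus.
w2v = Word2Vec(walks(G), vector_size=32, window=5, min_count=0, sg=1, epochs=5)
X = np.array([w2v.wv[str(n)] for n in G.nodes])
y = np.array([labels[n] for n in G.nodes])

# Semi-supervised flavor: train on a small labeled subset, score the rest.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.2, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```

Because the embedding is learned from graph structure alone, the classifier can score users even when their text is sparse or evasive, which is the core advantage the paper reports over content-based models.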

Implications

This research provides valuable insights into the dynamics of hate speech on social networks. It demonstrates a viable and effective approach for detecting hateful users by leveraging network analysis techniques. Practically, this method encourages the implementation of detection systems that are robust to linguistic subtleties such as sarcasm, code words, and the informal language prevalent in social media contexts.

Theoretically, this paper highlights the importance of considering user-centric measures, such as user influence and connectivity, in understanding the spread of harm across networks. The model's success indicates that focusing on the social graph and user interactions offers a complementary layer of analysis to content-based models, providing a fuller picture of user-driven phenomena in social networks.

Future Developments

Future research could expand on this work by applying similar user-centric methods to other platforms or incorporating additional machine learning methodologies to refine detection processes. Moreover, research could explore the ethical dimensions of such user-focused models, particularly concerning privacy and the potential for misclassification in automated moderation systems.

Additionally, exploring the interplay between user influence and the dissemination of hate speech across networks can provide further insights into the dynamics of online communities and aid in the development of policies targeting hate speech on digital platforms. Leveraging user-centric networks provides a necessary pivot from content-based censorship and holds potential for more nuanced and context-aware moderation practices.