
Antisocial Behavior in Online Discussion Communities (1504.00680v2)

Published 2 Apr 2015 in cs.SI, cs.CY, stat.AP, and stat.ML

Abstract: User contributions in the form of posts, comments, and votes are essential to the success of online communities. However, allowing user participation also invites undesirable behavior such as trolling. In this paper, we characterize antisocial behavior in three large online discussion communities by analyzing users who were banned from these communities. We find that such users tend to concentrate their efforts in a small number of threads, are more likely to post irrelevantly, and are more successful at garnering responses from other users. Studying the evolution of these users from the moment they join a community up to when they get banned, we find that not only do they write worse than other users over time, but they also become increasingly less tolerated by the community. Further, we discover that antisocial behavior is exacerbated when community feedback is overly harsh. Our analysis also reveals distinct groups of users with different levels of antisocial behavior that can change over time. We use these insights to identify antisocial users early on, a task of high practical importance to community maintainers.

Citations (306)

Summary

  • The paper develops a predictive model with over 80% AUC that identifies future banned users from their early posts.
  • The longitudinal analysis shows that these users' posting behavior degrades over time, and that overly harsh community feedback can worsen antisocial conduct.
  • The study uncovers distinct subcategories among banned users based on linguistic patterns and deletion rates, informing better moderation strategies.

Overview of Antisocial Behavior in Online Discussion Communities

The paper "Antisocial Behavior in Online Discussion Communities" by Cheng, Danescu-Niculescu-Mizil, and Leskovec provides a comprehensive analysis of antisocial behavior on online platforms such as CNN.com, Breitbart.com, and IGN.com. By leveraging a vast dataset comprising millions of posts and user interactions, the authors investigate the characteristics and dynamics of antisocial behavior exhibited by users who are eventually banned from these communities.

Initially, the paper demonstrates that Future-Banned Users (FBUs) exhibit identifiable traits that distinguish them from Never-Banned Users (NBUs). They tend to concentrate their activity in a small number of threads, write less readable posts, and use language more likely to incite conflict. FBUs are also more successful at eliciting responses from other users, suggesting a penchant for provoking protracted discussions.

The longitudinal analysis of FBUs reveals that these users do not start as markedly different from NBUs in terms of behavior. However, over time, they post content that progressively worsens in quality when compared to the overall community standards. Simultaneously, community tolerance for such users decreases, resulting in a higher rate of their posts being deleted. The research indicates that harsh feedback can aggravate antisocial behavior, raising the concern that current moderation practices may sometimes counterproductively encourage rather than diminish trolling.

The authors also uncover distinct subcategories within FBUs based on the rate of post deletions. Some users maintain a consistently high deletion rate throughout their tenure, while others see deletions rise only toward the later periods of their activity in the community. This typology highlights the nuanced nature of antisocial behavior and the varied reactions it provokes from the community.
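The paper's exact method for separating these subcategories is not detailed here; as a minimal sketch, one could split a user's chronologically ordered deletion flags into early and late halves and compare the rates. The threshold value and the helper names below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: grouping banned users by deletion-rate trajectory.
# Input: a list of 0/1 flags (1 = post deleted), ordered by posting time.
# The 0.4 threshold is an illustrative choice, not a value from the paper.

def deletion_rate(flags):
    """Fraction of posts that were deleted."""
    return sum(flags) / len(flags) if flags else 0.0

def classify_trajectory(flags, high=0.4):
    """Label a user as 'consistently high', 'late rising', or 'low'."""
    half = len(flags) // 2
    early = deletion_rate(flags[:half])
    late = deletion_rate(flags[half:])
    if early >= high and late >= high:
        return "consistently high"
    if late >= high:  # deletions concentrate just before the ban
        return "late rising"
    return "low"

print(classify_trajectory([1, 1, 0, 1, 1, 1]))  # consistently high
print(classify_trajectory([0, 0, 0, 1, 1, 1]))  # late rising
```

A real analysis would likely compare deletion rates across finer-grained time windows, but the two-half split captures the qualitative distinction the paper draws.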

A significant contribution of this research is a predictive model that identifies FBUs early in their community life. Using features derived from only a user's initial posts, the model predicts whether that user will eventually be banned, achieving an area under the ROC curve (AUC) above 0.8. This capability holds substantial practical value for moderators, who can leverage the model to proactively manage user behavior and maintain community health.
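The summary does not specify the paper's exact features or classifier, so the following is only an illustrative sketch: a tiny hand-rolled logistic regression trained on two hypothetical early-post features (early deletion rate and replies elicited per post) with made-up data. Feature choices, training data, and hyperparameters are all assumptions.

```python
# Illustrative sketch of an early-ban classifier. The paper reports >0.8 AUC
# from a user's first posts; the features and data below are hypothetical.

import math

def sigmoid(z):
    """Logistic function mapping a real score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Stochastic gradient descent for logistic regression.

    X: list of feature vectors, y: list of 0/1 labels (1 = later banned).
    Returns learned weights and bias.
    """
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def ban_probability(w, b, features):
    """Predicted probability that a user will eventually be banned."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, features)) + b)

# Hypothetical users: [early deletion rate, replies per post]
X = [[0.60, 2.0], [0.50, 1.5], [0.10, 0.5], [0.05, 0.8]]
y = [1, 1, 0, 0]
w, b = train(X, y)
print(f"high-risk user: {ban_probability(w, b, [0.55, 1.8]):.2f}")
print(f"low-risk user:  {ban_probability(w, b, [0.05, 0.5]):.2f}")
```

In practice one would evaluate such a model with ROC AUC on held-out users; this sketch only shows the basic shape of mapping early-post features to a ban probability.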

The findings have both theoretical and practical implications. On a theoretical level, they advance our understanding of how antisocial behavior develops and evolves in online environments. Practically, the research proposes viable approaches to moderating and potentially curtailing antisocial behavior by enabling early identification of likely FBUs and timely intervention.

Looking ahead, future work could refine the predictive models by integrating finer measures of linguistic cues and context, potentially considering cross-platform dynamics to understand how users might transfer antisocial habits across communities. Moreover, investigating differing community strategies on user redemption and rehabilitation could yield valuable insights into better moderation practices.

In summary, this paper elevates our understanding of antisocial behavior in online communities through large-scale data analysis, providing nuanced observations and effective tools to address this pervasive issue on digital platforms.