- The paper develops a predictive model that identifies future-banned users from only their first few posts, achieving over 80% area under the ROC curve (AUC).
- The longitudinal analysis shows that these users' behavior worsens over time, and that harsh community feedback can aggravate rather than correct antisocial conduct.
- The study uncovers distinct subcategories among banned users based on their post-deletion trajectories, informing better-targeted moderation strategies.
Overview of Antisocial Behavior in Online Discussion Communities
The paper "Antisocial Behavior in Online Discussion Communities" by Cheng, Danescu-Niculescu-Mizil, and Leskovec provides a comprehensive analysis of antisocial behavior on online platforms such as CNN.com, Breitbart.com, and IGN.com. By leveraging a vast dataset comprising millions of posts and user interactions, the authors investigate the characteristics and dynamics of antisocial behavior exhibited by users who are eventually banned from these communities.
The paper first demonstrates that Future-Banned Users (FBUs) exhibit identifiable traits that distinguish them from Never-Banned Users (NBUs): they concentrate their activity in fewer threads, write less readable posts, and use language more likely to provoke conflict. FBUs are also more successful at eliciting replies, suggesting a penchant for drawing others into protracted arguments.
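As a minimal sketch of how two of these signals could be quantified, the Python snippet below computes a standard readability score (the Automated Readability Index, one of several metrics that could proxy for the paper's readability measure) and the share of a user's posts concentrated in a single thread. The per-post dict schema is a hypothetical stand-in for the real data.

```python
import re
from collections import Counter

def automated_readability_index(text: str) -> float:
    """Automated Readability Index (ARI); higher values mean harder-to-read text.
    ARI = 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43
    """
    words = re.findall(r"[A-Za-z0-9']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0
    chars = sum(len(w) for w in words)
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / len(sentences)) - 21.43

def thread_concentration(posts: list[dict]) -> float:
    """Fraction of a user's posts that fall in their single most-used thread.
    Each post is a dict with a 'thread_id' key (hypothetical schema)."""
    if not posts:
        return 0.0
    counts = Counter(p["thread_id"] for p in posts)
    return max(counts.values()) / len(posts)
```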
The longitudinal analysis of FBUs reveals that these users do not start out markedly different from NBUs. Over time, however, the quality of their posts declines relative to community norms. Simultaneously, the community's tolerance for them decreases, and a growing share of their posts is deleted. The research indicates that harsh feedback can aggravate antisocial behavior, raising the concern that current moderation practices may sometimes encourage rather than curb trolling.
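To make the trajectory concrete, here is a hedged sketch (not necessarily the authors' exact methodology) of measuring a user's deletion rate across stages of their life in the community, assuming each post carries a timestamp and a boolean deletion flag:

```python
import numpy as np

def deletion_rate_by_stage(posts: list[dict], n_stages: int = 4) -> list[float]:
    """Split a user's posts chronologically into n_stages equal bins and return
    the fraction of deleted posts in each bin; a rising sequence indicates
    escalating community censure. Posts are dicts with 'timestamp' and a
    boolean 'deleted' flag (hypothetical schema)."""
    ordered = sorted(posts, key=lambda p: p["timestamp"])
    bins = np.array_split(ordered, n_stages)
    return [float(np.mean([p["deleted"] for p in b])) if len(b) else float("nan")
            for b in bins]
```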
The authors also identify distinct subcategories within FBUs based on their post-deletion trajectories: some maintain a consistently high rate of deletions throughout their tenure, while for others the rate rises only toward the end of their time in the community. This typology of antisocial users highlights the nuanced nature of such behavior and the varied ways communities react to it.
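One plausible way to recover such a typology from data (not necessarily the authors' exact method) is to cluster per-user deletion-rate trajectories, for instance the output of `deletion_rate_by_stage` above; the values here are illustrative only:

```python
import numpy as np
from sklearn.cluster import KMeans

# One row per FBU: deletion rate in each quarter of their posting history.
trajectories = np.array([
    [0.55, 0.60, 0.65, 0.70],  # consistently high deletion rate
    [0.05, 0.10, 0.40, 0.75],  # deletion rate rising late
    [0.50, 0.55, 0.60, 0.65],
    [0.10, 0.15, 0.45, 0.70],
])

# Two clusters roughly separate "steady" users from "late-rising" ones.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(trajectories)
print(labels)
```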
A significant contribution of this research is a predictive model that identifies FBUs early in their community life. Using features drawn from a user's first few posts, the model predicts whether that user will eventually be banned with over 80% area under the ROC curve (AUC). This capability has substantial practical value for moderators, who could use such a model to intervene early and maintain community health.
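A minimal sketch of this train-and-evaluate setup follows, using synthetic stand-in data; the feature names and the random-forest choice are assumptions consistent with, but not guaranteed to match, the paper's exact pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# X: one feature vector per user, derived from their first few posts
# (e.g., readability, thread concentration, replies received, early deletions);
# y: 1 if the user was eventually banned. Synthetic stand-in data below.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=1000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")  # the paper reports > 0.80 AUC on its real data
```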
The findings have both theoretical and practical implications. Theoretically, they advance our understanding of how antisocial behavior develops in online environments. Practically, the research points toward early identification of likely FBUs and timely, measured intervention as a viable way to curtail antisocial behavior.
Looking ahead, future work could refine the predictive models with finer-grained linguistic and contextual features, and could examine cross-platform dynamics to understand how users carry antisocial habits across communities. Investigating how different communities handle user redemption and rehabilitation could also yield valuable insights into better moderation practices.
In summary, this paper deepens our understanding of antisocial behavior in online communities through large-scale data analysis, offering both nuanced observations and practical tools for addressing this pervasive issue on digital platforms.