
Community Interaction and Conflict on the Web (1803.03697v1)

Published 9 Mar 2018 in cs.SI, cs.CL, and cs.HC

Abstract: Users organize themselves into communities on web platforms. These communities can interact with one another, often leading to conflicts and toxic interactions. However, little is known about the mechanisms of interactions between communities and how they impact users. Here we study intercommunity interactions across 36,000 communities on Reddit, examining cases where users of one community are mobilized by negative sentiment to comment in another community. We show that such conflicts tend to be initiated by a handful of communities---less than 1% of communities start 74% of conflicts. While conflicts tend to be initiated by highly active community members, they are carried out by significantly less active members. We find that conflicts are marked by formation of echo chambers, where users primarily talk to other users from their own community. In the long-term, conflicts have adverse effects and reduce the overall activity of users in the targeted communities. Our analysis of user interactions also suggests strategies for mitigating the negative impact of conflicts---such as increasing direct engagement between attackers and defenders. Further, we accurately predict whether a conflict will occur by creating a novel LSTM model that combines graph embeddings, user, community, and text features. This model can be used to create early-warning systems for community moderators to prevent conflicts. Altogether, this work presents a data-driven view of community interactions and conflict, and paves the way towards healthier online communities.

Citations (323)

Summary

  • The paper identifies that less than 1% of Reddit communities trigger approximately 74% of conflicts, providing a nuanced understanding of negative online interactions.
  • The paper employs a socially-primed LSTM model that combines graph embeddings with user, community, and textual features to achieve an AUC of 0.76 in predicting conflicts.
  • The paper demonstrates that increased direct engagement between attackers and defenders can lessen echo chamber effects and improve community resilience.

An Analysis of Community Interaction and Conflict on the Web: Insights from Reddit

The paper "Community Interaction and Conflict on the Web" presents an extensive analysis of intercommunity interactions on the Reddit platform, elucidating the dynamics and impacts of such interactions, especially when they take a negative turn. The authors take a data-driven approach to a vast dataset comprising 1.8 billion comments from over 100 million users across 36,000 Reddit communities, providing a granular view into how communities interact and engage in conflicts on the web.

A salient discovery from this work is that a minuscule proportion of communities is responsible for the majority of conflicts: less than 1% of communities initiate about 74% of the conflicts. This finding underscores the presence of community hubs that are prolific in initiating negative interactions. These interactions are frequently marked by the formation of echo chambers, in which members predominantly communicate with others from their own community. The echo chamber effect is a critical marker of conflicts and has a pronounced influence on user behavior and community dynamics.
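The concentration statistic above can be sketched as a simple aggregation over conflict events. The snippet below is a minimal illustration, not the paper's pipeline; the function name and the toy community labels (e.g. `r/agitator`) are hypothetical, and the toy data is constructed so that one community starts 74 of 100 conflicts, mirroring the headline statistic in miniature.

```python
from collections import Counter

def conflict_concentration(initiators, top_frac=0.01):
    """Share of all conflicts started by the top `top_frac` of initiating
    communities. `initiators` lists the initiating community of each conflict."""
    counts = Counter(initiators)
    k = max(1, int(len(counts) * top_frac))  # always keep at least one community
    top_total = sum(c for _, c in counts.most_common(k))
    return top_total / len(initiators)

# Hypothetical toy data: one prolific community starts 74 of 100 conflicts,
# and 26 other communities start one conflict each.
events = ["r/agitator"] * 74 + [f"r/community{i}" for i in range(26)]
print(conflict_concentration(events))  # 0.74
```

On real data the event list would come from the mobilization events the authors detect (cross-community hyperlinks with negative sentiment); the aggregation itself is the same counting exercise.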

In terms of implications, the paper identifies that conflicts have adverse long-term impacts on targeted communities, often reducing overall user activity within them. The authors describe a phenomenon of "colonization," in which attackers become more prevalent in the target community after the conflict while defenders reduce their participation.

Additionally, the authors suggest strategies to mitigate such adverse impacts based on their analysis of user interactions. Their findings indicate that increased direct engagement between attackers and defenders can reduce the negative effects of conflict: such engagement deters the formation of echo chambers, suggesting that an involved defense strategy may be more beneficial than isolation.

On the technical front, the authors developed a novel predictive model—a socially-primed Long Short-Term Memory (LSTM) model—that effectively forecasts potential conflicts. This model amalgamates graph embeddings with user, community, and textual features to predict mobilization events, achieving an impressive AUC of 0.76. This predictive capability could be instrumental in informing early-warning systems for community moderators, enabling them to respond proactively to avert the adverse effects of intercommunity conflicts.
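The paper does not spell out its architecture at this level of detail, so the following is only a toy, untrained sketch of the general idea: an LSTM encodes the text of a post, and the resulting hidden state is concatenated with user, community, and graph-embedding features before a logistic output scores conflict risk. All dimensions, names, and weights here are hypothetical.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyLSTM:
    """Minimal pure-Python LSTM encoder (illustrative, untrained)."""

    def __init__(self, input_dim, hidden_dim):
        self.hidden_dim = hidden_dim
        def mat(rows, cols):
            return [[random.uniform(-0.1, 0.1) for _ in range(cols)]
                    for _ in range(rows)]
        # One weight matrix and bias per gate: input, forget, output, candidate.
        self.W = {g: mat(hidden_dim, input_dim + hidden_dim) for g in "ifoc"}
        self.b = {g: [0.0] * hidden_dim for g in "ifoc"}

    def step(self, x, h, c):
        z = x + h  # concatenate current input with previous hidden state
        gate = {}
        for g in "ifoc":
            pre = [sum(w * v for w, v in zip(row, z)) + bias
                   for row, bias in zip(self.W[g], self.b[g])]
            gate[g] = [math.tanh(p) if g == "c" else sigmoid(p) for p in pre]
        c = [f * c_prev + i * cand for f, c_prev, i, cand
             in zip(gate["f"], c, gate["i"], gate["c"])]
        h = [o * math.tanh(cv) for o, cv in zip(gate["o"], c)]
        return h, c

    def encode(self, sequence):
        h = [0.0] * self.hidden_dim
        c = [0.0] * self.hidden_dim
        for x in sequence:
            h, c = self.step(x, h, c)
        return h

def predict_conflict(post_embeds, user_feats, community_feats, graph_embed,
                     lstm, w_out, b_out=0.0):
    """Encode the post text, append social features, score with a logistic layer."""
    h = lstm.encode(post_embeds)
    features = h + user_feats + community_feats + graph_embed
    return sigmoid(sum(w * f for w, f in zip(w_out, features)) + b_out)

# Hypothetical toy inputs: three 4-d word embeddings plus made-up social features.
lstm = TinyLSTM(input_dim=4, hidden_dim=4)
posts = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
w_out = [random.uniform(-0.1, 0.1) for _ in range(4 + 2 + 1 + 2)]
score = predict_conflict(posts, [0.2, 0.5], [0.1], [0.3, -0.2], lstm, w_out)
print(0.0 < score < 1.0)  # True
```

In practice one would use a trained framework implementation (e.g. a PyTorch `nn.LSTM`) and learned graph embeddings; the sketch only shows how the heterogeneous feature types are combined into a single conflict-probability score.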

The paper’s findings have practical implications for the design of community management practices on large-scale platforms. These insights might guide moderators in pinpointing communities or users that exhibit tendencies towards initiating conflict. The capability to forecast potential conflicts allows for timely interventions that could foster healthier discourse across community boundaries.

From a theoretical standpoint, the paper extends existing understandings of intergroup conflict by transferring concepts from offline interactions to the digital sphere, highlighting the modern complexities of community interactions on large-scale platforms. The paper's methodological approach and findings set a precedent for future research on digital intercommunity dynamics and offer a framework applicable to other platforms with similar community structures.

In sum, the paper provides thought-provoking insights into the dynamics of intercommunity interactions online. The work not only furthers theoretical understanding in computational social science but also informs practical interventions aimed at fostering civility and reducing harm from negative community interactions, marking a meaningful contribution to the discourse on online community management and conflict mitigation. Future developments in AI could leverage these findings to automate aspects of digital community moderation and conflict prediction, supporting healthier online ecosystems.