X-posing Free Speech: Examining the Impact of Moderation Relaxation on Online Social Networks (2404.11465v2)

Published 17 Apr 2024 in cs.SI

Abstract: We investigate the impact of free speech and the relaxation of moderation on online social media platforms using Elon Musk's takeover of Twitter as a case study. By curating a dataset of over 10 million tweets, our study employs a novel framework combining content and network analysis. Our findings reveal a significant increase in the distribution of certain forms of hate content, particularly targeting the LGBTQ+ community and liberals. Network analysis reveals the formation of cohesive hate communities facilitated by influential bridge users, with substantial growth in interactions hinting at increased hate production and diffusion. By tracking the temporal evolution of PageRank, we identify key influencers, primarily self-identified far-right supporters disseminating hate against liberals and woke culture. Ironically, embracing free speech principles appears to have enabled hate speech against the very concept of freedom of expression and free speech itself. Our findings underscore the delicate balance platforms must strike between open expression and robust moderation to curb the proliferation of hate online.

Summary

  • The paper reveals that the relaxation of moderation policies was followed by a marked rise in hate speech, with racist content increasing 50.5% in composition, demonstrating harmful content proliferation.
  • The paper employs content and network analyses on over 10 million tweets to show how hate communities merge and become more interconnected.
  • The paper identifies influential users using a novel Moving PageRank approach, highlighting key nodes that amplify hate speech spread.

Examining the Impact of Moderation Relaxation on Online Social Networks

Introduction

The research explores the consequences of moderation relaxation on social media platforms, using Elon Musk's acquisition of Twitter as a case study. By conducting content and network analyses on a dataset of over 10 million tweets, the study investigates the proliferation of hate speech and the network dynamics that emerge when moderation is relaxed. The findings highlight significant increases in hate speech, particularly targeting marginalized groups, and uncover the formation of cohesive hate communities. These observations underscore the need for social media platforms to balance open discourse with effective moderation to mitigate the spread of harmful content.

Dataset and Methodology

The dataset comprises tweets collected using Twitter's Academic API, covering interactions before and after the takeover. Data collection was exhaustive, encompassing tweets containing ethnic slurs from a curated list as well as the timelines of users deemed to post significant amounts of hateful content. The collection window spans one month before to one month after the acquisition, providing a comprehensive view of user interactions and content dissemination. The analysis employs several models for hate speech classification together with measures of network dynamics.
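
To make the collection step concrete, the sketch below pages through Twitter's v2 full-archive search endpoint, which the Academic API exposed at the time. The bearer token, query string, and date window are placeholders; the paper's exact queries and collection code are not published, so treat this as a minimal sketch of the general approach rather than the authors' pipeline.

```python
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"  # v2 full-archive search
BEARER_TOKEN = "YOUR_ACADEMIC_API_TOKEN"  # placeholder credential

def collect_tweets(query, start, end, max_pages=5):
    """Page through full-archive search results for one query."""
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    params = {
        "query": query,               # e.g. a term from the curated slur list
        "start_time": start,          # ISO-8601, e.g. "2022-09-27T00:00:00Z"
        "end_time": end,
        "max_results": 500,           # full-archive maximum per page
        "tweet.fields": "author_id,created_at,referenced_tweets",
    }
    tweets = []
    for _ in range(max_pages):
        resp = requests.get(SEARCH_URL, headers=headers, params=params)
        resp.raise_for_status()
        body = resp.json()
        tweets.extend(body.get("data", []))
        token = body.get("meta", {}).get("next_token")
        if token is None:
            break
        params["next_token"] = token  # continue from the next results page
    return tweets
```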

Changes in Hate Speech Landscape

The study observes a considerable increase in hate speech, with categories such as racism showing a 50.5% increase in composition post-takeover. The analysis identifies prominent shifts in representative language, with terms like 'n****r' and 'com*ie' increasing in prevalence, highlighting a significant rise in certain forms of hate speech (Figure 1).

Figure 1: 2 weeks before the takeover.
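
The paper's exact method for surfacing these representative terms is not detailed here; one standard technique for ranking terms whose prevalence shifts between two corpora is the log-odds ratio with an informative Dirichlet prior (Monroe et al.'s "Fightin' Words"). The sketch below illustrates that technique on toy token counts, as an assumption about how such shifts could be quantified.

```python
import math
from collections import Counter

def log_odds_with_prior(counts_a, counts_b, prior_scale=1.0):
    """Z-scored log-odds of each term in corpus A vs. corpus B,
    smoothed with a Dirichlet prior built from the combined counts."""
    prior = counts_a + counts_b          # Counter addition merges counts
    n_a, n_b = sum(counts_a.values()), sum(counts_b.values())
    alpha_0 = prior_scale * sum(prior.values())
    scores = {}
    for w in prior:
        a_w = prior_scale * prior[w]
        # Smoothed log-odds of w in each corpus, then their difference
        la = math.log((counts_a[w] + a_w) / (n_a + alpha_0 - counts_a[w] - a_w))
        lb = math.log((counts_b[w] + a_w) / (n_b + alpha_0 - counts_b[w] - a_w))
        delta = la - lb
        # Approximate variance of the estimate, used to z-score it
        var = 1.0 / (counts_a[w] + a_w) + 1.0 / (counts_b[w] + a_w)
        scores[w] = delta / math.sqrt(var)
    return scores

# Toy example: token counts from tweets before vs. after the takeover;
# terms with large positive scores are over-represented post-takeover.
pre = Counter({"policy": 40, "debate": 30, "slur_x": 2})
post = Counter({"policy": 35, "debate": 10, "slur_x": 25})
print(sorted(log_odds_with_prior(post, pre).items(), key=lambda kv: -kv[1]))
```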

The semantic analysis also reveals a notable association between terms such as 'Free Speech' and rhetoric tied to political agendas, illustrating how certain political ideologies gain traction under relaxed moderation rules.

Network Dynamics and Community Formation

Following the takeover, the study shows that hate interaction networks became denser and more interconnected, due in part to influential bridge users. The average growth rate of network nodes doubled post-relaxation, the growth rate of nodes' average degree centrality rose by 144.44%, and the growth rate of the number of connected components fell by 17.3%, suggesting that previously isolated communities merged into larger clusters (Figures 2 and 3).

Figure 2: Rate of growth of the average degree centrality of nodes increases by 144.44% post-takeover.

Figure 3: Rate of growth of the number of connected components decreases by 17.3% post-takeover.

This enhanced connectivity points to a troubling pattern wherein relaxed moderation catalyzes not just more hate content but also its amplified dissemination through extensive networks.
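
Trends like those in Figures 2 and 3 can be tracked with standard graph metrics over time-sliced interaction networks. The sketch below uses networkx on hypothetical daily edge lists to compute the two quantities the figures report, average degree centrality and the number of connected components; the snapshot data and helper names are illustrative.

```python
import networkx as nx

def daily_network_metrics(daily_edges):
    """Compute average degree centrality and connected-component count
    for each daily snapshot of the (undirected) interaction network."""
    metrics = []
    for day, edges in daily_edges.items():
        g = nx.Graph()
        g.add_edges_from(edges)  # (user_a, user_b) interaction pairs
        centrality = nx.degree_centrality(g)
        metrics.append({
            "day": day,
            "avg_degree_centrality": sum(centrality.values()) / g.number_of_nodes(),
            "connected_components": nx.number_connected_components(g),
        })
    return metrics

# Hypothetical snapshots: interactions grow denser across days
snapshots = {
    "day1": [("a", "b"), ("c", "d")],
    "day2": [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")],
}
for row in daily_network_metrics(snapshots):
    print(row)
```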

Identification of Influential Users

The research identifies pivotal nodes in the hate speech network using a novel Moving PageRank (MPR) approach. These users significantly affect information diffusion, acting as network bridges and amplifying connectivity. Analysis of user profiles suggests a predominant presence of far-right and extremist ideologies among influential users, though the spectrum includes various political inclinations (Figure 4).

Figure 4: Number of followers (ρ = -0.429).
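
The paper's precise definition of Moving PageRank is not reproduced here. One plausible reading, assumed purely for illustration, is PageRank recomputed over a sliding window of recent interaction edges, so that each user's rank can be tracked as a trajectory over time:

```python
import networkx as nx

def moving_pagerank(edges_by_day, window=7):
    """Illustrative 'moving' PageRank: recompute PageRank on the directed
    graph built from the last `window` days of interactions. This
    sliding-window reading of MPR is an assumption, not the paper's
    published definition."""
    days = sorted(edges_by_day)
    ranks_over_time = {}
    for i, day in enumerate(days):
        g = nx.DiGraph()
        for d in days[max(0, i - window + 1): i + 1]:
            g.add_edges_from(edges_by_day[d])  # retweets/replies as directed edges
        if g.number_of_nodes() == 0:
            continue
        ranks_over_time[day] = nx.pagerank(g, alpha=0.85)
    return ranks_over_time
```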

Interestingly, early detection based on user metrics like follower count proved inadequate, underscoring the necessity of dynamic network analysis for accurate influencer identification. Regression models corroborated this, indicating that content and static metrics alone poorly predict influential user ranks.
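
For reference, a rank correlation like the ρ reported in Figure 4 can be computed as follows; the numbers are toy values, and the choice of Spearman's coefficient is an assumption about which statistic the figure reports.

```python
from scipy.stats import spearmanr

# Toy data: follower counts vs. hypothetical MPR-derived influence ranks
followers = [120, 45000, 800, 3200, 150000]
influence_rank = [3, 4, 1, 2, 5]

rho, p_value = spearmanr(followers, influence_rank)
print(f"rho = {rho:.3f}, p = {p_value:.3f}")
```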

Discussion

The relaxation of moderation policies has facilitated an increase in hate speech, spurred integration among previously separate hate communities, and, ironically, intensified rhetoric opposing the very principles of open discourse. The marked rise in offensive language reflects a shift back toward explicit hate speech under this laxer governance.

The increased politicization and targeting of liberals reflect significant shifts in dialogue dynamics. The merging of disparate hateful communities facilitated by key influencer nodes further demonstrates the necessity of balancing open discourse with robust moderation frameworks.

Conclusion

This examination reveals the critical need for platforms to meticulously balance freedom of expression with proactive moderation strategies. While uninhibited speech promotes diverse viewpoints, it can also lead to unchecked proliferation of harmful rhetoric. Platforms should consider mitigating these risks through counter-speech, community-driven moderation, and strategic influencer management. The insights from this study could guide platforms in striking a nuanced balance between fostering free expression and curbing online hate propagation.
