
The Peripatetic Hater: Predicting Movement Among Hate Subreddits (2405.17410v2)

Published 27 May 2024 in cs.SI, cs.CY, and cs.HC

Abstract: Many online hate groups exist to disparage others based on race, gender identity, sex, or other characteristics. The accessibility of these communities allows users to join multiple types of hate groups (e.g., a racist community and a misogynistic community), raising the question of whether users who join additional types of hate communities could be further radicalized compared to users who stay in one type of hate group. However, little is known about the dynamics of joining multiple types of hate groups, or about the effect of these groups on peripatetic users. We develop a new method to classify hate subreddits and the identities they disparage, then apply it to better understand how users come to join different types of hate subreddits. The hate classification technique uses human-validated deep learning models to extract the protected identities attacked, if any, across 168 subreddits. We find distinct clusters of subreddits targeting various identities, such as racist, xenophobic, and transphobic subreddits. We show that when users become active in their first hate subreddit, they have a high likelihood of becoming active in additional hate subreddits of a different category. We also find that users who join additional hate subreddits, especially those of a different category, develop a wider hate-group lexicon. These results lead us to train a deep learning model that, as we demonstrate, usefully predicts the hate categories in which users will become active based on the post text they write and reply to. The accuracy of this model may be partly driven by peripatetic users often adopting the language of hate subreddits they eventually join. Overall, these results highlight the unique risks associated with hate communities on a social media platform, as discussion of alternative targets of hate may lead users to target more protected identities.
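The abstract notes that peripatetic users often adopt the language of hate subreddits before joining them. As a minimal sketch of how that lexical signal could be operationalized (this is not the paper's actual pipeline, which uses deep learning models; all category names and tokens below are hypothetical placeholders), one can score a user's word counts against each category's lexicon with cosine similarity and rank categories by overlap:

```python
# Hypothetical sketch: rank hate categories by lexical overlap with a user's
# recent posts. Not the authors' method; a bag-of-words stand-in for their
# deep learning classifier.
from collections import Counter
import math


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def predict_next_category(user_posts: list[str],
                          lexicons: dict[str, Counter]) -> str:
    """Return the category whose lexicon best matches the user's posts."""
    user_vec = Counter(w for post in user_posts for w in post.lower().split())
    return max(lexicons, key=lambda c: cosine(user_vec, lexicons[c]))


# Toy per-category lexicons with placeholder tokens; in the paper these
# categories come from clustering 168 subreddits by the identities attacked.
lexicons = {
    "category_x": Counter({"alpha": 4, "beta": 2}),
    "category_y": Counter({"gamma": 5, "delta": 1}),
}
print(predict_next_category(["beta alpha alpha", "beta"], lexicons))
```

A design note: the cosine score makes the ranking insensitive to post volume, so prolific and occasional posters are compared on vocabulary distribution rather than raw counts.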

Authors (4)
  1. Daniel Hickey (4 papers)
  2. Daniel M. T. Fessler (3 papers)
  3. Kristina Lerman (197 papers)
  4. Keith Burghardt (45 papers)
