From Perils to Possibilities: Understanding How Human (and AI) Biases Affect Online Fora
Abstract: Social media platforms are online fora where users engage in discussions, share content, and build connections. This review explores the dynamics of social interactions, user-generated content, and biases within the context of social media analysis (surveying works that use tools from complex network analysis and natural language processing) through the lens of three key perspectives: online debates, online support, and human-AI interactions. On the one hand, we delineate the phenomenon of online debates, where polarization, misinformation, and echo chamber formation often proliferate, driven by algorithmic biases and extreme mechanisms of homophily. On the other hand, we explore the emergence of online support groups through users' self-disclosure and social support mechanisms. Online debates and support mechanisms thus present a duality of perils and possibilities within social media: the perils of segregated communities and polarized debates, and the possibilities of empathy narratives and self-help groups. This dichotomy extends to a third perspective: users' reliance on AI-generated content, such as that produced by large language models (LLMs), which can manifest both human biases hidden in training sets and non-human biases that emerge from their artificial neural architectures. By analyzing interdisciplinary approaches, we aim to deepen the understanding of the complex interplay between social interactions, user-generated content, and biases within social media ecosystems.