
Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making (2404.12558v1)

Published 19 Apr 2024 in cs.HC

Abstract: As LLMs advance to produce human-like arguments in some contexts, the number of settings applicable for human-AI collaboration broadens. Specifically, we focus on subjective decision-making, where a decision is contextual, open to interpretation, and based on one's beliefs and values. In such cases, having multiple arguments and perspectives might be particularly useful for the decision-maker. Using subtle sexism online as an understudied application of subjective decision-making, we suggest that LLM output could effectively provide diverse argumentation to enrich subjective human decision-making. To evaluate the applicability of this case, we conducted an interview study (N=20) where participants evaluated the perceived authorship, relevance, convincingness, and trustworthiness of human and AI-generated explanation-text, generated in response to instances of subtle sexism from the internet. In this workshop paper, we focus on one troubling trend in our results related to opinions and experiences displayed in LLM argumentation. We found that participants rated explanations that contained these characteristics as more convincing and trustworthy, particularly so when those opinions and experiences aligned with their own opinions and experiences. We describe our findings, discuss the troubling role that confirmation bias plays, and bring attention to the ethical challenges surrounding the AI generation of human-like experiences.

Authors (4)
  1. Sharon Ferguson
  2. Paula Akemi Aoyagui
  3. Young-Ho Kim
  4. Anastasia Kuzminykh