Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations (2405.16355v1)
Abstract: Many hyper-personalized AI systems profile people's characteristics (e.g., personality traits) to provide personalized recommendations. These systems are increasingly used to facilitate interactions among people, for example by recommending teammates. Despite improved accuracy, such systems are not immune to errors when inferring people's most personal traits, and these errors manifest as AI misrepresentations. However, the repercussions of such AI misrepresentations are unclear, especially for people's reactions to and perceptions of the AI. We present two studies examining how people react to and perceive AI after encountering personality misrepresentations in AI-facilitated team matching in a higher-education context. Through semi-structured interviews (n=20) and a survey experiment (n=198), we pinpoint how people's existing and newly acquired AI knowledge can shape their perceptions of, and reactions to, the AI after encountering its misrepresentations. Specifically, we identified three rationales that people adopted through knowledge acquired from AI (mis)representations: AI works like a machine, like a human, and/or like magic. These rationales are closely connected to people's reactions of over-trusting, rationalizing, and forgiving AI misrepresentations. Finally, we found that people's existing AI knowledge, i.e., AI literacy, moderated changes in their trust in AI after encountering AI misrepresentations, but not changes in their social perceptions of AI. We discuss the role of people's AI knowledge when facing AI fallibility, and implications for designing responsible mitigation and repair strategies.