Which Artificial Intelligences Do People Care About Most? A Conjoint Experiment on Moral Consideration (2403.09405v1)

Published 14 Mar 2024 in cs.HC

Abstract: Many studies have identified particular features of artificial intelligences (AI), such as their autonomy and emotion expression, that affect the extent to which they are treated as subjects of moral consideration. However, there has not yet been a comparison of the relative importance of features as is necessary to design and understand increasingly capable, multi-faceted AI systems. We conducted an online conjoint experiment in which 1,163 participants evaluated descriptions of AIs that varied on these features. All 11 features increased how morally wrong participants considered it to harm the AIs. The largest effects were from human-like physical bodies and prosociality (i.e., emotion expression, emotion recognition, cooperation, and moral judgment). For human-computer interaction designers, the importance of prosociality suggests that, because AIs are often seen as threatening, the highest levels of moral consideration may only be granted if the AI has positive intentions.


Summary

  • The paper uses a conjoint experiment to measure how 11 AI features affect the moral consideration people extend to AIs.
  • Human-like physical bodies and prosocial traits (emotion expression, emotion recognition, cooperation, and moral judgment) have the strongest effects on moral judgments.
  • The findings inform AI design by identifying which features foster ethical interaction and may reduce the risk of abuse.

Exploring the Features That Influence Human Moral Consideration of AI: Insights from a Conjoint Experiment

Introduction

The intersection of AI design and moral consideration is complex: research has shown that people grant AI varying levels of moral consideration depending on particular features, such as autonomy and emotion expression, yet a comparison of these features' relative importance has been lacking. A paper by Ladak et al. (2024) fills this gap, estimating how strongly different AI characteristics influence moral consideration.

Study Overview

Ladak et al. conducted an online conjoint experiment in which 1,163 participants evaluated descriptions of AIs that varied on 11 distinct features. The features, selected through a literature review and pretesting, range from autonomy to prosocial behaviors (emotion expression, emotion recognition, cooperation, and moral judgment) and human-like physical appearance. The paper's primary aim was to identify which features most strongly affect how morally wrong people judge it to be to harm an AI.
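
To make the design concrete, the sketch below shows how effects in a conjoint experiment of this kind are typically estimated: regress the moral-wrongness rating on dummy-coded feature indicators and cluster standard errors by respondent, yielding an average marginal component effect (AMCE) for each feature. This is an illustrative reconstruction, not the authors' code; the file name, column names, and feature subset are hypothetical.

    # Minimal AMCE-style sketch for a conjoint experiment (hypothetical names).
    # Assumes long-format data: one row per rated AI profile, a numeric
    # moral-wrongness rating, 0/1 feature indicators, and a respondent ID.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("conjoint_ratings.csv")  # hypothetical data file

    # Illustrative subset of the randomized features.
    features = ["human_like_body", "emotion_expression", "emotion_recognition",
                "cooperation", "moral_judgment", "autonomy"]

    # OLS of the rating on the feature indicators estimates each feature's
    # average effect on rated wrongness of harming the AI, marginalizing
    # over the other randomized features.
    model = smf.ols("wrongness ~ " + " + ".join(features), data=df)
    result = model.fit(cov_type="cluster",
                       cov_kwds={"groups": df["respondent_id"]})  # cluster by respondent
    print(result.params)      # estimated effects
    print(result.conf_int())  # confidence intervals

Ranking the estimated coefficients then corresponds to the paper's comparison of the features' relative importance.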

Key Findings

The findings indicate that all 11 features significantly increased moral consideration, with the largest effects from human-like physical bodies and prosocial features (emotion expression, emotion recognition, cooperation, and moral judgment). Features that signal the potential for positive, cooperative interaction with humans were particularly influential, suggesting that entities perceived as less threatening and more prosocial are granted greater moral concern.

Implications for AI Design and Future Research

The paper's outcomes have implications for both theory and practical AI design. The finding that prosociality and human-like physical features are the primary drivers of moral consideration suggests that AI systems designed with these attributes could foster higher-quality interactions and potentially reduce user abuse. However, such design choices should be made with caution, given the psychological distress and ethical dilemmas that could arise if users attribute unwarranted moral status to non-sentient AIs.

Limitations and Areas for Future Exploration

While the paper sheds light on human-AI moral dynamics, its focus on broadly defined features leaves room for deeper investigation into the nuances within each attribute. Future research could examine more specific variations of these features and their interactions to build a more granular understanding of their impact on moral consideration. Extending the research beyond the U.S. and incorporating behavior-based assessments could further clarify the global and practical implications of AI design choices for human moral perceptions.

Conclusion

The research by Ladak et al. offers a foundational step toward understanding the relationship between AI features and moral consideration. By highlighting the importance of prosociality and human-like physical appearance, the paper provides guidance for AI designers and paves the way for further research on designing AI systems for ethical interaction. The work also underlines the conditional nature of human moral consideration for AI, emphasizing the role of perceived intentions and threat in shaping these judgments.