
What Do People Think about Sentient AI? (2407.08867v2)

Published 11 Jul 2024 in cs.AI, cs.CY, cs.ET, and cs.HC

Abstract: With rapid advances in machine learning, many people in the field have been discussing the rise of digital minds and the possibility of artificial sentience. Future developments in AI capabilities and safety will depend on public opinion and human-AI interaction. To begin to fill this research gap, we present the first nationally representative survey data on the topic of sentient AI: initial results from the Artificial Intelligence, Morality, and Sentience (AIMS) survey, a preregistered and longitudinal study of U.S. public opinion that began in 2021. Across one wave of data collection in 2021 and two in 2023 (total N = 3,500), we found mind perception and moral concern for AI well-being in 2021 were higher than predicted and significantly increased in 2023: for example, 71% agree sentient AI deserve to be treated with respect, and 38% support legal rights. People have become more threatened by AI, and there is widespread opposition to new technologies: 63% support a ban on smarter-than-human AI, and 69% support a ban on sentient AI. Expected timelines are surprisingly short and shortening with a median forecast of sentient AI in only five years and artificial general intelligence in only two years. We argue that, whether or not AIs become sentient, the discussion itself may overhaul human-computer interaction and shape the future trajectory of AI technologies, including existential risks and opportunities.

Summary of "What Do People Think about Sentient AI?"

Introduction

The paper "What Do People Think about Sentient AI?" by Jacy Reese Anthis and colleagues explores public opinion on the topic of sentient AI. Utilizing data from the Artificial Intelligence, Morality, and Sentience (AIMS) survey, which is nationally representative and longitudinal, the paper provides insights into how sentient AI is perceived by the general public in the United States. The paper covers various dimensions such as mind perception, morality, policy preferences, and forecasting of sentient AI timelines.

Key Findings

Mind Perception

The survey assesses general mind perception through attributes such as analytical thinking, rationality, emotional experience, and feelings. Respondents generally perceive AIs as rational and capable of analytical thinking, but less so as experiencing emotions or having feelings, and these perceptions increased from 2021 to 2023.

For LLMs specifically, perceived mental faculties are lower than for AI in general. The attributes assessed include friendliness, situational awareness, human-safe goals, and others. The results suggest that people attribute mental faculties to LLMs cautiously, emphasizing cooperative behavior more than self-awareness or independent motivations.

Moral Status

Moral concern for AI is another central theme of the paper. Respondents express greater moral concern for sentient AIs than for non-sentient AIs: for instance, 71.1% agree that sentient AIs deserve to be treated with respect, significantly higher agreement than for AI in general. People are more ambivalent about granting legal rights to AIs, with 38% in support.

Threat perception is also prominent: a substantial majority believe that AIs could harm future generations, and this belief intensified between 2021 and 2023. The findings indicate that while people extend moral concern to sentient AIs, they are also wary of the potential threats posed by AI advancements.

Policy Support

The paper examines support for a range of policy proposals aimed at governing the development of sentient AI. Support is widespread for banning the development of sentient AI, regulating to slow down AI advancements, and implementing welfare standards to protect AIs.

Public opinion is particularly supportive of regulatory measures, with significant backing for slowing down AI development and banning technologies related to sentience. Notably, 69.5% support a ban on developing sentience in AIs and 63% support a ban on smarter-than-human AI, reflecting a cautious stance toward AI advancements.

Forecasting Sentient AI Timelines

The paper reports strikingly short expected timelines for the emergence of sentient AI. The median forecast places sentient AI only five years away, and forecasts for related capabilities are similarly short, with a median of just two years for artificial general intelligence (AGI) and short timelines for superintelligence as well.
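As a rough illustration only (this is not the paper's analysis code, and the numbers are made up), the sketch below shows why the median is the natural summary for timeline forecasts: a few very long answers would dominate a mean but barely move the median.

```python
import numpy as np

# Hypothetical timeline forecasts (years until sentient AI), NOT data from the paper.
# A few very long answers illustrate why the median, rather than the mean,
# is the usual summary statistic for skewed forecast distributions.
forecasts_years = np.array([1, 2, 3, 5, 5, 8, 10, 20, 100, 500])

print("mean forecast  :", forecasts_years.mean())      # 65.4, pulled out by the long tail
print("median forecast:", np.median(forecasts_years))  # 6.5, a robust central estimate
```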

Implications and Future Research

The multiple dimensions covered in the paper suggest broad implications for human-computer interaction (HCI) research and practical AI development:

  1. Range of User Reactions: Variations in public opinion across demographics indicate the need for AI systems to adapt to a diverse range of user interactions. Explainable AI (XAI) frameworks could help bridge the gap between user expectations and system capabilities.
  2. Amplifying and Complicating HCI Dynamics: Perception of AI as possessing mental faculties could amplify existing HCI dynamics while also introducing new complexities. Future HCI designs need to incorporate mechanisms that appropriately signal the capabilities and limitations of AI systems.
  3. Regulatory Landscape: The significant public support for regulatory measures reflects societal apprehension about rapid AI advancements. It is vital for policymakers and AI developers to consider these concerns seriously to build trust and ensure ethical AI deployment.
  4. Design Precautions: Designers need to avoid over- or underattributing social, mental, and moral characteristics to AI systems to prevent unrealistic expectations and misuse.

Conclusion

The paper "What Do People Think about Sentient AI?" contributes a significant empirical foundation to the discourse on human-AI interaction, particularly in the context of perceived sentience and moral status. As AI technology continues to evolve, understanding public opinion through rigorous, repeatable surveys like AIMS will be crucial for both theoretical research and practical applications. This paper underscores the importance of thoughtful AI design and governance in shaping the future trajectory of AI technologies, ensuring both opportunities and risks are adequately addressed.

Authors (4)
  1. Jacy Reese Anthis (11 papers)
  2. Janet V. T. Pauketat (3 papers)
  3. Ali Ladak (3 papers)
  4. Aikaterina Manoli (2 papers)