A theory of appropriateness with applications to generative artificial intelligence (2412.19010v1)

Published 26 Dec 2024 in cs.AI

Abstract: What is appropriateness? Humans navigate a multi-scale mosaic of interlocking notions of what is appropriate for different situations. We act one way with our friends, another with our family, and yet another in the office. Likewise for AI, appropriate behavior for a comedy-writing assistant is not the same as appropriate behavior for a customer-service representative. What determines which actions are appropriate in which contexts? And what causes these standards to change over time? Since all judgments of AI appropriateness are ultimately made by humans, we need to understand how appropriateness guides human decision making in order to properly evaluate AI decision making and improve it. This paper presents a theory of appropriateness: how it functions in human society, how it may be implemented in the brain, and what it means for responsible deployment of generative AI technology.

Summary

  • The paper proposes a comprehensive theory of appropriateness, detailing its context-dependence, arbitrariness, automaticity, dynamism, and maintenance through sanctioning.
  • It applies this theory to generative AI design, suggesting approaches like decentralized norm customization and learning from social sanctions to improve AI behavior.
  • Key challenges for implementing this theory in AI include achieving deep contextual understanding, enabling dynamic learning of norms, and ensuring cultural sensitivity without bias.

A Theory of Appropriateness with Applications to Generative Artificial Intelligence

The paper "A Theory of Appropriateness with Applications to Generative Artificial Intelligence" proposes a comprehensive framework for understanding appropriateness in human social behavior and applies this framework to the domain of AI, specifically generative AI systems. The authors, Joel Z. Leibo et al., from Google DeepMind and other institutions, delve into the complex interplay between social norms, context, and identity, and how these factors shape notions of appropriateness. They emphasize that generative AI systems must be designed not merely to align with human values in a broad sense but to incorporate a nuanced understanding of what constitutes appropriate behavior across different contexts and communities.

Key Components of the Theory

The paper delineates the concept of appropriateness through several core aspects:

  1. Context-Dependence: Appropriateness varies with the situation, social identity, and culture. This highlights the challenges for AI systems, which must understand and adapt to different contexts and identities dynamically.
  2. Arbitrariness: Many social norms are arbitrary and culturally specific. The same action might be appropriate in one culture but inappropriate in another. For AI, this suggests the need for cultural sensitivity and adaptability.
  3. Automaticity: Human behavior governed by appropriateness often operates automatically, without deliberate reasoning. For AI, this implies that systems need rapid, context-aware responses that mimic human automaticity.
  4. Dynamism: Social norms and appropriateness evolve rapidly. AI systems need mechanisms to learn and update their behavior continually to remain relevant and appropriate.
  5. Sanctioning: Appropriateness is maintained through social sanctions, that is, expressions of approval or disapproval. AI systems could learn appropriateness by observing and responding to sanctioning processes in human interactions; a toy sketch of this idea follows the list.
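
To make several of these aspects concrete, here is a minimal Python sketch. It is not taken from the paper: the class AppropriatenessModel and its methods are hypothetical names invented for illustration. It models appropriateness as a context-conditioned score maintained by sanction signals, showing context-dependence, automaticity, dynamism, and sanctioning in toy tabular form.

```python
from collections import defaultdict

class AppropriatenessModel:
    """Toy model: appropriateness as a context-conditioned score,
    maintained by social sanctions (approval/disapproval signals).
    All names here are illustrative, not from the paper."""

    def __init__(self, learning_rate: float = 0.1):
        self.lr = learning_rate
        # score[(context, action)] -> estimated appropriateness in [-1, 1]
        self.score = defaultdict(float)

    def is_appropriate(self, context: str, action: str) -> bool:
        # Fast, reflexive lookup: mirrors the "automaticity" aspect.
        return self.score[(context, action)] >= 0.0

    def observe_sanction(self, context: str, action: str, sanction: float) -> None:
        # sanction > 0 is approval, < 0 is disapproval ("sanctioning").
        # Incremental updates let the estimate track drifting norms
        # ("dynamism") instead of freezing them at training time.
        key = (context, action)
        self.score[key] += self.lr * (sanction - self.score[key])

model = AppropriatenessModel()
model.observe_sanction("office", "tell_joke", -1.0)   # disapproval at work
model.observe_sanction("friends", "tell_joke", +1.0)  # approval among friends
print(model.is_appropriate("office", "tell_joke"))    # False (context-dependence)
print(model.is_appropriate("friends", "tell_joke"))   # True
```

The same action receives different verdicts in different contexts, and the verdicts shift as new sanctions arrive, which is the combination of properties the theory emphasizes.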

Implications for AI Design

The theory provides a framework for developing AI systems that integrate an understanding of human social norms and appropriateness into their operation. This involves several innovative approaches:

  • Decentralized Norm Customization: Encouraging a decentralized ecosystem where different communities can tailor AI systems to reflect their specific norms, rather than relying on a one-size-fits-all approach. This respects cultural and contextual diversity; a rough sketch of per-community customization appears after this list.
  • Sanction Sensitivity: Implementing algorithms that allow AI to learn from human sanctions, much like humans do, could enable systems to adapt more effectively to community norms.
  • Polycentric Governance: Emphasizing a governance model where various stakeholders can influence AI system behaviors, ensuring that generative AI respects the diverse expectations of different communities.
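
As a rough illustration of what decentralized norm customization might look like in practice, consider the sketch below. It rests on invented assumptions: the NormProfile structure, its field names, and the community names are hypothetical, not proposed by the authors. The idea is that each community supplies its own appropriateness profile, which a shared base system consults at inference time.

```python
from dataclasses import dataclass, field

@dataclass
class NormProfile:
    """Per-community norms; this structure is hypothetical, for illustration only."""
    community: str
    disallowed_topics: set[str] = field(default_factory=set)
    required_tone: str = "neutral"

def select_profile(profiles: dict[str, NormProfile], community: str) -> NormProfile:
    # Fall back to a conservative default when a community has no profile.
    return profiles.get(community, NormProfile(community="default",
                                               required_tone="formal"))

profiles = {
    "pediatric_support_group": NormProfile(
        community="pediatric_support_group",
        disallowed_topics={"dark_humor"},
        required_tone="gentle",
    ),
    "comedy_writers_room": NormProfile(
        community="comedy_writers_room",
        required_tone="playful",
    ),
}

# The same base system behaves differently depending on which profile governs it.
print(select_profile(profiles, "comedy_writers_room").required_tone)  # playful
print(select_profile(profiles, "unknown_forum").required_tone)        # formal
```

Keeping norms in community-owned profiles, rather than baked into a single global policy, is one way to realize the decentralized ecosystem the authors describe.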

Challenges and Future Directions

The paper acknowledges the challenges in operationalizing such a nuanced theory of appropriateness in AI systems. Key challenges include:

  • Contextual Understanding: AI systems currently lack deep contextual awareness, which is crucial for assessing appropriateness accurately.
  • Dynamic Learning: Developing AI that can learn and adjust its behavior continually in response to new types of social feedback is an ongoing research challenge.
  • Cultural Sensitivity: Ensuring AI systems do not inadvertently reinforce stereotypes or biases due to their training data or operational logic requires careful oversight.

The authors suggest that future research should focus on improving AI's capability to interpret complex human social cues and to incorporate multi-faceted feedback from diverse stakeholder groups. Furthermore, fostering a more integrated approach to AI governance, which includes input from a wide range of cultural perspectives, will be crucial for developing systems that behave appropriately in varied social landscapes.

In summary, the paper provides a detailed theoretical framework and practical implications for embedding a rich understanding of human appropriateness in generative AI systems, pointing towards a nuanced approach to AI design that respects the complexity of human social dynamics.
