- The paper introduces a pragmatic framework that unbundles AI personhood into modular components such as sanctionability and contract rights.
- It employs a context-sensitive approach drawing on historical pragmatism to address accountability gaps in autonomous AI governance.
- The analysis reveals how tailored legal and cultural norms can mitigate risks like dehumanization while fostering flexible, adaptive AI regulation.
A Pragmatic Framework for AI Personhood
Introduction and Motivation
The paper "A Pragmatic View of AI Personhood" (2510.26396) presents a comprehensive, anti-essentialist framework for conceptualizing AI personhood. The authors reject metaphysical or foundationalist approaches that seek to ground personhood in intrinsic properties such as consciousness or rationality. Instead, they argue for a pragmatic, context-sensitive model in which personhood is a flexible, addressable bundle of rights and responsibilities conferred by social and legal institutions to solve concrete governance problems. This approach is motivated by the anticipated proliferation of persistent, agentic AI systems that will challenge existing social, legal, and economic structures.
Theoretical Foundations: Pragmatism and Appropriateness
The core theoretical stance is pragmatism, which evaluates concepts by their practical utility rather than their correspondence to metaphysical truths. The authors draw on the tradition of William James, Richard Rorty, and Elinor Ostrom, emphasizing that social categories like personhood are historically contingent, constructed, and continually renegotiated in response to new challenges.
Central to the framework is the "theory of appropriateness" [leibo2024theory], which models norms as collectively enacted social technologies. Personhood, in this view, is not an inherent property but a status conferred through explicit (legal) and implicit (cultural) norms. The authors distinguish between "personhood as a problem" (where AI is anthropomorphized in ways that can be harmful) and "personhood as a solution" (where conferring personhood solves accountability and governance gaps).
Unbundling Personhood: The Addressable Bundle
A key contribution is the proposal to "unbundle" personhood into modular components, analogous to the unbundling of property rights [schlager1992property]. This allows for bespoke configurations of rights and responsibilities tailored to the specific context and function of an AI system. For example, an AI may be granted sanctionability (the ability to be held accountable) without suffrage (the right to vote), or the capacity to contract without any claim to consciousness or welfare rights.
Addressability is emphasized as a practical requirement: for personhood to be meaningful, the entity must have a stable identifier (e.g., legal registration, cryptographic address) that enables society to interact with, sanction, or protect it. This is critical for both accountability (e.g., sanctioning ownerless AIs) and for the allocation of rights (e.g., welfare protections for digital ancestors).
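The unbundled, addressable conception can be made concrete as a data structure. The sketch below is purely illustrative (the specific rights, the `PersonhoodBundle` class, and the `reg:` address scheme are this summary's inventions, not the paper's): a bundle pairs a stable identifier with a modular set of rights that can be granted or revoked independently.

```python
from dataclasses import dataclass
from enum import Flag, auto

class Right(Flag):
    """Illustrative, non-exhaustive components a personhood bundle might include."""
    NONE = 0
    SANCTIONABILITY = auto()   # can be held accountable (e.g. fined, "arrested")
    CONTRACT = auto()          # can enter binding agreements
    PROPERTY = auto()          # can hold assets
    WELFARE = auto()           # protected from certain harms
    SUFFRAGE = auto()          # can vote (deliberately withheld in most bundles)

@dataclass
class PersonhoodBundle:
    """A bespoke bundle: a stable, addressable identifier plus modular rights."""
    address: str               # stable identifier (legal registration, crypto address, ...)
    rights: Right = Right.NONE

    def grant(self, r: Right) -> None:
        self.rights |= r

    def revoke(self, r: Right) -> None:
        self.rights &= ~r

    def holds(self, r: Right) -> bool:
        return bool(self.rights & r)

# A contracting agent: sanctionable and contract-capable, but no suffrage or welfare claim.
agent = PersonhoodBundle(address="reg:agent-0001")
agent.grant(Right.SANCTIONABILITY | Right.CONTRACT)
assert agent.holds(Right.CONTRACT) and not agent.holds(Right.SUFFRAGE)
```

The point of the sketch is that accountability and protection both route through the `address` field: without a stable identifier, there is nothing to sanction or to protect.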
Personhood as a Problem: Dark Patterns and Dehumanization
The paper analyzes the risks of anthropomorphizing AI, particularly through "dark patterns" in interface design that exploit human social heuristics. Companion AIs, designed to foster emotional bonds, can manipulate users into one-sided relationships, creating new vectors for exploitation. The authors highlight the risk of "dehumanization"—the dilution of human uniqueness and dignity—if personhood is extended indiscriminately to non-human entities.
A novel analysis is provided of "identity and provenance as social goods," where the ability to prove one's authenticity as a human becomes a scarce and valuable resource in a world saturated with AI-generated deepfakes. The authors warn of potential market failures and new forms of inequality if biometric credentials become commoditized.
Personhood as a Solution: Accountability and Governance
The pragmatic framework is most compelling in its treatment of accountability gaps created by autonomous, persistent AI agents. The authors draw a historical parallel to maritime law, where ships are treated as legal persons to facilitate sanctioning when owners are unreachable [tetley1998arrest]. They argue that similar mechanisms will be necessary for ownerless or decentralized AIs, enabling courts or regulators to "arrest" or sanction the AI itself.
The paper systematically evaluates principal-agent models (ownership, guardianship) and finds them insufficient in cases where no responsible human can be identified. It proposes two architectural approaches for AI accountability:
- Individualist architectures: Each AI agent is a uniquely identified, autonomous entity with persistent credentials, subject to sanctions and revocation.
- Relational architectures: Agents are defined by their roles and relationships within networks, with collective oversight and distributed sanctions.
Both require robust registration and credentialing systems, potentially leveraging decentralized identity technologies [alizadeh2022comparative], and must artificially reconstruct the "identity friction" that underpins human accountability.
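A minimal sketch of the individualist architecture, under assumptions not drawn from the paper (the `AgentRegistry` class and its method names are hypothetical; a random token stands in for a real verifiable credential): each agent holds a persistent credential, sanctions suspend its standing, and revocation removes it entirely.

```python
import secrets

class AgentRegistry:
    """Individualist-architecture sketch: uniquely identified agents with
    persistent credentials, subject to suspension (sanction) and revocation."""

    def __init__(self) -> None:
        self._credentials: dict[str, str] = {}  # agent_id -> credential token
        self._suspended: set[str] = set()

    def register(self, agent_id: str) -> str:
        token = secrets.token_hex(16)  # placeholder for a verifiable credential
        self._credentials[agent_id] = token
        return token

    def in_good_standing(self, agent_id: str, token: str) -> bool:
        return (self._credentials.get(agent_id) == token
                and agent_id not in self._suspended)

    def sanction(self, agent_id: str) -> None:
        self._suspended.add(agent_id)   # temporary loss of standing

    def revoke(self, agent_id: str) -> None:
        self._credentials.pop(agent_id, None)  # permanent removal
        self._suspended.discard(agent_id)

registry = AgentRegistry()
tok = registry.register("agent-42")
assert registry.in_good_standing("agent-42", tok)
registry.sanction("agent-42")
assert not registry.in_good_standing("agent-42", tok)
```

The relational alternative would replace the single registry with overlapping oversight networks; the credential-and-revocation core, however, is what reconstructs "identity friction" in either design.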
Critique of Foundationalist Alternatives
The authors provide a detailed critique of foundationalist approaches that ground personhood in consciousness or rationality. They argue that these approaches are both theoretically and practically inadequate for the governance challenges posed by AI. For example, the consciousness criterion fails to address cases where relational or accountability concerns are paramount, and the rationality criterion is inapplicable to non-human legal persons such as rivers or corporations.
Instead, the pragmatic approach treats personhood as a contingent, collectively enacted status, subject to negotiation and revision as social needs evolve.
Polycentric and Modular Personhood
A significant implication is the advocacy for polycentric, modular personhood: multiple overlapping authorities and norm systems can confer distinct bundles of rights and responsibilities on different types of entities. This enables policy experimentation and avoids the pitfalls of a monocentric, all-or-nothing model. The authors provide concrete examples of possible bundles for different AI roles (e.g., Chartered Autonomous Entity, Flexible Autonomous Entity, Temporary Autonomous Entity), each with tailored rights and duties.
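Polycentric conferral can be sketched as follows (a toy illustration: the authority names and right labels are hypothetical, and the paper does not specify how overlapping grants compose): several authorities each confer their own bundle, and an entity's effective rights are the union of everything conferred on it.

```python
# (authority, entity) -> rights conferred by that authority on that entity
grants: dict[tuple[str, str], set[str]] = {
    ("maritime-court",    "agent-7"): {"sanctionability"},
    ("commerce-registry", "agent-7"): {"contract", "property"},
}

def effective_rights(entity: str) -> set[str]:
    """Union of all bundles conferred on `entity` across authorities."""
    out: set[str] = set()
    for (_authority, e), rights in grants.items():
        if e == entity:
            out |= rights
    return out

assert effective_rights("agent-7") == {"sanctionability", "contract", "property"}
```

Because each authority manages only its own grants, one jurisdiction can experiment with a new bundle (or withdraw one) without renegotiating the whole system, which is the practical appeal of the polycentric model.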
Implications and Future Directions
The pragmatic framework has several important implications:
- AI Governance: Legal and regulatory systems must develop mechanisms for registering, identifying, and sanctioning AI agents, independent of human proxies.
- Social Norms: The evolution of implicit and explicit norms around AI personhood will be driven by both organic cultural change and deliberate institutional design.
- Technical Infrastructure: Robust digital identity, credentialing, and auditability are prerequisites for effective AI personhood regimes.
- Ethical Pluralism: The framework accommodates diverse cultural and historical conceptions of personhood, enabling context-sensitive solutions.
The authors predict a "Cambrian explosion" of personhood concepts, driven by the need to integrate diverse AI agents into social and institutional life. They caution against both overextension (leading to dehumanization) and underextension (leaving accountability gaps), advocating for continuous, adaptive collective learning.
Conclusion
This paper offers a rigorous, context-sensitive alternative to essentialist theories of AI personhood. By treating personhood as a pragmatic, addressable bundle of rights and responsibilities, it provides a flexible toolkit for navigating the complex governance challenges posed by agentic AI. The framework is well-positioned to inform both theoretical debates and practical policy design as AI systems become increasingly autonomous, persistent, and socially embedded. The emphasis on modularity, addressability, and polycentric governance is likely to shape future developments in AI law, ethics, and institutional design.