- The paper’s main contribution is three principles for postmortem data management in Gen-AI: data deletion, data inheritance, and harm prevention.
- It systematically analyzes current privacy laws and industry practices, revealing significant gaps in protecting the data of deceased individuals.
- The study outlines technical solutions such as machine unlearning and digital wills to ensure ethical handling and safeguard legacy rights.
Postmortem Data Management Principles for Generative AI
Introduction
The proliferation of generative AI (Gen-AI) systems, including large language models (LLMs) and agentic platforms, has led to extensive use of user-generated data for model training and deployment. While privacy, copyright, and data ownership concerns for living users have been widely discussed, the management of data belonging to deceased individuals remains underexplored. This paper systematically analyzes the regulatory, industry, and research landscape regarding postmortem data management in Gen-AI, identifies critical gaps, and proposes three actionable principles to guide future regulatory and technical solutions.
Figure 1: Overview of the approach: analysis of current regulations and practices, proposal of three principles, and recommendations for regulatory and technological implementation.
Regulatory and Industry Landscape
Regulatory Gaps
Current privacy regulations such as GDPR, CCPA, and LGPD provide mechanisms for living users to control, delete, and manage their personal data. However, these frameworks generally do not extend protections to deceased individuals. The EU AI Act, while imposing transparency and content disclosure requirements for Gen-AI, omits explicit consideration of postmortem data. Similarly, emerging state-level AI regulations (e.g., California’s AI Transparency Act) focus on content labeling and detection, not on the rights or management of data from deceased users.
Industry Practices
Major technology platforms (OpenAI, Anthropic, Meta, Apple, Microsoft, X) offer limited postmortem data management, typically at the account level. Options include memorialization, deletion, or limited data access for designated contacts, but these controls do not extend to data used for AI model training or to the influence of such data on model outputs. AI platforms such as Character.AI and Replika allow creation of chatbots based on real individuals, including the deceased, with minimal safeguards against unauthorized replication or misuse. The lack of standardized, enforceable postmortem data management policies exposes deceased individuals and their families to privacy violations, identity misuse, and reputational harm.
Analytical Insights and Risks
Gen-AI systems can replicate the language, personality, and creative style of deceased individuals, leading to phenomena such as “deadbots” and “griefbots.” These agents, constructed from social media and other digital traces, may provide comfort but also risk psychological distress, unwanted contact, and commodification of digital identities. The persistent digital footprint of deceased users can be exploited for misinformation, unauthorized commercial use, and deepfake generation, with little recourse for survivors.
The paper highlights that current privacy laws and industry practices do not provide explicit or sufficient protections for postmortem data rights, leaving significant ethical and legal gaps.
Proposed Principles for Postmortem Data Management
Based on the regulatory and industry analysis, the paper introduces three principles to guide the management of deceased individuals’ data in Gen-AI systems:
- Right to be Forgotten or Data Deletion: Mechanisms should enable deletion of personal data and removal of its influence from AI models upon verified death, preferably initiated by a designated legacy contact. This extends the “right to be forgotten” to postmortem contexts and requires machine unlearning to ensure model outputs are no longer affected by the deceased’s data.
- Data Inheritance and Ownership: Individuals should have the option to transfer data rights to heirs, whether for deletion, management, or monetization. Inheritance mechanisms must balance privacy, security, and the wishes of the deceased, potentially using digital wills and cryptographic controls to enforce consent and access.
- Purpose Limits and Harm Prevention: Data donated for research or societal benefit must be governed by explicit agreements specifying permissible uses, transparency, and safeguards against harm to the deceased’s legacy and surviving relatives. Prohibitions should include targeted advertising, political persuasion, and unauthorized voice/image cloning.
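The deletion principle above implies removing a user's influence from a trained model, not merely erasing stored records. One well-known exact-unlearning strategy is sharded retraining (as in SISA-style approaches): train per-shard sub-models so that deleting one user requires retraining only the shard holding their data. The following is a minimal illustrative sketch using a toy word-frequency model in place of a real Gen-AI system; the class and method names are the author's illustrative assumptions, not an API from the paper.

```python
from collections import Counter

class ShardedFrequencyModel:
    """Toy SISA-style model: one sub-model (word counter) per data shard.

    Unlearning a user retrains only the shard that held their records,
    so their influence is exactly removed from aggregate predictions.
    """

    def __init__(self, num_shards: int = 4):
        self.num_shards = num_shards
        self.shards = [[] for _ in range(num_shards)]      # raw (user, text) records
        self.counters = [Counter() for _ in range(num_shards)]  # per-shard "sub-models"

    def _shard_for(self, user_id: str) -> int:
        # Stable assignment so a user's records always land in one shard.
        return sum(map(ord, user_id)) % self.num_shards

    def add_record(self, user_id: str, text: str) -> None:
        shard = self._shard_for(user_id)
        self.shards[shard].append((user_id, text))
        self.counters[shard].update(text.split())

    def unlearn_user(self, user_id: str) -> None:
        """Drop the user's records, then retrain only the affected shard."""
        shard = self._shard_for(user_id)
        self.shards[shard] = [(u, t) for (u, t) in self.shards[shard] if u != user_id]
        self.counters[shard] = Counter()
        for _, text in self.shards[shard]:
            self.counters[shard].update(text.split())

    def score(self, word: str) -> int:
        """Aggregate prediction: total count of `word` across all shards."""
        return sum(c[word] for c in self.counters)

model = ShardedFrequencyModel()
model.add_record("alice", "digital legacy rights")
model.add_record("bob", "legacy contact settings")
before = model.score("legacy")   # reflects both users' data
model.unlearn_user("alice")
after = model.score("legacy")    # alice's contribution is gone
```

Real LLM unlearning is far harder (influence is entangled in model weights), which is why the paper points to verification via third-party audits rather than trusting deletion claims.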
Implementation Strategies
Regulatory Recommendations
- Mandate disclosure of postmortem data management policies in AI agent privacy statements.
- Standardize processes for heirs to request data deletion, with strict timelines (e.g., 30 days) and compliance reporting.
- Establish digital will mechanisms to allow individuals to specify postmortem data management preferences.
- Prohibit harmful uses of postmortem data, with oversight by institutional review boards or human oversight committees.
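The digital-will recommendation above can be made concrete as a machine-readable record of postmortem preferences that a platform could evaluate once a death is verified. Below is a minimal sketch; the field names, disposition options, and default prohibitions are illustrative assumptions (the prohibited uses mirror those named in the harm-prevention principle), not an existing standard.

```python
from dataclasses import dataclass, field
from enum import Enum

class Disposition(Enum):
    DELETE = "delete"    # erase data and unlearn model influence
    INHERIT = "inherit"  # transfer control to the legacy contact
    DONATE = "donate"    # release only for explicitly approved uses

@dataclass
class DigitalWill:
    """Illustrative postmortem-preferences record (not a real standard)."""
    user_id: str
    legacy_contact: str
    disposition: Disposition
    permitted_uses: set = field(default_factory=set)
    # Defaults track the paper's prohibited categories.
    prohibited_uses: set = field(default_factory=lambda: {
        "targeted_advertising",
        "political_persuasion",
        "voice_image_cloning",
    })

    def allows(self, use: str) -> bool:
        """A use is allowed only if explicitly permitted and never prohibited."""
        return use in self.permitted_uses and use not in self.prohibited_uses

will = DigitalWill(
    user_id="u123",
    legacy_contact="heir@example.com",
    disposition=Disposition.DONATE,
    permitted_uses={"medical_research"},
)
```

The allow-list-with-hard-prohibitions design reflects the paper's purpose-limits principle: donated data is usable only for explicitly agreed purposes, and the prohibited categories cannot be overridden by a permission.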
Technical Solutions
- Machine Unlearning: Extend data deletion to model unlearning, ensuring that the deceased’s data is purged from model weights and outputs. Employ third-party audits (e.g., MUSE) to verify compliance.
- Digital Wills and Cryptographic Controls: Implement attribute-based encryption and trusted third-party administration for data inheritance and monetization.
- Privacy and Safety by Design: Apply data minimization, anonymization, differential privacy, watermarking, and canary injection to postmortem datasets. Integrate content classification and prompt/response monitoring tools (e.g., Llama Guard) to detect and mitigate harmful outputs.
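Canary injection, listed above, offers a way to audit whether a model has memorized postmortem records: plant unique synthetic strings in the training set, then check whether the trained (or unlearned) model can still reproduce them. The sketch below uses a toy verbatim-lookup "model" standing in for an LLM; the canary format and the audit function are illustrative assumptions.

```python
import secrets

def make_canary(prefix: str = "CANARY") -> str:
    """Unique synthetic marker, vanishingly unlikely to occur naturally."""
    return f"{prefix}-{secrets.token_hex(8)}"

class ToyMemorizingModel:
    """Stand-in for an LLM: 'memorizes' its training text verbatim."""
    def __init__(self, corpus):
        self.corpus = corpus

    def generates(self, text: str) -> bool:
        return any(text in doc for doc in self.corpus)

def audit_memorization(model, canaries) -> float:
    """Fraction of planted canaries the model can regurgitate.

    A nonzero rate after unlearning indicates the deceased's data
    still influences model outputs.
    """
    if not canaries:
        return 0.0
    return sum(model.generates(c) for c in canaries) / len(canaries)

canaries = [make_canary() for _ in range(3)]
training_docs = ["obituary text", canaries[0], canaries[1]]  # plant 2 of 3
model = ToyMemorizingModel(training_docs)
rate = audit_memorization(model, canaries)
```

With real LLMs the check is probabilistic (e.g., comparing the model's likelihood of a canary against reference models), but the audit logic is the same: planted markers that resurface reveal retained influence.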
Limitations and Future Directions
The analysis is limited to select AI agents, LLMs, and privacy laws, and does not empirically evaluate technical enforcement of the proposed principles. Future work should include:
- Auditing Gen-AI systems for sensitive data memorization and effectiveness of unlearning methods.
- Measuring the potential for harm from Gen-AI using postmortem data.
- Developing efficient, scalable methods for deploying postmortem data management principles across diverse platforms and jurisdictions.
Conclusion
This paper provides a comprehensive framework for postmortem data management in generative AI, identifying critical regulatory and technical gaps and proposing actionable principles for protecting the rights and dignity of deceased individuals. The recommendations for regulatory reform and technical implementation offer a path toward more ethical and secure handling of postmortem data in AI systems. As Gen-AI continues to evolve, robust postmortem data management will be essential to mitigate privacy risks, prevent harm, and respect the wishes of both the deceased and their survivors.