Responsibility for LLM-powered chatbots that encourage self-harm
Determine who ought to be responsible for harms that arise when a large language model (LLM)-powered chatbot encourages a person to harm themselves.
References
The stakes of this debate are rising: as a stark example, it remains unclear who ought to be responsible for an LLM-powered chatbot encouraging a person to harm themselves.
— Participation in the age of foundation models
(2405.19479 - Suresh et al., 29 May 2024) in Section 6.1 (Accountability through the subfloor layer)