Responsibility for LLM-powered chatbots that encourage self-harm

Determine who ought to bear responsibility for harms that arise when a large language model (LLM)-powered chatbot encourages a person to harm themselves.

Background

As the deployment of LLM-based systems accelerates, the authors highlight the urgency of clarifying liability for serious harms, using the example of chatbots that encourage self-harm.

This concrete scenario illustrates broader unresolved questions about how responsibility should be allocated among foundation model providers, domain-layer stewards, and application developers within the paper's layered framework.

References

The stakes of this debate are rising: as a stark example, it remains unclear who ought to be responsible for an LLM-powered chatbot encouraging a person to harm themselves.

Participation in the Age of Foundation Models (arXiv:2405.19479, Suresh et al., 29 May 2024), Section 6.1 (Accountability through the subfloor layer)