The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial (2207.00868v1)
Published 2 Jul 2022 in cs.AI, cs.CL, cs.CY, and cs.LG
Abstract: The value-alignment problem for AI asks how we can ensure that the 'values' (i.e., objective functions) of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication (natural language) is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems, or, more loftily, to design robustly beneficial or ethical artificial agents.