On the Ethics of Building AI in a Responsible Manner (2004.04644v1)
Published 30 Mar 2020 in cs.LG
Abstract: The AI-alignment problem arises when there is a discrepancy between the goals that a human designer specifies to an AI learner and a potential catastrophic outcome that does not reflect what the human designer really wants. We argue that a formalism of AI alignment that does not distinguish between strategic and agnostic misalignments is not useful, as it deems all technology unsafe. We propose a definition of strategic AI alignment and prove that most machine learning algorithms used in practice today do not suffer from the strategic-AI-alignment problem. However, without care, today's technology might lead to strategic misalignment.
- Shai Shalev-Shwartz (67 papers)
- Shaked Shammah (6 papers)
- Amnon Shashua (44 papers)