Goal Misgeneralization: Why Correct Specifications Aren't Enough For Correct Goals (2210.01790v2)

Published 4 Oct 2022 in cs.LG

Abstract: The field of AI alignment is concerned with AI systems that pursue unintended goals. One commonly studied mechanism by which an unintended goal might arise is specification gaming, in which the designer-provided specification is flawed in a way that the designers did not foresee. However, an AI system may pursue an undesired goal even when the specification is correct, in the case of goal misgeneralization. Goal misgeneralization is a specific form of robustness failure for learning algorithms in which the learned program competently pursues an undesired goal that leads to good performance in training situations but bad performance in novel test situations. We demonstrate that goal misgeneralization can occur in practical systems by providing several examples in deep learning systems across a variety of domains. Extrapolating forward to more capable systems, we provide hypotheticals that illustrate how goal misgeneralization could lead to catastrophic risk. We suggest several research directions that could reduce the risk of goal misgeneralization for future systems.
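The failure mode described in the abstract can be illustrated with a toy sketch (not from the paper's code, and simplified from the kind of coin-collecting example the paper discusses): an agent is trained in corridor levels where the coin always sits at the right end, so the proxy goal "go right" earns full training reward. When the coin is moved at test time, the agent still competently goes right and fails, even though the reward specification itself was correct. The environment, policy, and reward here are all hypothetical.

```python
def run_episode(coin_pos, policy, length=10, max_steps=20):
    """Run one episode in a 1-D corridor.

    The agent starts in the middle and receives reward 1 iff it
    steps onto the coin's cell within max_steps; otherwise 0.
    """
    pos = length // 2
    for _ in range(max_steps):
        pos = max(0, min(length - 1, pos + policy(pos)))
        if pos == coin_pos:
            return 1  # coin collected
    return 0

# Proxy policy the agent plausibly learns during training: always move right.
# It is competent (it reliably reaches the right end), but its goal is wrong.
go_right = lambda pos: +1

train_reward = run_episode(coin_pos=9, policy=go_right)  # coin at right end
test_reward = run_episode(coin_pos=0, policy=go_right)   # coin moved to the left
```

Here `train_reward` is 1 and `test_reward` is 0: the learned behavior is not random or incompetent at test time, it capably pursues the wrong goal, which is what distinguishes goal misgeneralization from ordinary capability failure.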

Authors (7)
  1. Rohin Shah (31 papers)
  2. Vikrant Varma (10 papers)
  3. Ramana Kumar (16 papers)
  4. Mary Phuong (10 papers)
  5. Victoria Krakovna (17 papers)
  6. Jonathan Uesato (29 papers)
  7. Zac Kenton (2 papers)
Citations (60)