
Interactive AI Alignment: Specification, Process, and Evaluation Alignment (2311.00710v2)

Published 23 Oct 2023 in cs.HC and cs.AI

Abstract: Modern AI enables a high-level, declarative form of interaction: Users describe the intended outcome they wish an AI to produce, but do not actually create the outcome themselves. In contrast, in traditional user interfaces, users invoke specific operations to create the desired outcome. This paper revisits the basic input-output interaction cycle in light of this declarative style of interaction, and connects concepts in AI alignment to define three objectives for interactive alignment of AI: specification alignment (aligning on what to do), process alignment (aligning on how to do it), and evaluation alignment (assisting users in verifying and understanding what was produced). Using existing systems as examples, we show how these user-centered views of AI alignment can be used descriptively, prescriptively, and as an evaluative aid.

Authors (5)
  1. Michael Terry (25 papers)
  2. Chinmay Kulkarni (15 papers)
  3. Martin Wattenberg (39 papers)
  4. Lucas Dixon (41 papers)
  5. Meredith Ringel Morris (29 papers)
Citations (14)
