Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs (2311.09469v1)

Published 16 Nov 2023 in cs.CL

Abstract: Resolving ambiguities through interaction is a hallmark of natural language, and modeling this behavior is a core challenge in crafting AI assistants. In this work, we study such behavior in LMs by proposing a task-agnostic framework for resolving ambiguity by asking users clarifying questions. Our framework breaks down this objective into three subtasks: (1) determining when clarification is needed, (2) determining what clarifying question to ask, and (3) responding accurately with the new information gathered through clarification. We evaluate systems across three NLP applications: question answering, machine translation and natural language inference. For the first subtask, we present a novel uncertainty estimation approach, intent-sim, that determines the utility of querying for clarification by estimating the entropy over user intents. Our method consistently outperforms existing uncertainty estimation approaches at identifying predictions that will benefit from clarification. When only allowed to ask for clarification on 10% of examples, our system is able to double the performance gains over randomly selecting examples to clarify. Furthermore, we find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs. Together, our work lays a foundation for studying clarifying interactions with LMs.

The paper "Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs" explores the challenge of handling ambiguity in natural language via interactions with LLMs (LMs). The authors propose a framework designed to address this challenge by enabling LMs to ask clarifying questions, thereby improving interaction quality in applications like question answering, machine translation, and natural language inference.

Their framework consists of three key subtasks:

  1. Determining When Clarification is Needed: The authors introduce a novel uncertainty estimation technique called intent-sim, which decides whether an input requires clarification by estimating the entropy over user intents, thereby predicting which interactions would benefit most from additional information. This approach outperforms existing uncertainty estimation methods at identifying when clarification should be sought (a minimal sketch of the idea follows this list).
  2. Determining What Clarifying Question to Ask: The framework then formulates a clarifying question targeted at the ambiguity identified in the first subtask. The paper devotes less detail to this step, but the aim is to craft questions that elicit the needed information as efficiently as possible.
  3. Responding Accurately with New Information: After obtaining clarifications, the system processes this information to improve its primary task performance, ensuring responses incorporate the newly gathered insights.
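To make the first subtask concrete, here is a minimal sketch of an intent-entropy estimate in the spirit of intent-sim: sample several candidate user intents from an LM for the same ambiguous input, group near-duplicate intents, and use the entropy of the resulting distribution as the clarification signal. The sampling step, the `exact_match` similarity check, and the greedy clustering rule below are illustrative assumptions, not the authors' exact procedure.

```python
import math

def intent_entropy(sampled_intents, are_similar):
    """Greedily cluster sampled intent strings by pairwise similarity,
    then return the entropy (in nats) of the cluster distribution."""
    clusters = []  # each cluster is a list of intent strings
    for intent in sampled_intents:
        for cluster in clusters:
            if are_similar(intent, cluster[0]):
                cluster.append(intent)
                break
        else:
            clusters.append([intent])
    total = len(sampled_intents)
    probs = [len(c) / total for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Toy similarity check: exact match after normalization. A real system
# would likely use an embedding model or an LM-based equivalence judgment.
def exact_match(a, b):
    return a.strip().lower() == b.strip().lower()

# Example: hypothetical intents sampled for an ambiguous question.
samples = ["asking about Obama", "asking about Obama", "asking about Lincoln"]
print(f"intent entropy: {intent_entropy(samples, exact_match):.3f} nats")
```

High entropy means the sampled intents disagree, so a clarifying question is likely to pay off; near-zero entropy means the model already converges on a single reading of the input.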

The paper evaluates the approach on the three NLP applications above and shows that, even when clarification is permitted on only 10% of examples, selecting those examples with intent-sim roughly doubles the performance gains obtained by selecting examples to clarify at random.
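Operationally, such a clarification budget can be enforced by ranking examples with an uncertainty score (such as the intent entropy sketched above) and clarifying only the top fraction. The sketch below assumes a simple top-k selection; the function name and parameters are illustrative and not taken from the paper.

```python
def select_for_clarification(scores, budget_fraction=0.10):
    """Return indices of the highest-uncertainty examples within the budget."""
    k = max(1, int(len(scores) * budget_fraction))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

# Example: with 20 inputs and a 10% budget, only the 2 most uncertain
# examples are routed to a clarifying question.
scores = [0.1, 0.9, 0.2, 1.3] + [0.05] * 16
print(select_for_clarification(scores))  # {1, 3}
```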

Moreover, the authors note that intent-sim is robust, yielding consistent improvements across different tasks and LMs, which suggests wide applicability. This work is a step forward in understanding and implementing interactive clarification mechanisms in LMs, paving the way for more nuanced and effective AI systems.

Authors (2)
  1. Michael J. Q. Zhang (12 papers)
  2. Eunsol Choi (76 papers)
Citations (12)