Interaction Configurations and Prompt Guidance in Conversational AI for Question Answering in Human-AI Teams (2505.01648v1)

Published 3 May 2025 in cs.HC

Abstract: Understanding the dynamics of human-AI interaction in question answering is crucial for enhancing collaborative efficiency. Extending from our initial formative study, which revealed challenges in human utilization of conversational AI support, we designed two configurations for prompt guidance: a Nudging approach, where the AI suggests potential responses for human agents, and a Highlight strategy, emphasizing crucial parts of reference documents to aid human responses. Through two controlled experiments, the first involving 31 participants and the second involving 106 participants, we compared these configurations against traditional human-only approaches, both with and without AI assistance. Our findings suggest that effective human-AI collaboration can enhance response quality, though merely combining human and AI efforts does not ensure improved outcomes. In particular, the Nudging configuration was shown to help improve the quality of the output when compared to AI alone. This paper delves into the development of these prompt guidance paradigms, offering insights for refining human-AI collaborations in conversational question-answering contexts and contributing to a broader understanding of human perceptions and expectations in AI partnerships.

Summary

Interaction Configurations and Prompt Guidance in Conversational AI for Question Answering in Human-AI Teams

The paper "Interaction Configurations and Prompt Guidance in Conversational AI for Question Answering in Human-AI Teams" investigates the dynamics of human-AI collaboration in question-answering tasks, emphasizing the importance of effective interaction configurations to optimize outcomes within these partnerships. Conducted by Song et al., this paper explores two specific interaction settings: Nudging and Highlight, aiming to enhance collaborative efficiency between humans and AI agents.

Methodology and Experimental Design

The research builds on an initial formative study that identified challenges in how humans utilize conversational AI support. Based on these findings, the authors developed two distinct configurations for prompt guidance: Nudging, in which the AI suggests potential responses, and Highlight, in which the AI emphasizes key parts of reference documents to assist human responses (a minimal sketch of how these modes might be implemented follows the study list below). The paper reports two controlled experiments:

  1. Study 1 had 31 participants construct responses to questions under the assigned configurations, including Nudging and Highlight, as well as a human-only condition and a human-AI condition without additional guidance.
  2. Study 2 had 106 raters evaluate the quality of the responses generated in Study 1, together with responses from an AI-only condition using GPT-4.
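
To make the two prompt-guidance modes concrete, the following is a minimal sketch assuming an OpenAI-style chat-completions client. The paper does not publish its implementation, so the prompt wording and the function names `nudge` and `highlight` are illustrative assumptions; only the AI-only baseline is reported to use GPT-4.

```python
# Illustrative sketch of the Nudging and Highlight guidance modes.
# Assumes an OpenAI-style client; prompts are not the paper's own.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def nudge(question: str, reference: str) -> str:
    """Nudging: ask the model to draft a candidate answer that the human
    agent can accept, edit, or discard."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Draft a concise candidate answer for a human agent "
                        "to review. Base it only on the reference text."},
            {"role": "user",
             "content": f"Reference:\n{reference}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content


def highlight(question: str, reference: str) -> str:
    """Highlight: ask the model to surface the most relevant passages of the
    reference document instead of answering directly."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Quote verbatim the sentences from the reference that "
                        "are most relevant to the question. Do not answer it."},
            {"role": "user",
             "content": f"Reference:\n{reference}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```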

Results and Analysis

The Nudging configuration significantly improved response quality compared to AI alone, whereas merely pairing humans with AI did not guarantee better outcomes. Qualitative and quantitative analyses revealed several insights:

  • Successful Collaboration: Effective collaboration was not automatic; the Nudging strategy demonstrated a statistically significant improvement in interaction success, evidenced by participants' increased reliance on response-building prompts.
  • Factors Influencing Success: Successful responses correlated with users' active engagement and strategic interaction, including meta-level questioning and paraphrasing, indicating the need for guided interaction pathways.
  • Preference and Perception: Despite biases towards human-generated content, raters qualitatively perceived human-AI collaboration responses as preferable, highlighting a complex interplay between subjective biases and objective response quality.

Design Recommendations and Future Implications

For enhancing human-AI collaboration, the paper recommends:

  1. Query Shortcuts: Offering predefined prompts aligned with tasks the AI handles well guides users toward constructive engagement (see the sketch after this list).
  2. Meta Prompting: Encouraging users to ask the AI about its capabilities helps them discover useful interactions, which is especially valuable for those unfamiliar with AI tools.
  3. Shared Vocabulary: Keeping the terminology used by users and the AI consistent reduces interpretation failures in AI-assisted tasks.
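
As a rough illustration of how these recommendations might surface in an agent-facing interface, the sketch below maps them to fixed prompt templates. The paper presents them as design guidelines rather than code, so the template names and wording here are assumptions.

```python
# Illustrative mapping of the design recommendations to fixed prompt
# templates (an assumption; the paper states guidelines, not an API).

# Query Shortcuts: one-click prompts aligned with tasks the AI handles well.
QUERY_SHORTCUTS = {
    "paraphrase": "Paraphrase the question in one sentence.",
    "draft": "Draft a short answer grounded only in the reference document.",
    "locate": "Quote the reference passages most relevant to the question.",
}

# Meta Prompting: a canned opener that lets users discover AI capabilities.
META_PROMPT = ("What kinds of help can you provide for answering questions "
               "from a reference document?")


def build_prompt(shortcut: str, question: str, reference: str) -> str:
    """Expand a shortcut into a full prompt. Fixed templates keep the
    vocabulary shared between every user and the AI (Shared Vocabulary)."""
    return (f"{QUERY_SHORTCUTS[shortcut]}\n\n"
            f"Reference:\n{reference}\n\n"
            f"Question: {question}")
```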

Conclusion

This research provides crucial insights into the factors that optimize human-AI collaboration in question-answering scenarios. By dissecting the effectiveness of prompt guidance configurations, the paper contributes to the broader understanding of human-AI interaction dynamics. The findings serve as groundwork for further exploration into adaptive interaction designs that improve user experiences and task outcomes in AI-assisted contexts. Future developments could explore more nuanced configurations, adaptive strategies, and robust methods to address shared vocabulary and biases in human-AI collaborative settings.