Emerging Reliance Behaviors in Human-AI Content Grounded Data Generation: The Role of Cognitive Forcing Functions and Hallucinations (2409.08937v2)
Abstract: We investigate the impact of hallucinations and Cognitive Forcing Functions in human-AI collaborative content-grounded data generation, focusing on the use of LLMs to assist in generating high-quality conversational data. Through a study with 34 users who each completed 8 tasks (n=272), we found that hallucinations significantly reduce data quality. While Cognitive Forcing Functions do not always alleviate these effects, their presence influences how users integrate AI responses. Specifically, we observed emerging reliance behaviors, with users often appending AI-generated responses to their own correct answers, even when the AI's suggestions conflicted with those answers. This points to a potential drawback of Cognitive Forcing Functions, particularly when AI suggestions are inaccurate. Users who overrelied on AI-generated text produced lower-quality data, highlighting the nuanced dynamics of overreliance in human-LLM collaboration compared to traditional human-AI decision-making.