Influence of Solution Efficiency and Valence of Instruction on Additive and Subtractive Solution Strategies in Humans and GPT-4 (2404.16692v3)

Published 25 Apr 2024 in cs.CL and cs.AI

Abstract: Generative artificial intelligences, particularly LLMs, play an increasingly prominent role in human decision-making contexts, necessitating transparency about their capabilities. While prior studies have shown addition biases in humans (Adams et al., 2021) and OpenAI's GPT-3 (Winter et al., 2023), this study extends the research by comparing human and GPT-4 problem-solving across both spatial and linguistic tasks, with variations in solution efficiency and valence of task instruction. Four preregistered experiments with 588 participants from the U.S. and 680 GPT-4 iterations revealed a stronger tendency towards additive transformations in GPT-4 than in humans. Human participants were less likely to use additive strategies when subtraction was relatively more efficient than when addition and subtraction were equally efficient. GPT-4 exhibited the opposite behavior, with a strong addition bias when subtraction was more efficient. In terms of valence of task instruction, GPT-4's use of additive strategies increased when instructed to "improve" (positive) rather than "edit" (neutral). These findings demonstrate that biases in human problem-solving are amplified in GPT-4, and that LLM behavior differs from human efficiency-based strategies. This highlights the limitations of LLMs and the need for caution when using them in real-world applications.
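The abstract describes the experimental manipulation at a high level: GPT-4 is given the same task under instructions of different valence ("improve" vs. "edit"), and its responses are scored as additive or subtractive transformations. As a rough illustration of how such a prompt-variation trial could be run, here is a minimal Python sketch. It is not the authors' code: the task text, the exact instruction wording, the model name, and the length-based classification heuristic are all illustrative assumptions.

```python
# Hypothetical sketch (not the authors' protocol): vary the valence of the
# task instruction ("improve" vs. "edit") and record whether the model's
# response adds or removes material. Prompts, task, and scoring are assumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A toy linguistic task that can be solved either by adding or by removing
# words; removing is the shorter (more efficient) fix here.
TASK_TEXT = "red red blue blue green green green yellow yellow"

INSTRUCTIONS = {
    "positive": "Improve the following sequence so every colour appears equally often:",
    "neutral": "Edit the following sequence so every colour appears equally often:",
}


def run_trial(valence: str) -> str:
    """Send one instruction variant to the model and return its answer."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": f"{INSTRUCTIONS[valence]}\n\n{TASK_TEXT}"}
        ],
        temperature=1.0,
    )
    return response.choices[0].message.content


def classify(answer: str) -> str:
    """Crude heuristic: an answer longer than the input counts as additive."""
    return "additive" if len(answer.split()) > len(TASK_TEXT.split()) else "subtractive"


if __name__ == "__main__":
    for valence in INSTRUCTIONS:
        print(valence, "->", classify(run_trial(valence)))
```

In the paper itself, each condition was run over many independent GPT-4 iterations (680 in total) so that the proportion of additive responses could be compared across efficiency and valence conditions; the sketch above shows only a single trial per condition.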
