
On Psychology of AI -- Does Primacy Effect Affect ChatGPT and Other LLMs? (2504.20444v1)

Published 29 Apr 2025 in cs.CL and cs.AI

Abstract: We study the primacy effect in three commercial LLMs: ChatGPT, Gemini and Claude. We do this by repurposing the famous experiment Asch (1946) conducted using human subjects. The experiment is simple: given two candidates with equivalent descriptions, which one is preferred if one description lists positive adjectives before negative ones and the other lists negative adjectives before positive ones? We test this in two experiments. In one experiment, LLMs are given both candidates simultaneously in the same prompt, and in another experiment, LLMs are given both candidates separately. We test all the models with 200 candidate pairs. We found that, in the first experiment, ChatGPT preferred the candidate with positive adjectives listed first, while Gemini preferred both equally often. Claude refused to make a choice. In the second experiment, ChatGPT and Claude were most likely to rank both candidates equally. In the cases where they did not give an equal rating, both showed a clear preference for the candidate that had negative adjectives listed first. Gemini was most likely to prefer a candidate with negative adjectives listed first.

Summary

Analysis of Primacy Effect in LLMs

This paper investigates the presence of the primacy effect in LLMs, focusing specifically on ChatGPT, Gemini, and Claude. Utilizing a conceptual framework inspired by Asch's 1946 psychological experiments, the authors examine how these models process adjectival descriptions in different sequences and whether they demonstrate biases similar to those observed in human cognition.

Experimental Design and Results

Two experiments were conducted to probe the primacy effect. The first experiment presented each LLM with descriptions of two candidates simultaneously, with the same adjectives listed in opposite orders: one description began with positive adjectives followed by negative ones, the other with negative adjectives followed by positive ones. Across 200 candidate pairs, ChatGPT tended to prefer the candidate whose positive adjectives appeared first, Gemini showed no preference, and Claude consistently refused to choose.
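
A minimal sketch of how this simultaneous-presentation setup could be reproduced is shown below. The adjective lists, prompt wording, and the choice of the OpenAI chat API are illustrative assumptions for one of the three models, not the paper's exact materials.

```python
# Illustrative sketch of the simultaneous-presentation experiment (Experiment 1).
# Adjective lists, prompt wording, and model name are assumptions, not the
# authors' exact materials.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

POSITIVE = ["intelligent", "industrious", "skillful"]
NEGATIVE = ["stubborn", "envious", "impulsive"]

# Candidate A: positive adjectives first; Candidate B: negative adjectives first.
candidate_a = ", ".join(POSITIVE + NEGATIVE)
candidate_b = ", ".join(NEGATIVE + POSITIVE)

prompt = (
    "Two job candidates are described by the following adjectives.\n"
    f"Candidate A: {candidate_a}\n"
    f"Candidate B: {candidate_b}\n"
    "Which candidate do you prefer? Answer with 'A' or 'B'."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Repeating such a query over 200 adjective pairs and tallying answers of 'A', 'B', and refusals would yield preference counts of the kind summarized above.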

The second experiment refined the task by presenting candidate descriptions individually and asking the LLMs to rate each candidate on a scale of 1 to 5. This setup elicited ratings from Claude, circumventing the refusals it produced under simultaneous presentation. In this experiment, ChatGPT and Claude most often assigned equal ratings to both candidates; when they did not, both favored the candidate with negative adjectives listed first. Gemini, by contrast, was most likely to prefer the candidate with negative adjectives listed first outright.
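
A corresponding sketch of the separate-rating setup follows. The prompt text, the rating extraction, and the helper ask_llm are hypothetical stand-ins for whatever calls the authors actually made to ChatGPT, Gemini, or Claude.

```python
# Illustrative sketch of the separate-rating experiment (Experiment 2).
# Prompt wording and parsing are assumptions; `ask_llm` is a hypothetical
# stand-in for a call to one of the three commercial LLMs.
import re
from typing import Callable, Optional

def rate_candidate(ask_llm: Callable[[str], str], adjectives: list[str]) -> Optional[int]:
    """Ask the model to rate one candidate on a 1-5 scale and parse the reply."""
    prompt = (
        "A job candidate is described by the following adjectives: "
        + ", ".join(adjectives)
        + ". Rate this candidate on a scale from 1 (poor) to 5 (excellent). "
        "Answer with a single number."
    )
    reply = ask_llm(prompt)
    match = re.search(r"[1-5]", reply)
    return int(match.group()) if match else None  # None = unparseable or refusal

def compare(ask_llm: Callable[[str], str], positive: list[str], negative: list[str]) -> str:
    """Rate the positive-first and negative-first orderings of the same adjectives."""
    pos_first = rate_candidate(ask_llm, positive + negative)
    neg_first = rate_candidate(ask_llm, negative + positive)
    if pos_first is None or neg_first is None:
        return "no rating"
    if pos_first == neg_first:
        return "equal"
    return "positive-first preferred" if pos_first > neg_first else "negative-first preferred"

# Example with a dummy model that always answers "3":
print(compare(lambda prompt: "3", ["intelligent", "industrious"], ["stubborn", "envious"]))
```

Aggregating these outcomes over the 200 candidate pairs gives the distribution of equal ratings, positive-first preferences, and negative-first preferences reported for each model.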

Discussion and Implications

The findings reveal inconsistent behavior across the models in their susceptibility to the primacy effect. ChatGPT's inclination toward candidates described positively at the outset in the simultaneous setting, and the shift toward negative-first descriptions in the separate-rating setting (strongest in Gemini), suggest that order sensitivity varies with model architecture and training paradigms. This inconsistency across LLMs underscores potential ethical concerns, particularly in automated decision-making, where such cognitive biases could undermine fairness and transparency.

These experiments help establish the extent to which LLMs are influenced by cognitive biases akin to those found in human psychology, a critical consideration for researchers and developers. The implications of such biases are most pronounced in domains where equitability is paramount and users may lack the expertise to detect or mitigate algorithmic bias.

Future Directions

To address the challenges posed by cognitive biases in LLMs, further research should develop robust metrics for bias evaluation and enhance transparency in model development. Collaboration among AI developers, psychologists, and ethicists is essential to mitigating unintended bias and ensuring the responsible deployment of LLMs in sensitive applications.

Ultimately, this paper contributes to the broader discourse on how AI cognitive processes align with human psychology, highlighting order-dependent biases as an area requiring attention in these models. As AI systems permeate more aspects of societal operations, these insights offer valuable direction for future work on LLM safety and ethics.
