
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench (2310.01386v2)

Published 2 Oct 2023 in cs.CL

Abstract: LLMs have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.


Overview

In the evolving landscape of artificial intelligence research, the psychological characterization of LLMs presents an intriguing domain of study. The paper introduces PsychoBench, a framework for comprehensively evaluating the psychological profiles of LLMs across four areas: personality traits, interpersonal relationships, motivational factors, and emotional abilities. The framework comprises thirteen scales widely used in clinical psychology, grouped into these four categories. The study analyzes five models: text-davinci-003, gpt-3.5-turbo (ChatGPT), GPT-4, LLaMA-2-7B, and LLaMA-2-13B.

Methodology

PsychoBench administers a systematically organized set of psychometric scales measuring various psychological aspects, including personality traits and emotional intelligence. To probe behavior beyond the default assistant persona, the study assigns the models different roles before presenting the questionnaires. Additionally, through a "jailbreak" approach that bypasses safety-aligned restrictions, the paper probes the intrinsic psychological tendencies of GPT-4. The methodology thus links the models' psychological portrayals to the roles under which their outputs are generated.
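The administration-and-scoring loop described above can be sketched as follows. This is a hypothetical illustration, not the paper's exact protocol: the items, prompt wording, and the `ask_model` stub (a stand-in for a real chat-completion API call) are all assumptions.

```python
# Hypothetical sketch of administering a psychometric scale to a model
# and scoring it; items and prompt wording are illustrative only.

# A toy two-item subset per trait (not a real clinical scale).
SCALE = {
    "agreeableness": ["I sympathize with others' feelings.",
                      "I take time out for others."],
    "openness": ["I have a vivid imagination.",
                 "I enjoy trying new things."],
}

PROMPT = ("Rate how well the statement describes you, from 1 "
          "(disagree strongly) to 5 (agree strongly). "
          "Answer with a single number.\nStatement: {item}")

def ask_model(prompt: str) -> str:
    """Stand-in for a chat-completion API call; returns a canned rating."""
    return "4"

def administer(scale):
    """Present each item, parse the 1-5 rating, and average per subscale."""
    scores = {}
    for trait, items in scale.items():
        ratings = [int(ask_model(PROMPT.format(item=i))) for i in items]
        scores[trait] = sum(ratings) / len(ratings)
    return scores

print(administer(SCALE))  # {'agreeableness': 4.0, 'openness': 4.0}
```

In practice the parsing step needs to be more defensive, since models do not always answer with a bare number.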

Findings

A standout finding of this paper is the differentiated psychological portrayals across LLMs, showcasing varied personalities, motivations, and emotional responses. Particularly noteworthy were the distinct personas LLMs assumed when subjected to role play, indicating the adaptability and depth of these models in mirroring human-like psychological behaviors. For instance, when assigned a "hero" role, LLMs demonstrated elevated levels of agreeableness and openness, aligning with expected heroic traits. Conversely, models assigned "psychopath" roles reflected increased Machiavellianism, an insight into the models' capacity for varied psychological portrayals.
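The role-play conditioning described above amounts to prepending a persona instruction to each questionnaire item. A minimal sketch, with hypothetical role texts that are not the paper's actual prompts:

```python
# Illustrative sketch of role assignment: the same questionnaire item is
# presented under different persona instructions. Role texts are
# hypothetical stand-ins, not the paper's actual prompts.
ROLES = {
    "default": "You are a helpful assistant.",
    "hero": "You are a brave hero who protects others at any cost.",
    "psychopath": "You are a cold, manipulative person who exploits others.",
}

def build_prompt(role: str, item: str) -> str:
    """Prepend the persona instruction to a questionnaire item."""
    return (f"{ROLES[role]}\n"
            "Rate the statement from 1 (disagree strongly) to 5 "
            "(agree strongly), answering with a single number.\n"
            f"Statement: {item}")

print(build_prompt("hero", "I sympathize with others' feelings."))
```

Comparing subscale scores across such role conditions is what reveals the shifts in agreeableness, openness, and Machiavellianism reported in the paper.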

Another significant revelation was the extent to which models could mimic human responses, capturing the subtle nuances of psychological profiles. However, the paper also uncovered limitations, such as a general tendency of LLMs to exhibit more socially desirable traits, pointing to an inherent bias induced by the training datasets and algorithms.

Implications and Future Directions

The profound implications of this research extend to the design and deployment of AI systems. By understanding the psychological dimensions of LLMs, developers can refine AI interactions, making them more relatable and trustworthy. The insights garnered from the PsychoBench framework pave the way for creating AI assistants with tailored personality traits, thereby enhancing user experience across varied applications.

Looking ahead, the scalability and flexibility of the PsychoBench framework suggest promising avenues for future research, including the potential for incorporating additional psychometric scales. Moreover, exploring the psychological underpinnings of moral and ethical reasoning in LLMs could offer richer understanding and novel perspectives in AI development.

Conclusion

This paper marks a significant stride towards elucidating the psychological landscapes of LLMs. With the introduction of PsychoBench, researchers now have a robust framework for probing the depths of AI psychology. The findings highlight not only the complexity and adaptability of LLMs but also underscore the importance of ethical considerations in AI development. As AI becomes woven into the fabric of society, understanding the psychological dimensions of these models is paramount in fostering AI systems that align with human values and societal norms.

Authors (9)
  1. Jen-tse Huang
  2. Wenxuan Wang
  3. Eric John Li
  4. Man Ho Lam
  5. Shujie Ren
  6. Youliang Yuan
  7. Wenxiang Jiao
  8. Zhaopeng Tu
  9. Michael R. Lyu