Revisiting the Reliability of Psychological Scales on Large Language Models (2305.19926v5)

Published 31 May 2023 in cs.CL

Abstract: Recent research has focused on examining large language models' (LLMs) characteristics from a psychological standpoint, acknowledging the necessity of understanding their behavioral characteristics. The administration of personality tests to LLMs has emerged as a noteworthy area in this context. However, the suitability of employing psychological scales, initially devised for humans, on LLMs is a matter of ongoing debate. Our study aims to determine the reliability of applying personality assessments to LLMs, explicitly investigating whether LLMs demonstrate consistent personality traits. Analysis of 2,500 settings per model, including GPT-3.5, GPT-4, Gemini-Pro, and LLaMA-3.1, reveals that various LLMs show consistency in responses to the Big Five Inventory, indicating a satisfactory level of reliability. Furthermore, our research explores the potential of GPT-3.5 to emulate diverse personalities and represent various groups, a capability increasingly sought after in the social sciences for substituting human participants with LLMs to reduce costs. Our findings reveal that LLMs have the potential to represent different personalities with specific prompt instructions.

Evaluating Personality Assessments in LLMs: Insights from the Big Five Inventory on gpt-3.5-turbo

Introduction

The integration of LLMs into a wide array of applications underscores the imperative to understand their behavioral characteristics. A novel area of exploration involves administering psychological scales, initially designed for humans, to LLMs. Amid ongoing debate concerning the suitability of such methodologies, our focused examination yields notable findings on the application of personality assessments, especially the Big Five Inventory, to gpt-3.5-turbo. This paper systematically investigates the reliability of these scales under diverse conditions and explores the model's potential to replicate diverse personality traits effectively.

Examining the Reliability

The reliability of LLM responses to psychological scales is critically examined across several influencing factors: instruction nuances, item rephrasing, language diversity, choice labeling, and choice sequence. Our investigation, spanning 2,500 settings, confirms gpt-3.5-turbo's reliability on the Big Five Inventory, establishing consistency in responses despite the complexities introduced by these variables. Such findings challenge the claim that LLMs cannot maintain stable personality traits under varied prompts and conditions.
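
To make the setting space concrete, the sketch below enumerates prompt variants by crossing the five factors named above. All concrete values (instruction wordings, languages, label styles, orderings) are placeholders for illustration; they are not the paper's exact 2,500 settings.

```python
# Illustrative sketch: enumerating prompt settings by crossing the five
# perturbation factors examined in the paper. Every concrete value below is a
# placeholder, not the authors' exact wording or configuration.
from itertools import product

instructions = [                                   # instruction nuances
    "Rate how well each statement describes you.",
    "Indicate your level of agreement with each item.",
]
item_variants = ["original", "paraphrased"]        # item rephrasing
languages = ["en", "zh", "es"]                     # language diversity
label_styles = ["1-5", "A-E"]                      # choice labeling
orderings = ["ascending", "descending"]            # choice sequence

settings = list(product(instructions, item_variants, languages, label_styles, orderings))
print(f"{len(settings)} distinct prompt settings")  # each one is administered to the model
```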

Framework Design and Implementation

This paper's methodological robustness stems from its comprehensive framework that dissects the components influencing LLM responses: from the phrasing of instructions and the rewording of items to the use of multiple languages and the presentation of choices. Notably, this approach uncovered gpt-3.5-turbo's consistent performance across the spectrum of tests, thereby supporting the model's reliability in psychological assessment contexts.
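
One standard way to quantify the consistency such a framework measures is Cronbach's coefficient alpha over repeated administrations of a scale. The snippet below is a minimal, generic implementation of that statistic; it illustrates the kind of reliability analysis involved rather than reproducing the authors' exact computation, and the demo data are made up.

```python
# Minimal sketch of Cronbach's alpha for a matrix of Likert responses.
# Rows are repeated administrations (e.g., different prompt settings),
# columns are items of one subscale. The demo matrix is fabricated.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed score
    return n_items / (n_items - 1) * (1 - item_vars.sum() / total_var)

demo = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(demo), 3))  # values near 1.0 indicate consistent responses
```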

The Mechanism behind Personality Representation

The exploration extends to how instructional contexts or adjustments can shape the personality portrayals of LLMs. Techniques ranging from environmental cueing to direct personality assignment and character embodiment were employed to assess gpt-3.5-turbo's adaptability. Our findings illustrate the model's capacity to accurately represent a broad array of personalities, responding distinctly to each manipulation method.
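
The sketch below illustrates the three manipulation styles as system prompts and administers a single BFI-style item under each. The prompt wordings and the helper function are hypothetical, and the call assumes the openai>=1.0 Python client with an OPENAI_API_KEY set in the environment; this is not the authors' exact protocol.

```python
# Hypothetical illustration of the three persona-manipulation styles as system
# prompts; the wordings are made up, not the paper's. Assumes openai>=1.0 and
# an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

PERSONA_PROMPTS = {
    # environmental cueing: describe a situation rather than naming a trait
    "environment": "You have just spent a quiet evening alone reading, which you found deeply restful.",
    # direct personality assignment: state the target trait level explicitly
    "assignment": "You are a person who is very low in Extraversion.",
    # character embodiment: role-play a (fictional) character
    "embodiment": "You are Alex, a reserved librarian who avoids large social gatherings.",
}

def administer_item(persona: str, item: str) -> str:
    """Ask the model to rate one BFI-style item from 1 (disagree) to 5 (agree)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system", "content": PERSONA_PROMPTS[persona]},
            {"role": "user", "content": f"On a scale of 1 (disagree strongly) to 5 "
                                        f"(agree strongly): '{item}'. Reply with a single number."},
        ],
    )
    return response.choices[0].message.content.strip()

print(administer_item("assignment", "I see myself as someone who is talkative."))
```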

Discussion on Methodological Insights and Limitations

The paper acknowledges potential limitations arising from modifications to the original scales and the sole focus on the gpt-3.5-turbo model due to resource constraints. Despite these limitations, the research presents a detailed examination of LLMs’ response reliability to psychological scales, contributing a novel perspective to the discourse on the applicability and interpretation of such assessments in non-human intelligences.

Conclusion and Future Directives

Our work underscores gpt-3.5-turbo's ability to demonstrate stable and distinct personality traits as assessed by the Big Five Inventory, affirming the potential of LLMs to simulate human-like personality responses. This research paves the way for future studies to explore broader applications of psychological scales on various LLMs, potentially enhancing the development of AI systems that are not only more relatable but also better able to mirror human psychological diversity in digital interactions.

Authors (6)
  1. Jen-tse Huang
  2. Wenxuan Wang
  3. Man Ho Lam
  4. Eric John Li
  5. Wenxiang Jiao
  6. Michael R. Lyu