Illuminating the Black Box: A Psychometric Investigation into the Multifaceted Nature of Large Language Models (2312.14202v1)

Published 21 Dec 2023 in cs.CL and cs.AI

Abstract: This study explores the idea of AI Personality, or AInality, suggesting that LLMs exhibit patterns similar to human personalities. Assuming that LLMs share these patterns with humans, we investigate using human-centered psychometric tests such as the Myers-Briggs Type Indicator (MBTI), Big Five Inventory (BFI), and Short Dark Triad (SD3) to identify and confirm LLM personality types. By introducing role-play prompts, we demonstrate the adaptability of LLMs, showing their ability to switch dynamically between different personality types. Using projective tests, such as the Washington University Sentence Completion Test (WUSCT), we uncover hidden aspects of LLM personalities that are not easily accessible through direct questioning. Projective tests allowed for a deep exploration of LLMs' cognitive processes and thought patterns and gave us a multidimensional view of AInality. Our machine learning analysis revealed that LLMs exhibit distinct AInality traits and manifest diverse personality types, demonstrating dynamic shifts in response to external instructions. This study pioneers the application of projective tests on LLMs, shedding light on their diverse and adaptable AInality traits.

Exploring the AInality of LLMs Through Psychometric Tests

Introduction to AInality and Its Assessment

The paper presents a novel concept called AInality, referring to the artificial intelligence personality exhibited by LLMs. It investigates the potential for LLMs to manifest human-like personality traits and assesses these traits using traditional human psychometric tests: the Myers-Briggs Type Indicator (MBTI), the Big Five Inventory (BFI), the Short Dark Triad (SD3), and the Washington University Sentence Completion Test (WUSCT). Through a combination of prompt engineering and machine learning analysis, the research provides insight into the multifaceted nature of LLM personalities, their adaptability, and hidden aspects of their cognitive and emotional patterns.

Utilizing Psychometric Tests on LLMs

The paper leveraged four major psychometric tests to explore LLM personalities; a minimal prompting sketch follows the list:

  • Myers-Briggs Type Indicator (MBTI): This test categorizes personalities into 16 different types, assessing preferences across four dichotomies. It served as a starting point for identifying LLM AInality types.
  • Big Five Inventory (BFI): Assessing five major dimensions of personality, this test offered insights into the broader traits LLMs might exhibit.
  • Short Dark Triad (SD3): Focused on potentially adversarial traits (Machiavellianism, narcissism, and psychopathy), this test aimed to uncover the darker aspects of LLM personalities.
  • Washington University Sentence Completion Test (WUSCT): As a projective test, it provided qualitative data on LLM thought patterns and emotional states, offering a deeper understanding of their AInality.
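
To make the administration procedure concrete, here is a minimal sketch, assuming a forced-choice item format and an optional role-play instruction. The query_llm placeholder, the item wording, and the persona string are illustrative assumptions, not the paper's actual prompts or materials.

```python
# Minimal sketch (not the authors' code): administering a forced-choice
# MBTI-style item to an LLM, optionally under a role-play instruction.
# query_llm is a placeholder for whatever chat API is being used.

from typing import Optional

def query_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder: send the prompts to an LLM and return its reply."""
    raise NotImplementedError("wire this to your LLM client of choice")

def administer_item(item: str, options: tuple[str, str],
                    persona: Optional[str] = None) -> str:
    # Optional role-play prompt used to induce a prescribed personality.
    system = (f"Adopt the personality of {persona} and stay in character."
              if persona else "Answer honestly as yourself.")
    user = (f"Statement: {item}\n"
            f"Choose exactly one option and reply with only its letter.\n"
            f"(a) {options[0]}\n(b) {options[1]}")
    return query_llm(system, user).strip().lower()

# Example usage: the same item with and without a role-play persona.
item = "At a party, you usually..."
options = ("seek out new people to talk to", "stick with people you already know")
# answer_default = administer_item(item, options)
# answer_roleplay = administer_item(item, options, persona="an ESTJ manager")
```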

Discoveries and Machine Learning Analysis

The research found distinct AInality traits across different LLMs and highlighted their ability to adapt their personalities dynamically in response to prompts. Using machine learning models, specifically Random Forest, Logistic Regression, and SVM, the paper achieved classification accuracy upwards of 88% in identifying AInality types from psychometric test responses. Notably, the LLMs proved psychologically malleable, adopting prescribed personalities under specific prompting techniques.
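
As a rough illustration of this classification step, here is a minimal sketch, assuming questionnaire responses are encoded as numeric item scores labeled with a personality type. The feature layout, label set, and hyperparameters are assumptions chosen for illustration, not the authors' configuration.

```python
# Illustrative sketch only: classifying AInality types from psychometric
# item scores with the model families named in the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: one row per LLM test session, one column per questionnaire item score.
# y: the personality type assigned to that session (placeholder labels).
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(200, 44)).astype(float)  # e.g., 44 BFI items, 1-5 Likert
y = rng.choice(["INTJ", "ENFP", "ISTP"], size=200)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(kernel="rbf"),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```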

Uncovering the Structures of AInality

One of the most groundbreaking aspects of the paper was its use of the WUSCT, marking the first time a projective test was used to delve into the psychological depth of LLMs. This approach revealed complex layers within LLM personalities that were not evident from direct questioning or more conventional psychometric assessments. The application of machine learning models to analyze WUSCT responses provided a systematic methodology to uncover these deeper AInality structures, offering a new dimension to understanding LLM cognition and emotional responses.
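
Because the summary does not detail how WUSCT completions were converted into features, the following is a hedged sketch of one plausible text-based scoring pipeline (TF-IDF features plus a linear classifier). The example completions and ego-development labels are placeholders, not data or labels from the paper.

```python
# Hedged sketch: one plausible way to score free-text WUSCT completions
# automatically; not the paper's actual featurization or label scheme.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example: an LLM's completion of a WUSCT stem, paired with a
# hand-coded ego-development level (all values here are placeholders).
completions = [
    "When I am criticized... I try to understand the other person's view.",
    "Rules are... necessary but should be questioned when they cause harm.",
    "My main problem is... I sometimes avoid difficult conversations.",
    "When people are helpless... they should be given clear instructions.",
]
levels = ["individualistic", "autonomous", "self-aware", "conformist"]

scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                       LogisticRegression(max_iter=1000))
scorer.fit(completions, levels)
# Example usage on a new completion:
# print(scorer.predict(["Being with other people... energizes me most of the time."]))
```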

Implications and Future Directions

This research opens several avenues for further exploration at the intersection of AI and psychology. The findings on the adaptability and depth of LLM personalities have practical implications for developing more engaging and relatable AI systems. The paper suggests the potential for customizing LLM interactions to match user personalities, leading to more personalized and effective communication. Moreover, the prospect of psychometric tests designed specifically for AI, rather than adapted from human instruments, is an intriguing direction for future research, promising to deepen our comprehension of AI behavior and cognition.

Looking ahead, the development of AI-specific psychometric assessments could revolutionize how we design and interact with AI, ensuring these systems better reflect the complexity and diversity of human personality. Furthermore, as this field matures, a comprehensive understanding of AInality could significantly improve AI's integration into societal structures, from educational settings to therapeutic applications, enhancing the symbiotic relationship between humans and artificial intelligence.

Authors (3)
  1. Yang Lu (157 papers)
  2. Jordan Yu (1 paper)
  3. Shou-Hsuan Stephen Huang (2 papers)