LLMs Simulate Big Five Personality Traits: Further Evidence (2402.01765v1)

Published 31 Jan 2024 in cs.CL and cs.AI

Abstract: An empirical investigation into the simulation of the Big Five personality traits by LLMs, namely Llama2, GPT4, and Mixtral, is presented. We analyze the personality traits simulated by these models and their stability. This contributes to the broader understanding of the capabilities of LLMs to simulate personality traits and the respective implications for personalized human-computer interaction.
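The investigation administers Big Five personality inventories to the models and examines how stable the resulting trait profiles are. As an illustration only, not the authors' code, the sketch below shows one way Likert-scale answers to IPIP-style items could be aggregated into trait scores; the item texts, item keys, and `model_responses` values are hypothetical placeholders.

```python
# Illustrative sketch (not from the paper): scoring Likert-scale answers
# an LLM gives to Big Five questionnaire items. Item texts, keys, and
# responses below are hypothetical placeholders.

from statistics import mean

# Each item maps to (trait, reverse_keyed). Responses use a 1-5 Likert scale.
ITEM_KEY = {
    "I am the life of the party.": ("Extraversion", False),
    "I don't talk a lot.": ("Extraversion", True),
    "I get stressed out easily.": ("Neuroticism", False),
    "I am relaxed most of the time.": ("Neuroticism", True),
}

def score_big_five(responses: dict[str, int], scale_max: int = 5) -> dict[str, float]:
    """Average per-trait scores, reversing reverse-keyed items."""
    by_trait: dict[str, list[int]] = {}
    for item, answer in responses.items():
        trait, reverse = ITEM_KEY[item]
        value = (scale_max + 1 - answer) if reverse else answer
        by_trait.setdefault(trait, []).append(value)
    return {trait: mean(values) for trait, values in by_trait.items()}

# Hypothetical answers parsed from a model's questionnaire output.
model_responses = {
    "I am the life of the party.": 4,
    "I don't talk a lot.": 2,
    "I get stressed out easily.": 2,
    "I am relaxed most of the time.": 4,
}

print(score_big_five(model_responses))
# e.g. {'Extraversion': 4.0, 'Neuroticism': 2.0}
```

Stability of simulated traits could then be assessed by repeating the questionnaire across runs or prompt variations and comparing the resulting scores, which mirrors the kind of analysis the abstract describes.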
