Probing the Creativity of Large Language Models: Can models produce divergent semantic association? (2310.11158v1)

Published 17 Oct 2023 in cs.CL and cs.LG

Abstract: LLMs possess a remarkable capacity for processing language, but it remains unclear whether these models can also generate creative content. The present study investigates the creative thinking of LLMs from a cognitive perspective. We use the divergent association task (DAT), an objective measure of creativity that asks models to generate unrelated words and computes the semantic distance between them. We compare results across different models and decoding strategies. Our findings indicate that: (1) When using the greedy search strategy, GPT-4 outperforms 96% of humans, while GPT-3.5-turbo exceeds the average human level. (2) Stochastic sampling and temperature scaling are effective in obtaining higher DAT scores for models other than GPT-4, but face a trade-off between creativity and stability. These results imply that advanced LLMs have divergent semantic associations, which is a fundamental process underlying creativity.

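The DAT scores a list of nominally unrelated words by the average pairwise semantic distance between their embeddings. Below is a minimal sketch of that scoring idea, assuming generic pre-trained word vectors and cosine distance; the function name `dat_score` and the random stand-in embeddings are illustrative and not the paper's exact pipeline.

```python
from itertools import combinations
import numpy as np

def dat_score(word_vectors):
    """Average pairwise cosine distance between word embeddings, scaled to 0-100."""
    distances = []
    for a, b in combinations(word_vectors, 2):
        cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        distances.append(1.0 - cos_sim)
    return 100.0 * float(np.mean(distances))

# Random 300-d vectors stand in for real embeddings of the generated words.
rng = np.random.default_rng(0)
fake_embeddings = [rng.standard_normal(300) for _ in range(10)]
print(f"DAT-style score: {dat_score(fake_embeddings):.1f}")
```

Higher scores indicate the words are farther apart in embedding space, i.e., more divergent associations; the paper compares such scores across models and across greedy versus temperature-scaled sampling.
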
Authors (2)
  1. Honghua Chen (25 papers)
  2. Nai Ding (15 papers)
Citations (7)