GPT-ology, Computational Models, Silicon Sampling: How should we think about LLMs in Cognitive Science? (2406.09464v1)

Published 13 Jun 2024 in cs.AI

Abstract: LLMs have taken the cognitive science world by storm. It is perhaps timely now to take stock of the various research paradigms that have been used to make scientific inferences about "cognition" in these models or about human cognition. We review several emerging research paradigms -- GPT-ology, LLMs-as-computational-models, and "silicon sampling" -- and survey papers that have used LLMs under these paradigms. In doing so, we discuss their claims as well as the challenges to scientific inference under these various paradigms. We highlight several outstanding issues about LLMs that have to be addressed to push our science forward: closed-source vs. open-source models; (the lack of visibility of) training data; and reproducibility in LLM research, including forming conventions on new task "hyperparameters" like instructions and prompts.

Authors (1)
  1. Desmond C. Ong
Citations (2)