Revealing the structure of language model capabilities (2306.10062v1)
Abstract: Building a theoretical understanding of the capabilities of large language models (LLMs) is vital for our ability to predict and explain the behavior of these systems. Here, we investigate the structure of LLM capabilities by extracting latent capabilities from patterns of individual differences across a varied population of LLMs. Using a combination of Bayesian and frequentist factor analysis, we analyzed data from 29 different LLMs across 27 cognitive tasks. We found evidence that LLM capabilities are not monolithic. Instead, they are better explained by three well-delineated factors representing reasoning, comprehension, and core language modeling. Moreover, these three factors explain a high proportion of the variance in model performance. These results reveal a consistent structure in the capabilities of different LLMs and demonstrate the multifaceted nature of these capabilities. We also found that the three abilities show different relationships to model properties such as model size and instruction tuning. These patterns help refine our understanding of scaling laws and indicate that changes to a model that improve one ability might simultaneously impair others. Based on these findings, we suggest that benchmarks could be streamlined by focusing on tasks that tap into each broad model ability.
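The core analysis described in the abstract is a factor analysis of a models-by-tasks performance matrix. Below is a minimal illustrative sketch of that kind of analysis, assuming a 29 x 27 score matrix. It uses scikit-learn's FactorAnalysis with varimax rotation on synthetic data as a stand-in; the paper itself used Bayesian and frequentist exploratory factor analysis (e.g., R tooling such as lavaan), so this is not the authors' pipeline.

```python
# Illustrative sketch only: synthetic data, not the paper's dataset or method.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_models, n_tasks, n_factors = 29, 27, 3

# Synthetic "benchmark scores": 29 models x 27 tasks, generated from
# 3 hypothetical latent abilities plus noise.
latent = rng.normal(size=(n_models, n_factors))
loadings_true = rng.normal(size=(n_factors, n_tasks))
scores = latent @ loadings_true + 0.3 * rng.normal(size=(n_models, n_tasks))

# Standardize each task's scores, then fit a 3-factor model.
z = StandardScaler().fit_transform(scores)
fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
factor_scores = fa.fit_transform(z)   # per-model scores on each latent factor
factor_loadings = fa.components_.T    # task x factor loading matrix

# Rough proxy for variance explained: 1 minus the mean residual (unique) variance
# of the standardized task scores.
explained = 1 - fa.noise_variance_.mean()
print(factor_loadings.shape, factor_scores.shape, round(float(explained), 2))
```

Inspecting which tasks load heavily on which factor is what would let one label the factors (e.g., reasoning vs. comprehension), and it is also the basis for the abstract's suggestion that benchmarks could be streamlined to a few tasks per broad ability.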
Authors: Ryan Burnell, Han Hao, Andrew R. A. Conway, Jose Hernandez Orallo