FoundaBench: Evaluating Chinese Fundamental Knowledge Capabilities of Large Language Models (2404.18359v1)
Abstract: In the burgeoning field of large language models (LLMs), the assessment of fundamental knowledge remains a critical challenge, particularly for models tailored to the Chinese language and culture. This paper introduces FoundaBench, a pioneering benchmark designed to rigorously evaluate the fundamental knowledge capabilities of Chinese LLMs. FoundaBench comprises 3354 multiple-choice questions spanning common sense and K-12 educational subjects, meticulously curated to reflect the breadth and depth of everyday and academic knowledge. We present an extensive evaluation of 12 state-of-the-art LLMs using FoundaBench, employing both traditional assessment methods and our CircularEval protocol to mitigate potential biases in model responses. Our results highlight the superior performance of models pre-trained on Chinese corpora and reveal a significant disparity between models' reasoning and memory-recall capabilities. The insights gleaned from FoundaBench evaluations set a new standard for understanding the fundamental knowledge of LLMs, providing a robust framework for future advances in the field.
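The CircularEval protocol mentioned in the abstract re-asks each multiple-choice question with its answer options circularly shifted, and credits the model only if it answers correctly under every rotation, which cancels any preference for a particular option letter. Below is a minimal Python sketch of that idea; the function names (`circular_eval`, `ask_model`) and the prompt format are illustrative assumptions, not the paper's released code.

```python
from typing import Callable, List

def circular_eval(question: str,
                  options: List[str],
                  answer_idx: int,
                  ask_model: Callable[[str], int]) -> bool:
    """Credit a multiple-choice item only if the model picks the correct
    option under every circular rotation of the answer choices."""
    n = len(options)
    labels = "ABCDEFGH"[:n]
    for shift in range(n):
        # Rotate the option list; the gold answer's index moves with it.
        rotated = options[shift:] + options[:shift]
        gold = (answer_idx - shift) % n
        prompt = question + "\n" + "\n".join(
            f"{labels[i]}. {opt}" for i, opt in enumerate(rotated))
        if ask_model(prompt) != gold:
            return False  # a single failed rotation fails the item
    return True

if __name__ == "__main__":
    # Toy "model" that always answers A (index 0), i.e. pure position bias.
    q = "Which planet is known as the Red Planet?"
    opts = ["Mars", "Venus", "Jupiter", "Saturn"]
    print(circular_eval(q, opts, answer_idx=0, ask_model=lambda _: 0))
    # -> False: the always-A model passes the first rotation but fails the rest.
```

Because an item must survive all n rotations, CircularEval is strictly harder than single-pass accuracy, which is why the paper reports it alongside traditional assessment methods.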
- Wei Li (1121 papers)
- Ren Ma (5 papers)
- Jiang Wu (58 papers)
- Chenya Gu (3 papers)
- Jiahui Peng (7 papers)
- Jinyang Len (1 paper)
- Songyang Zhang (116 papers)
- Hang Yan (86 papers)
- Dahua Lin (336 papers)
- Conghui He (114 papers)