C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models (2305.08322v3)

Published 15 May 2023 in cs.CL

Abstract: New NLP benchmarks are urgently needed to align with the rapid development of LLMs. We present C-Eval, the first comprehensive Chinese evaluation suite designed to assess advanced knowledge and reasoning abilities of foundation models in a Chinese context. C-Eval comprises multiple-choice questions across four difficulty levels: middle school, high school, college, and professional. The questions span 52 diverse disciplines, ranging from humanities to science and engineering. C-Eval is accompanied by C-Eval Hard, a subset of very challenging subjects in C-Eval that requires advanced reasoning abilities to solve. We conduct a comprehensive evaluation of the most advanced LLMs on C-Eval, including both English- and Chinese-oriented models. Results indicate that only GPT-4 could achieve an average accuracy of over 60%, suggesting that there is still significant room for improvement for current LLMs. We anticipate C-Eval will help analyze important strengths and shortcomings of foundation models, and foster their development and growth for Chinese users.

C-Eval: Assessing Foundation Models in a Chinese Context

The paper presents the first comprehensive Chinese evaluation suite for LLMs, known as C-Eval. Developed in response to the rapid evolution and growing capabilities of LLMs, C-Eval is designed to evaluate these models within a Chinese context, focusing on their advanced knowledge and reasoning skills. The suite encompasses a diverse range of disciplines, structured across four educational levels: middle school, high school, college, and professional. This evaluation framework includes both C-Eval and a particularly challenging subset, C-Eval Hard, to rigorously test the limits of advanced reasoning abilities.

Benchmark Composition and Creation

C-Eval is notable for its breadth, comprising 13,948 multiple-choice questions spanning 52 disciplines, from the humanities and social sciences to science and engineering. The benchmark aims to reflect the real-world complexity and depth of Chinese culture and society, which simply translating existing English benchmarks does not adequately capture. Hence, C-Eval emphasizes evaluating LLMs on topics uniquely pertinent to Chinese users, such as local history, culture, and societal issues.
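
To make the question format concrete, here is a minimal sketch of one way to represent a C-Eval item and render it as a zero-shot prompt. The `CEvalItem` class, its field names, and the prompt template are illustrative assumptions, not the paper's exact schema or prompt wording.

```python
from dataclasses import dataclass

@dataclass
class CEvalItem:
    """One multiple-choice question (field names are assumed, not the official schema)."""
    question: str
    choices: dict   # e.g. {"A": "...", "B": "...", "C": "...", "D": "..."}
    answer: str     # gold label, e.g. "B"
    subject: str    # one of the 52 disciplines
    level: str      # middle school / high school / college / professional

def to_prompt(item: CEvalItem) -> str:
    """Render a simple zero-shot prompt; the template is an assumption, not the paper's."""
    options = "\n".join(f"{label}. {text}" for label, text in item.choices.items())
    return f"{item.question}\n{options}\n答案："
```

A model's reply can then be reduced to the first A/B/C/D label it produces and compared against the gold `answer` field.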

A critical aspect of the development was the data source selection to mitigate potential data contamination issues. The creators gathered questions from mock exams and local assessments, avoiding publicly available national exam questions that models might have been exposed to during training. This meticulous approach underscores the intention to provide an unbiased evaluation setting that genuinely assesses model competencies beyond prior exposure.

Evaluation of LLMs

The authors evaluated several state-of-the-art LLMs, including GPT-4, ChatGPT, and various Chinese-oriented models. Notably, only GPT-4 surpassed an average accuracy of 60%, highlighting the benchmark's difficulty and the substantial room for improvement left in current models. Interestingly, while some Chinese-oriented models, such as GLM-130B, performed competitively on tasks tied to Chinese knowledge, a significant gap remained on more general reasoning tasks, exemplified by their weaker performance in STEM disciplines.
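
For readers who want to see how such headline numbers are aggregated, the sketch below computes per-subject and average accuracy from a list of graded predictions. The record layout and the choice to average over subjects rather than over individual questions are assumptions about the evaluation protocol, not details taken from the paper.

```python
from collections import defaultdict

def accuracy_by_subject(records):
    """records: iterable of dicts with 'subject', 'pred', and 'answer' keys (assumed layout).

    Returns per-subject accuracy and the unweighted mean across subjects.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["subject"]] += 1
        correct[r["subject"]] += int(r["pred"] == r["answer"])
    per_subject = {s: correct[s] / total[s] for s in total}
    average = sum(per_subject.values()) / len(per_subject)
    return per_subject, average
```

Under this scheme, a model that answers 3 of 5 law questions and 4 of 5 physics questions correctly would score (0.6 + 0.8) / 2 = 0.7 on average, regardless of how many questions each subject contains.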

Implications and Future Directions

C-Eval's introduction offers several implications for the field of AI and natural language processing. Firstly, it provides a rigorous tool for evaluating the true capabilities of LLMs in non-English languages, an area that has been somewhat overlooked as the emphasis has largely been on English-language performance. Secondly, by highlighting deficiencies across various domains, C-Eval helps researchers and developers better understand where current models fall short and prioritize future improvements.

Looking forward, C-Eval serves as a call to action for the development and refinement of foundation models in diverse linguistic contexts. This benchmark not only aids in identifying the strengths and weaknesses of current LLMs but also encourages a contextual understanding of AI capabilities. Future iterations of such benchmarks may further explore additional languages and dialects or incorporate evaluation metrics beyond accuracy, such as robustness and ethical considerations, thereby broadening the scope and utility of these assessment tools.

The C-Eval initiative marks a significant step toward inclusive AI development, reflecting the broader goal of LLMs capable of serving global audiences. This paper lays a foundation for developing further region-specific evaluations, promoting more equitable advances in AI technologies worldwide.

Authors (13)
  1. Yuzhen Huang (15 papers)
  2. Yuzhuo Bai (8 papers)
  3. Zhihao Zhu (11 papers)
  4. Junlei Zhang (8 papers)
  5. Jinghan Zhang (18 papers)
  6. Tangjun Su (1 paper)
  7. Junteng Liu (8 papers)
  8. Chuancheng Lv (3 papers)
  9. Yikai Zhang (41 papers)
  10. Jiayi Lei (7 papers)
  11. Yao Fu (83 papers)
  12. Maosong Sun (337 papers)
  13. Junxian He (66 papers)
Citations (405)