SciKnowEval: Evaluating Multi-level Scientific Knowledge of Large Language Models (2406.09098v3)

Published 13 Jun 2024 in cs.CL

Abstract: LLMs have gained increasing prominence in scientific research, but there is a lack of comprehensive benchmarks to fully evaluate their proficiency in understanding and mastering scientific knowledge. To address this need, we introduce the SciKnowEval benchmark, a novel framework that systematically evaluates LLMs across five progressive levels of scientific knowledge: studying extensively, inquiring earnestly, thinking profoundly, discerning clearly, and practicing assiduously. These levels aim to assess the breadth and depth of scientific knowledge in LLMs, including memory, comprehension, reasoning, discernment, and application. Specifically, we first construct a large-scale evaluation dataset encompassing 70K multi-level scientific problems and solutions in the domains of biology, chemistry, physics, and materials science. By leveraging this dataset, we benchmark 26 advanced open-source and proprietary LLMs using zero-shot and few-shot prompting strategies. The results reveal that despite the state-of-the-art performance of proprietary LLMs, there is still significant room for improvement, particularly in addressing scientific reasoning and applications. We anticipate that SciKnowEval will establish a standard for benchmarking LLMs in science research and promote the development of stronger scientific LLMs. The dataset and code are publicly available at https://scimind.ai/sciknoweval.
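
The abstract notes that models are benchmarked with zero-shot and few-shot prompting. As a rough illustration of what such a protocol involves (this is a minimal sketch, not the authors' released evaluation code; the example question, answer choices, and the build_prompt helper are hypothetical), the snippet below shows how a zero-shot prompt differs from a few-shot one only by the prepended solved exemplars:

```python
# Minimal sketch of zero-shot vs. few-shot prompt construction for a
# multiple-choice science question. Illustrative only; SciKnowEval's actual
# harness, prompt templates, and scoring are defined in the released code.
from typing import List, Optional


def build_prompt(question: str, choices: List[str],
                 exemplars: Optional[List[dict]] = None) -> str:
    """Return a zero-shot prompt, or a few-shot prompt if exemplars are given."""
    lines = []
    # Few-shot: prepend solved demonstrations before the target question.
    if exemplars:
        for ex in exemplars:
            lines.append(f"Question: {ex['question']}")
            for label, choice in zip("ABCD", ex["choices"]):
                lines.append(f"{label}. {choice}")
            lines.append(f"Answer: {ex['answer']}")
            lines.append("")
    # Target question, answered by the model under evaluation.
    lines.append(f"Question: {question}")
    for label, choice in zip("ABCD", choices):
        lines.append(f"{label}. {choice}")
    lines.append("Answer:")
    return "\n".join(lines)


if __name__ == "__main__":
    q = "Which element has the highest electronegativity?"
    opts = ["Oxygen", "Fluorine", "Chlorine", "Nitrogen"]
    demo = [{"question": "What is the chemical symbol for sodium?",
             "choices": ["Na", "S", "So", "N"], "answer": "A"}]
    print(build_prompt(q, opts))        # zero-shot prompt
    print(build_prompt(q, opts, demo))  # few-shot (1-shot) prompt
```

The model's completion would then be compared against the gold answer label to score each of the benchmark's multi-level tasks.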

Authors (10)
  1. Kehua Feng (7 papers)
  2. Keyan Ding (18 papers)
  3. Weijie Wang (37 papers)
  4. Xiang Zhuang (10 papers)
  5. Zeyuan Wang (14 papers)
  6. Ming Qin (9 papers)
  7. Yu Zhao (207 papers)
  8. Jianhua Yao (50 papers)
  9. Qiang Zhang (466 papers)
  10. Huajun Chen (198 papers)
Citations (3)