
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research (2308.13149v2)

Published 25 Aug 2023 in cs.CL

Abstract: Recently, there has been growing interest in using LLMs for scientific research, and numerous benchmarks have been proposed to evaluate their ability in this domain. However, current benchmarks are mostly based on pre-collected objective questions. This design suffers from the data-leakage problem and lacks evaluation of subjective question-answering (Q/A) ability. In this paper, we propose SciEval, a comprehensive and multi-disciplinary evaluation benchmark to address these issues. Based on Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate scientific research ability. In particular, we design a "dynamic" subset based on scientific principles to protect the evaluation from potential data leakage. Both objective and subjective questions are included in SciEval. These characteristics make SciEval a more effective benchmark for evaluating the scientific research ability of LLMs. Comprehensive experiments on the most advanced LLMs show that, although GPT-4 achieves SOTA performance compared to other LLMs, there is still substantial room for improvement, especially on dynamic questions. The code and data are publicly available at https://github.com/OpenDFM/SciEval.
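
As a rough illustration of how the objective portion of a benchmark like SciEval is typically scored (exact-match accuracy over multiple-choice items), here is a minimal Python sketch. The file path and the field names ("question", "answer", "type") are assumptions for illustration only, not the repository's actual schema; consult https://github.com/OpenDFM/SciEval for the real data format and evaluation scripts.

```python
# Minimal sketch: exact-match accuracy on SciEval-style objective questions.
# File name and record keys below are hypothetical placeholders.
import json


def score_objective(records, predict):
    """Compute exact-match accuracy over objective (non-subjective) items.

    records: list of dicts with hypothetical keys "question", "answer", "type".
    predict: callable mapping a question string to a model's answer string.
    """
    objective = [r for r in records if r.get("type") != "subjective"]
    correct = sum(
        1 for r in objective
        if predict(r["question"]).strip() == r["answer"].strip()
    )
    return correct / max(len(objective), 1)


if __name__ == "__main__":
    # Hypothetical path; replace with the actual SciEval data file.
    with open("scieval_valid.json", encoding="utf-8") as f:
        data = json.load(f)
    # Trivial stand-in "model" that always answers "A"; swap in an LLM call.
    print(f"accuracy: {score_objective(data, lambda q: 'A'):.3f}")
```

Note that this covers only the objective subset; the paper's subjective Q/A and "dynamic" questions require generation-based judging rather than exact matching.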

Authors (8)
  1. Liangtai Sun (8 papers)
  2. Yang Han (62 papers)
  3. Zihan Zhao (37 papers)
  4. Da Ma (28 papers)
  5. Zhennan Shen (4 papers)
  6. Baocai Chen (2 papers)
  7. Lu Chen (244 papers)
  8. Kai Yu (201 papers)
Citations (47)