SciInstruct: a Self-Reflective Instruction Annotated Dataset for Training Scientific Language Models (2401.07950v3)

Published 15 Jan 2024 in cs.CL

Abstract: LLMs have shown promise in assisting scientific discovery. However, such applications are currently limited by LLMs' deficiencies in understanding intricate scientific concepts, deriving symbolic equations, and solving advanced numerical calculations. To bridge these gaps, we introduce SciInstruct, a suite of scientific instructions for training scientific LLMs capable of college-level scientific reasoning. Central to our approach is a novel self-reflective instruction annotation framework to address the data scarcity challenge in the science domain. This framework leverages existing LLMs to generate step-by-step reasoning for unlabelled scientific questions, followed by a process of self-reflective critic-and-revise. Applying this framework, we curated a diverse and high-quality dataset encompassing physics, chemistry, math, and formal proofs. We analyze the curated SciInstruct from multiple perspectives (e.g., domain, scale, source, question type, answer length, etc.). To verify the effectiveness of SciInstruct, we fine-tuned different LLMs with SciInstruct, i.e., ChatGLM3 (6B and 32B), Llama3-8B-Instruct, and Mistral-7B: MetaMath, enhancing their scientific and mathematical reasoning capabilities without sacrificing the language understanding capabilities of the base model. We release all code and SciInstruct at https://github.com/THUDM/SciGLM.
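The abstract describes a self-reflective critic-and-revise loop: an LLM first produces step-by-step reasoning for an unlabelled question, then critiques and revises its own output before the sample is kept. The sketch below illustrates one plausible shape of such a loop under stated assumptions; the function names (`call_llm`, `annotate_question`), prompts, and revision budget are hypothetical placeholders and not the authors' actual implementation (see the SciGLM repository for that).

```python
# Minimal sketch of a self-reflective critic-and-revise annotation loop,
# following the high-level description in the abstract. All names and
# prompts here are illustrative assumptions, not the SciInstruct code.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Annotation:
    question: str
    reasoning: str   # step-by-step solution produced by the LLM
    accepted: bool   # whether the critic approved the final revision


def annotate_question(
    question: str,
    call_llm: Callable[[str], str],  # user-supplied LLM interface (hypothetical)
    max_revisions: int = 2,
) -> Annotation:
    """Generate reasoning for an unlabelled scientific question, then
    critique and revise it until accepted or the budget is exhausted."""
    # 1. Initial step-by-step reasoning for the unlabelled question.
    reasoning = call_llm(
        f"Solve the following problem step by step.\n\nProblem: {question}"
    )

    for _ in range(max_revisions):
        # 2. Self-reflective critique of the current reasoning.
        critique = call_llm(
            "Review the solution below for incorrect steps, missing "
            "derivations, or calculation errors. Reply 'OK' if it is sound, "
            "otherwise list the issues.\n\n"
            f"Problem: {question}\n\nSolution: {reasoning}"
        )
        if critique.strip().upper().startswith("OK"):
            return Annotation(question, reasoning, accepted=True)

        # 3. Revise the reasoning according to the critique.
        reasoning = call_llm(
            "Revise the solution to fix the issues listed in the critique.\n\n"
            f"Problem: {question}\n\nSolution: {reasoning}\n\n"
            f"Critique: {critique}"
        )

    # Samples that never pass the critique can be filtered out of the dataset.
    return Annotation(question, reasoning, accepted=False)
```

In this reading, only accepted annotations (or those passing a final filter) would enter the training set, which is one way the framework could maintain data quality despite starting from unlabelled questions.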

Authors (9)
  1. Dan Zhang (171 papers)
  2. Ziniu Hu (51 papers)
  3. Sining Zhoubian (4 papers)
  4. Zhengxiao Du (22 papers)
  5. Kaiyu Yang (24 papers)
  6. Zihan Wang (181 papers)
  7. Yisong Yue (154 papers)
  8. Yuxiao Dong (119 papers)
  9. Jie Tang (302 papers)
Citations (19)