SciReasoner: Unified Scientific Reasoning Model

Updated 28 September 2025
  • SciReasoner is a unified scientific reasoning foundation model that bridges natural language with scientific formats such as SMILES and DNA sequences, enabling cross-disciplinary applications.
  • It employs a transformer-based architecture pre-trained on a 206B-token corpus drawn from scientific databases, supporting robust chain-of-thought reasoning and property prediction.
  • The model combines supervised instruction tuning and reinforcement learning to achieve high-fidelity translation, entity extraction, and generative design across diverse scientific domains.

SciReasoner is a scientific reasoning foundation model engineered to align natural language with heterogeneous scientific representations spanning chemistry, biology, genomics, and materials science. The model is designed to support translation between human and scientific formats, knowledge extraction, property prediction and classification, and both unconditional and conditional sequence generation and design. Its architecture, training regimes, and evaluation protocols are specified to enable cross-domain generalization, robust long-form chain-of-thought reasoning, and fidelity in complex scientific workflows (Wang et al., 25 Sep 2025).

1. Model Architecture and Training Pipeline

SciReasoner employs a unified transformer-based backbone built on Qwen3 variants at 1.7B and 8B parameters. The architecture natively ingests multiple modalities: natural language; pure biological sequences such as DNA, RNA, and protein (e.g., UniRef-style protein sequences); and structured chemical or material representations, including SMILES, IUPAC names, SELFIES, and others.

Pretraining Regimen

  • Trained on a 206B-token corpus extracted from PubMed, PubChem, NCBI (DNA/RNA), UniRef (proteins), curated scientific textbooks, and materials datasets.
  • Data types include raw text, domain-specific sequences, and diverse sequence–text and sequence–sequence pairings.
  • To ensure discipline specificity, inputs are tagged with domain markers (e.g., <SMILES>, <dna>, <protein>), and sample dialogue formats are standardized across tasks.
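As a concrete illustration of this tagging scheme, the minimal sketch below shows how a raw sample might be wrapped with a domain marker before tokenization. The wrap_sample helper and the closing-tag convention are illustrative assumptions, not the paper's actual preprocessing code; only the marker names follow the examples above.

```python
# Minimal sketch of domain-marker tagging; the helper and closing-tag
# convention are assumptions, not the paper's exact preprocessing code.

DOMAIN_MARKERS = {
    "smiles": ("<SMILES>", "</SMILES>"),
    "dna": ("<dna>", "</dna>"),
    "protein": ("<protein>", "</protein>"),
}

def wrap_sample(sequence: str, domain: str) -> str:
    """Wrap a raw scientific sequence with its domain markers."""
    open_tag, close_tag = DOMAIN_MARKERS[domain]
    return f"{open_tag}{sequence}{close_tag}"

# Example: embed a SMILES string (aspirin) in a standardized prompt.
prompt = "Name this molecule: " + wrap_sample("CC(=O)Oc1ccccc1C(=O)O", "smiles")
print(prompt)
```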

Supervised Instruction Tuning (SFT)

  • The model is aligned via SFT on 40M instruction-style QA examples, covering chemical reactions, sequence analysis, property prediction/classification, and translation between formats.
  • Dialogue I/O formats unify multimodal tasks to encourage operation on a shared representational substrate, supporting both direct-answer and chain-of-thought response protocols.
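One such instruction-style record might look like the sketch below under a unified dialogue schema. The field names, task tag, and answer value are assumptions for illustration, not the paper's exact format.

```python
# Illustrative instruction-style SFT record in a unified dialogue schema;
# field names and the task tag are assumptions, not the paper's format.
sft_record = {
    "task": "property_prediction",
    "messages": [
        {
            "role": "user",
            "content": "Predict the aqueous solubility (logS) of "
                       "<SMILES>CCO</SMILES>.",
        },
        {
            "role": "assistant",
            # Direct-answer protocol; a chain-of-thought variant would emit
            # intermediate reasoning first. The value is illustrative only.
            "content": "logS = 1.10",
        },
    ],
}
```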

Reinforcement Learning and Cold-Start Bootstrapping

  • A selective annealed “cold-start” procedure targets difficult tasks, promoting long-form chain-of-thought generation (“thinking”) and preserving efficient responses for simpler tasks (“instant tasks”).
  • Reinforcement learning is applied using multiple stochastic rollouts; group-standardized advantage aggregation is used:

$$\hat{A}_{i,t} = \frac{R_i - \text{mean}(\{R_i\})}{\text{std}(\{R_i\})}$$

A PPO-style objective with an asymmetric clipping schedule regulates policy updates.
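A minimal sketch of the group-standardized advantage and an asymmetrically clipped PPO-style surrogate follows. The specific eps_low and eps_high clipping bounds are illustrative assumptions, not the paper's reported settings.

```python
import torch

def group_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Standardize rewards within a group of rollouts for the same prompt;
    # the resulting advantage is shared by every token of rollout i.
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def clipped_surrogate(log_probs: torch.Tensor,
                      old_log_probs: torch.Tensor,
                      advantages: torch.Tensor,
                      eps_low: float = 0.2,
                      eps_high: float = 0.28) -> torch.Tensor:
    # PPO-style surrogate with an asymmetric clipping range; the specific
    # eps_low/eps_high values are assumptions, not the paper's settings.
    ratio = torch.exp(log_probs - old_log_probs)
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    return torch.min(ratio * advantages, clipped * advantages).mean()

# Example: four rollouts for one prompt with scalar rewards.
rewards = torch.tensor([1.0, 0.0, 1.0, 0.5])
print(group_advantages(rewards))
```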

  • Engineering features include mixed-precision training (bfloat16), DeepSpeed ZeRO Stage 2, FlashAttention, and Liger fused kernels for training efficiency.
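A hedged sketch of what such a configuration might look like in DeepSpeed's JSON-style config is shown below. Only the bf16 and ZeRO Stage 2 switches reflect the reported setup; the batch-size and accumulation values are placeholders.

```python
# Illustrative DeepSpeed configuration reflecting the reported setup
# (bf16 mixed precision, ZeRO Stage 2); batch-size and accumulation
# values are placeholders, not the paper's actual settings.
ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": True,
    },
    "train_micro_batch_size_per_gpu": 4,  # placeholder
    "gradient_accumulation_steps": 8,     # placeholder
}
```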

2. Translation, Extraction, and Cross-Format Fidelity

SciReasoner supports high-fidelity, bidirectional translation between text and scientific sequence formats (e.g., SMILES, IUPAC, SELFIES, DNA/RNA/protein sequences). These translation tasks require precise mappings that preserve chirality, stoichiometry, and sequential integrity. The model is additionally tuned for entity extraction and knowledge synthesis:

  • Text and knowledge extraction modules perform entity recognition (e.g., protein, chemical, or gene names), relation detection (chemical–protein/protein–disease), and extraction of properties from domain text.
  • A unified I/O schema enables seamless alternation between structured extraction (e.g., JSON representations; see the sketch below) and unstructured narrative.

This unified approach is intended to broaden instruction coverage and minimize the need for domain-specific specialist models.
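As an illustration, a structured extraction output for a chemical-protein relation task might look like the following. The JSON schema shown is an assumption for the sketch, not the model's documented output format.

```python
import json

# Hypothetical structured-extraction output for a chemical-protein relation
# task; the schema is illustrative, not the model's exact output format.
extraction = {
    "entities": [
        {"text": "imatinib", "type": "chemical"},
        {"text": "BCR-ABL", "type": "protein"},
    ],
    "relations": [
        {"head": "imatinib", "tail": "BCR-ABL", "type": "inhibits"},
    ],
}
print(json.dumps(extraction, indent=2))
```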

3. Property Prediction, Classification, and Generative Design

The model provides a single interface for both regression and classification tasks:

  • Property Prediction: Supports continuous value estimation (e.g., molecular solubility/logD, band gap energies, ribosome loading).
  • Property Classification: Assigns categorical property labels, such as blood–brain barrier permeability (BBBP), protein solubility, or material stability.

For generative tasks:

  • Unconditional sequence generation enables the creation of novel biomolecules, polymers, or materials without explicit conditioning constraints.
  • Conditional design supports target-driven generation (e.g., producing a molecule with a specified biological activity or property), with conditioning on structured constraints or natural language prompts.

By integrating such tasks into a coherent workflow, SciReasoner can perform multistep reasoning, deep extraction, and even hypothesis-driven candidate design in biomedical and materials contexts.
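The sketch below illustrates how a single prompt interface might span regression, classification, and conditional design. The prompt wording is assumed for illustration, not quoted from the paper.

```python
# Illustrative prompts showing how one interface can cover regression,
# classification, and conditional design; wording is assumed, not quoted.
prompts = {
    "property_regression":
        "Estimate the aqueous solubility (logS) of <SMILES>c1ccccc1O</SMILES>.",
    "property_classification":
        "Is <SMILES>CC(=O)Nc1ccc(O)cc1</SMILES> blood-brain barrier "
        "permeant? Answer yes or no.",
    "conditional_design":
        "Design a molecule containing a sulfonamide group with logP "
        "between 2 and 3. Return a single SMILES string.",
}
```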

4. Cross-Discipline Learning and Generalization

A principal distinction in SciReasoner is its emphasis on cross-discipline learning:

  • The model is pre-trained on a heterogeneous mixture spanning chemical, biological, materials, and sequence data, explicitly unified through marker-based tokenization and dialogue normalization.
  • Ablation studies demonstrate that pretraining on this heterogeneous corpus, rather than splitting it by discipline, significantly improves cross-domain transfer and generalization (for example, ablations show RMSE reductions of up to 99% on the ESOL solubility regression task).
  • This shared representation allows property prediction or translation strategies learned in one domain (e.g., chemical) to transfer to others (e.g., materials, genomics).

Consistent performance gains are reported in knowledge extraction, retrosynthesis, biological property inference, and other tasks, compared to both frontier general-purpose models (e.g., Gemini 2.5 Pro, o3, GPT-OSS) and earlier specialist LLMs.

5. Data Curation, Evaluation Flow, and Open Resources

Extensive curation ensures reliable coverage:

  • Pretraining and SFT samples are sourced with scriptable pipelines, discipline-specific annotation, and LLM-verified task decomposition.
  • For reinforcement learning, a “correct-only” chain-of-thought corpus is generated, keeping only verified-correct reasoning traces for empirically solvable, medium-difficulty samples (see the sketch after this list).
  • Evaluations report task-level statistics (accuracy, F1, AUC, RMSE, ROUGE-L, SMACT, etc.) for over 100 benchmarks.
  • Data curation includes rule-based and LLM-guided extraction/annotation.
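A minimal sketch of the correct-only filtering idea follows: keep only rollouts whose final answers verify, and only for prompts of medium empirical difficulty. The solve-rate thresholds are assumptions for illustration.

```python
# Minimal sketch of "correct-only" corpus filtering; the solve-rate
# thresholds are assumptions, not the paper's reported values.
def filter_correct_only(prompts, rollouts_per_prompt, verify,
                        min_rate=0.2, max_rate=0.8):
    kept = []
    for prompt, rollouts in zip(prompts, rollouts_per_prompt):
        # Keep only rollouts whose final answers pass verification.
        correct = [r for r in rollouts if verify(prompt, r)]
        solve_rate = len(correct) / len(rollouts)
        if min_rate <= solve_rate <= max_rate:  # medium difficulty only
            kept.extend((prompt, r) for r in correct)
    return kept
```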

The model checkpoint, SFT datasets, and evaluation code are open-sourced at https://huggingface.co/SciReason and https://github.com/open-sciencelab/SciReason.
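Assuming the released checkpoints follow standard Hugging Face conventions, a usage sketch might look like the following. The repository id is a hypothetical placeholder; consult the links above for the actual artifacts.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The repository id below is a hypothetical placeholder; see
# https://huggingface.co/SciReason for the actual released checkpoints.
model_id = "SciReason/SciReasoner-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# A translation-style query using the domain-marker convention.
prompt = "Translate to an IUPAC name: <SMILES>CC(=O)Oc1ccccc1C(=O)O</SMILES>"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```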

6. Comparative Assessment and Limitations

Compared with specialist and closed-source LLMs, SciReasoner demonstrates:

  • Broadened task coverage, encompassing 103 workflows ranging from translation and property inference to generative design.
  • Enhanced cross-domain generalization, attributed to shared representation learning and tokenization strategy.
  • Higher fidelity in translation and property tasks, with task-aware instruction tuning and explicit evaluation on discipline-bridging tasks.
  • Robust transfer and downstream reliability, with long-form CoT instilled via reinforcement learning and annealed cold-start protocols.

Potential limitations noted include reliance on continued curation for data completeness, the need for discipline-specific evaluation metrics for future tasks, and the ongoing requirement to scale context length (the current Liger-kernel training configuration supports 8k-token context, but future workflows may demand more).

7. Significance and Outlook

SciReasoner establishes a foundation for unified, instruction-driven scientific reasoning across disciplines, replacing compartmentalized approaches that require separate models per scientific domain. By coupling large-scale, heterogeneous pretraining with discipline-aware, chain-of-thought-enhanced instruction tuning and RL, SciReasoner offers a platform for knowledge extraction, property modeling, and scientific generation that is both robust to task variations and adaptive to emerging evaluation protocols. The open-source release of models, data, and evaluation code enables broad adoption and future extension within the research community (Wang et al., 25 Sep 2025).

References

  1. Wang et al. (25 Sep 2025). SciReasoner.