
Small Language Models Can Use Nuanced Reasoning For Health Science Research Classification: A Microbial-Oncogenesis Case Study

Published 6 Dec 2025 in cs.CE and q-bio.QM | (2512.06502v1)

Abstract: Artificially intelligent (AI) co-scientists must be able to sift through research literature cost-efficiently while applying nuanced scientific reasoning. We evaluate small language models (SLMs, <= 8B parameters) for classifying medical research papers. Using the literature on the oncogenic potential of HMTV/MMTV-like viruses in breast cancer as a case study, we assess model performance under both zero-shot and in-context learning (ICL; few-shot prompting) strategies against frontier proprietary LLMs. Llama 3 and Qwen2.5 outperform GPT-5 (API, low/high effort), Gemini 3 Pro Preview, and Meerkat in zero-shot settings, though they trail Gemini 2.5 Pro. ICL improves performance on a case-by-case basis, allowing Llama 3 and Qwen2.5 to match Gemini 2.5 Pro in binary classification. Systematic lexical-ablation experiments show that SLM decisions are often grounded in valid scientific cues but can be influenced by spurious textual artifacts, underscoring the need for interpretability in high-stakes pipelines. Our results reveal both the promise and the limitations of modern SLMs for scientific triage; pairing SLMs with simple but principled prompting strategies can approach the performance of the strongest LLMs for targeted literature filtering in co-scientist pipelines.
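The in-context learning strategy described above amounts to prepending labeled examples to the classification query. A minimal sketch of how such a few-shot prompt for binary literature triage could be assembled follows; the example abstracts, labels, and the exact prompt wording are illustrative assumptions, not material from the paper.

```python
# Sketch of an ICL (few-shot) prompt builder for binary paper triage.
# The labeled examples and phrasing below are hypothetical placeholders.

FEW_SHOT_EXAMPLES = [
    ("Detection of MMTV-like env sequences in human breast tumor tissue.",
     "RELEVANT"),
    ("Dietary patterns and cardiovascular outcomes in a prospective cohort.",
     "NOT_RELEVANT"),
]

def build_icl_prompt(abstract: str) -> str:
    """Assemble a few-shot classification prompt for a small language model."""
    lines = [
        "Classify each paper as RELEVANT or NOT_RELEVANT to the question:",
        "Do HMTV/MMTV-like viruses contribute to breast cancer oncogenesis?",
        "",
    ]
    # Prepend the labeled in-context examples before the query abstract.
    for text, label in FEW_SHOT_EXAMPLES:
        lines += [f"Abstract: {text}", f"Label: {label}", ""]
    # End with the unlabeled query so the model completes the label.
    lines += [f"Abstract: {abstract}", "Label:"]
    return "\n".join(lines)

prompt = build_icl_prompt(
    "MMTV-like sequences detected by PCR in breast carcinoma samples."
)
print(prompt)
```

The resulting string would be sent to an SLM (e.g. Llama 3 or Qwen2.5) via whatever inference API is in use; the zero-shot variant simply omits the example pairs.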
