Bio-SIEVE: Exploring Instruction Tuning Large Language Models for Systematic Review Automation (2308.06610v1)

Published 12 Aug 2023 in cs.CL and cs.AI

Abstract: Medical systematic reviews can be very costly and resource intensive. We explore how LLMs can support and be trained to perform literature screening when provided with a detailed set of selection criteria. Specifically, we instruction tune LLaMA and Guanaco models to perform abstract screening for medical systematic reviews. Our best model, Bio-SIEVE, outperforms both ChatGPT and trained traditional approaches, and generalises better across medical domains. However, there remains the challenge of adapting the model to safety-first scenarios. We also explore the impact of multi-task training with Bio-SIEVE-Multi, including tasks such as PICO extraction and exclusion reasoning, but find that it is unable to match single-task Bio-SIEVE's performance. We see Bio-SIEVE as an important step towards specialising LLMs for the biomedical systematic review process and explore its future developmental opportunities. We release our models, code and a list of DOIs to reconstruct our dataset for reproducibility.
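
The abstract frames literature screening as an instruction-following task: given a review's selection criteria and a candidate study's title and abstract, the fine-tuned model emits an include/exclude decision. Below is a minimal sketch of how such a screening prompt might be constructed; the template wording, field names, and the 'Included'/'Excluded' label strings are illustrative assumptions, not Bio-SIEVE's released prompt format.

```python
# Illustrative sketch only: one way to format an abstract-screening
# example as an instruction prompt, per the task the abstract describes.
# The template and labels here are assumptions, not the paper's own.

def build_screening_prompt(objective: str,
                           inclusion_criteria: list[str],
                           exclusion_criteria: list[str],
                           title: str,
                           abstract: str) -> str:
    """Format one candidate study as an instruction-style screening prompt."""
    inc = "\n".join(f"- {c}" for c in inclusion_criteria)
    exc = "\n".join(f"- {c}" for c in exclusion_criteria)
    return (
        "### Instruction:\n"
        "Decide whether the study below meets the selection criteria for "
        "this systematic review. Answer 'Included' or 'Excluded'.\n\n"
        f"Review objective: {objective}\n"
        f"Inclusion criteria:\n{inc}\n"
        f"Exclusion criteria:\n{exc}\n\n"
        f"Title: {title}\n"
        f"Abstract: {abstract}\n\n"
        "### Response:\n"
    )

def parse_decision(model_output: str) -> bool:
    """Map the model's free-text answer onto a binary include flag."""
    return model_output.strip().lower().startswith("includ")

if __name__ == "__main__":
    prompt = build_screening_prompt(
        objective="Evaluate drug X for treating condition Y in adults.",
        inclusion_criteria=["Randomised controlled trial", "Adult participants"],
        exclusion_criteria=["Animal studies", "Case reports"],
        title="A randomised controlled trial of drug X for condition Y",
        abstract="We conducted a double-blind RCT of drug X in 200 adults...",
    )
    print(prompt)  # in practice, passed to the fine-tuned LLaMA/Guanaco model
```

Pairs of (prompt, gold decision) in this shape could then serve as supervised examples for instruction tuning, with the multi-task variant adding analogous prompts for PICO extraction and exclusion reasoning.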

Authors (7)
  1. Ambrose Robinson (3 papers)
  2. William Thorne (3 papers)
  3. Ben P. Wu (2 papers)
  4. Abdullah Pandor (1 paper)
  5. Munira Essat (1 paper)
  6. Mark Stevenson (30 papers)
  7. Xingyi Song (30 papers)
Citations (4)