Efficiency at Scale: Investigating the Performance of Diminutive Language Models in Clinical Tasks (2402.10597v1)

Published 16 Feb 2024 in cs.CL and cs.AI

Abstract: The entry of LLMs into research and commercial spaces has led to a trend of ever-larger models, with initial promises of generalisability, followed by a widespread desire to downsize and create specialised models without the need for complete fine-tuning, using Parameter Efficient Fine-tuning (PEFT) methods. We present an investigation into the suitability of different PEFT methods to clinical decision-making tasks, across a range of model sizes, including extremely small models with as few as $25$ million parameters. Our analysis shows that the performance of most PEFT approaches varies significantly from one task to another, with the exception of LoRA, which maintains relatively high performance across all model sizes and tasks, typically approaching or matching full fine-tuned performance. The effectiveness of PEFT methods in the clinical domain is evident, particularly for specialised models which can operate on low-cost, in-house computing infrastructure. The advantages of these models, in terms of speed and reduced training costs, dramatically outweigh any performance gain from large foundation LLMs. Furthermore, we highlight how domain-specific pre-training interacts with PEFT methods and model size, and discuss how these factors interplay to provide the best efficiency-performance trade-off. Full code available at: tbd.
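
Since the abstract singles out LoRA as the PEFT method that stays close to full fine-tuned performance, a minimal sketch of how LoRA adapters are typically attached to a small classification model may help. This uses the Hugging Face `transformers` and `peft` libraries; the base model, label count, and LoRA hyperparameters below are illustrative assumptions, not the paper's configuration (the paper's code link is listed as tbd).

```python
# Illustrative sketch of LoRA-style parameter-efficient fine-tuning with the
# Hugging Face `peft` library. Model name and hyperparameters are assumptions
# for demonstration, not the paper's setup.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Placeholder small encoder; the paper studies models down to ~25M parameters.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence classification head
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling applied to the LoRA update
    lora_dropout=0.1,
)

# Freeze the base model and inject trainable low-rank adapters; only the
# adapter weights are updated, which keeps training cheap enough to run on
# low-cost, in-house hardware, as the abstract argues.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameters
```

Because only the small adapter matrices receive gradients, the trainable parameter count drops to a fraction of the full model, which is the efficiency-performance trade-off the paper investigates across model sizes.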

Authors (7)
  1. Niall Taylor (8 papers)
  2. Upamanyu Ghose (1 paper)
  3. Omid Rohanian (12 papers)
  4. Mohammadmahdi Nouriborji (8 papers)
  5. Andrey Kormilitzin (22 papers)
  6. David Clifton (18 papers)
  7. Alejo Nevado-Holgado (14 papers)
Citations (4)