MedAdapter: Efficient Test-Time Adaptation of Large Language Models towards Medical Reasoning (2405.03000v2)

Published 5 May 2024 in cs.CL and cs.AI

Abstract: Despite their improved capabilities in generation and reasoning, adapting LLMs to the biomedical domain remains challenging due to their immense size and corporate privacy. In this work, we propose MedAdapter, a unified post-hoc adapter for test-time adaptation of LLMs towards biomedical applications. Instead of fine-tuning the entire LLM, MedAdapter effectively adapts the original model by fine-tuning only a small BERT-sized adapter to rank candidate solutions generated by LLMs. Experiments demonstrate that MedAdapter effectively adapts both white-box and black-box LLMs in biomedical reasoning, achieving average performance improvements of 25.48% and 11.31%, respectively, without requiring extensive computational resources or sharing data with third parties. MedAdapter also yields superior performance when combined with train-time adaptation, highlighting a flexible and complementary solution to existing adaptation methods. Faced with the challenges of balancing model performance, computational resources, and data privacy, MedAdapter provides an efficient, privacy-preserving, cost-effective, and transparent solution for adapting LLMs to the biomedical domain.
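The abstract describes a best-of-N style scheme: the LLM samples several candidate solutions, and a small BERT-sized adapter scores them so the top-ranked one is returned. Below is a minimal sketch of that ranking step, not the authors' released code; the checkpoint name, `rank_candidates`, and the example inputs are illustrative assumptions.

```python
# Sketch of test-time candidate ranking with a small scorer model.
# Assumes a BERT-sized model fine-tuned to score (question, candidate) pairs;
# "bert-base-uncased" is a placeholder, not the MedAdapter checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
scorer = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1  # single scalar score per pair
)
scorer.eval()

def rank_candidates(question: str, candidates: list[str]) -> str:
    """Score each LLM-generated candidate and return the highest-ranked one."""
    inputs = tokenizer(
        [question] * len(candidates),  # pair the question with each candidate
        candidates,
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        scores = scorer(**inputs).logits.squeeze(-1)  # one scalar per candidate
    return candidates[int(scores.argmax())]

# Usage: candidates would come from sampling a white-box or black-box LLM.
question = "Which drug class is first-line for type 2 diabetes?"
candidates = ["Metformin, a biguanide.", "Insulin glargine.", "Sulfonylureas."]
print(rank_candidates(question, candidates))
```

Because only the small scorer is trained, adaptation stays cheap and the LLM's outputs never need to leave the local environment, which is the privacy argument the abstract makes.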

Authors (8)
  1. Wenqi Shi (21 papers)
  2. Ran Xu (89 papers)
  3. Yuchen Zhuang (37 papers)
  4. Yue Yu (343 papers)
  5. Hang Wu (18 papers)
  6. Carl Yang (130 papers)
  7. May D. Wang (17 papers)
  8. Haotian Sun (13 papers)
Citations (10)