MedAdapter: Efficient Test-Time Adaptation of Large Language Models towards Medical Reasoning (2405.03000v2)
Abstract: Despite their improved capabilities in generation and reasoning, adapting LLMs to the biomedical domain remains challenging due to their immense size and corporate privacy. In this work, we propose MedAdapter, a unified post-hoc adapter for test-time adaptation of LLMs towards biomedical applications. Instead of fine-tuning the entire LLM, MedAdapter effectively adapts the original model by fine-tuning only a small BERT-sized adapter to rank candidate solutions generated by LLMs. Experiments demonstrate that MedAdapter effectively adapts both white-box and black-box LLMs in biomedical reasoning, achieving average performance improvements of 25.48% and 11.31%, respectively, without requiring extensive computational resources or sharing data with third parties. MedAdapter also yields superior performance when combined with train-time adaptation, highlighting a flexible and complementary solution to existing adaptation methods. Faced with the challenges of balancing model performance, computational resources, and data privacy, MedAdapter provides an efficient, privacy-preserving, cost-effective, and transparent solution for adapting LLMs to the biomedical domain.
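The test-time adaptation loop the abstract describes, sampling candidate solutions from a frozen LLM and using a small fine-tuned adapter to rank them, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation; `generate_candidates`, `score`, and the stub components are hypothetical placeholders:

```python
from typing import Callable, List

def best_of_n(
    question: str,
    generate_candidates: Callable[[str, int], List[str]],  # frozen LLM sampler (placeholder)
    score: Callable[[str, str], float],  # small BERT-sized ranking adapter (placeholder)
    n: int = 8,
) -> str:
    """Test-time adaptation via candidate ranking: the LLM itself stays frozen;
    only the small scoring adapter would be fine-tuned on domain data."""
    candidates = generate_candidates(question, n)
    # Return the candidate the adapter judges most likely to be correct.
    return max(candidates, key=lambda c: score(question, c))

# Toy usage with stub components (illustration only, no real models involved).
def stub_generate(q: str, n: int) -> List[str]:
    return [f"answer_{i}" for i in range(n)]

def stub_score(q: str, c: str) -> float:
    return float(c.endswith("_3"))  # pretend the adapter prefers candidate 3

print(best_of_n("What treats condition X?", stub_generate, stub_score, n=8))
```

Because only the lightweight scorer is trained, the candidate generations can come from either a white-box or a black-box (API-only) LLM, which is the flexibility the abstract highlights.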
- Wenqi Shi
- Ran Xu
- Yuchen Zhuang
- Yue Yu
- Hang Wu
- Carl Yang
- May D. Wang
- Haotian Sun