
Less Could Be Better: Parameter-efficient Fine-tuning Advances Medical Vision Foundation Models (2401.12215v1)

Published 22 Jan 2024 in cs.CV

Abstract: Parameter-efficient fine-tuning (PEFT), initially developed for adapting pre-trained LLMs, has recently emerged as an effective approach to transfer learning on computer vision tasks. However, the effectiveness of PEFT on medical vision foundation models is still unclear and remains to be explored. As a proof of concept, we conducted a detailed empirical study on applying PEFT to chest radiography foundation models. Specifically, we delved into LoRA, a representative PEFT method, and compared it against full-parameter fine-tuning (FFT) on two self-supervised radiography foundation models across three well-established chest radiograph datasets. Our results showed that LoRA outperformed FFT in 13 out of 18 transfer learning tasks by up to 2.9% while tuning fewer than 1% of the parameters. Combining LoRA with foundation models, we established a new state of the art on a range of data-efficient learning tasks, such as an AUROC score of 80.6% using 1% labeled data on NIH ChestX-ray14. We hope this study can draw more attention from the community to the use of PEFT for transfer learning on medical imaging tasks. Code and models are available at https://github.com/RL4M/MED-PEFT.
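The core idea behind LoRA, the PEFT method studied here, is to freeze the pre-trained weight matrix W of a layer and learn only a low-rank update scaled by alpha/r, so the adapted layer computes y = xWᵀ + (alpha/r)·xAᵀBᵀ. The sketch below is a minimal NumPy illustration of this mechanism (not the authors' code; dimensions, alpha, and rank are illustrative assumptions), showing why the trainable-parameter fraction stays small:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """LoRA-adapted linear layer: y = x W^T + (alpha/r) * x A^T B^T.
    W is the frozen pre-trained weight; only A (r x d_in) and
    B (d_out x r) would receive gradients during fine-tuning."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r = 768, 768, 8                 # illustrative sizes, not from the paper
W = rng.standard_normal((d_out, d_in))       # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection; zero-init
                                             # makes the adapter a no-op at start

x = rng.standard_normal((2, d_in))
y = lora_forward(x, W, A, B, alpha=16, r=r)

# With B zero-initialized, the adapted layer initially equals the frozen one.
assert np.allclose(y, x @ W.T)

# Trainable fraction per adapted layer: 2*r*d / d^2 = 2*r/d.
frac = (A.size + B.size) / W.size
print(f"trainable fraction per layer: {frac:.3%}")
```

With rank 8 on a 768-wide layer, only about 2% of that layer's parameters are trainable; applying LoRA to a subset of layers is what keeps the overall tunable budget under 1%, as reported in the abstract.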

Authors (4)
  1. Chenyu Lian (5 papers)
  2. Hong-Yu Zhou (50 papers)
  3. Yizhou Yu (148 papers)
  4. Liansheng Wang (48 papers)
Citations (4)