
SLPT: Selective Labeling Meets Prompt Tuning on Label-Limited Lesion Segmentation (2308.04911v1)

Published 9 Aug 2023 in cs.CV and cs.AI

Abstract: Medical image analysis using deep learning is often challenged by limited labeled data and high annotation costs. Fine-tuning the entire network in label-limited scenarios can lead to overfitting and suboptimal performance. Recently, prompt tuning has emerged as a more promising technique: it introduces a few additional tunable parameters as prompts to a task-agnostic pre-trained model and updates only these parameters using supervision from the limited labeled data, keeping the pre-trained model unchanged. However, previous work has overlooked the importance of selective labeling in downstream tasks, which aims to select the most valuable downstream samples for annotation so as to achieve the best performance at minimum annotation cost. To address this, we propose a framework that combines selective labeling with prompt tuning (SLPT) to boost performance under limited labels. Specifically, we introduce a feature-aware prompt updater to guide prompt tuning and a TandEm Selective LAbeling (TESLA) strategy. TESLA includes unsupervised diversity selection and supervised selection using prompt-based uncertainty. In addition, we propose a diversified visual prompt tuning strategy that provides multi-prompt-based discrepant predictions for TESLA. We evaluate our method on liver tumor segmentation and achieve state-of-the-art performance, outperforming traditional fine-tuning while tuning only 6% of the parameters, and reaching 94% of full-data performance by labeling only 5% of the data.
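The two-stage TESLA idea from the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names are hypothetical, greedy k-center selection stands in for the unsupervised diversity stage, and prediction variance across prompts stands in for the prompt-based uncertainty score.

```python
import numpy as np

def diversity_select(features, k, rng=None):
    """Stage 1 (unsupervised): greedy k-center selection over sample
    features, so labeled samples cover the feature space. A common
    stand-in for the diversity-selection step described in the paper."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = features.shape[0]
    chosen = [int(rng.integers(n))]
    # Distance of every sample to its nearest already-chosen center.
    dists = np.linalg.norm(features - features[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))  # farthest-from-coverage sample
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

def prompt_uncertainty(multi_prompt_probs):
    """Stage 2 (supervised): per-sample uncertainty as the variance of
    class probabilities across the multiple prompts' predictions, i.e.
    the 'multi-prompt-based discrepant predictions' signal.
    multi_prompt_probs has shape (num_prompts, num_samples, num_classes)."""
    return multi_prompt_probs.var(axis=0).mean(axis=1)

# Toy usage: select 5 diverse samples, then rank the rest by disagreement
# among 4 hypothetical prompt-conditioned predictors.
rng = np.random.default_rng(1)
feats = rng.normal(size=(50, 8))
picked = diversity_select(feats, 5)
probs = rng.uniform(size=(4, 50, 3))
scores = prompt_uncertainty(probs)
most_uncertain = int(np.argmax(scores))
```

In the full SLPT pipeline these scores would drive which liver-tumor cases get annotated next, while only the prompt parameters (about 6% of the network) are updated.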

Authors (9)
  1. Fan Bai (38 papers)
  2. Ke Yan (102 papers)
  3. Xiaoyu Bai (14 papers)
  4. Xinyu Mao (12 papers)
  5. Xiaoli Yin (7 papers)
  6. Jingren Zhou (198 papers)
  7. Yu Shi (153 papers)
  8. Le Lu (148 papers)
  9. Max Q. -H. Meng (80 papers)
Citations (1)
