Partly Supervised Multitask Learning (2005.02523v1)

Published 5 May 2020 in cs.CV

Abstract: Semi-supervised learning has recently been attracting attention as an alternative to fully supervised models that require large pools of labeled data. Moreover, optimizing a model for multiple tasks can provide better generalizability than single-task learning. Leveraging self-supervision and adversarial training, we propose a novel general-purpose semi-supervised multitask model, self-supervised semi-supervised multitask learning (S$^4$MTL), for accomplishing two important tasks in medical imaging: segmentation and diagnostic classification. Experimental results on chest and spine X-ray datasets suggest that our S$^4$MTL model significantly outperforms semi-supervised single-task, semi/fully-supervised multitask, and fully supervised single-task models, even with a 50% reduction of class and segmentation labels. We hypothesize that our proposed model can be effective in tackling limited annotation problems for joint training, not only in medical imaging domains but also for general-purpose vision tasks.
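
The abstract describes a shared multitask setup: one network trained jointly for segmentation and diagnostic classification, with self-supervision supplying a training signal on images that lack class or mask labels. As a rough illustration only, the sketch below assumes a PyTorch implementation with a shared convolutional encoder, a segmentation head, a classification head, and a reconstruction head standing in for the self-supervised objective. The layer sizes, loss weights, and all names here are hypothetical, and the paper's adversarial training component is not reproduced.

```python
# Minimal sketch of a shared-encoder multitask network in the spirit of
# S^4MTL. Everything below (layer widths, the reconstruction proxy for
# self-supervision, the loss weights) is an illustrative assumption, not
# the authors' actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class S4MTLSketch(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared encoder reused by every task head.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, 1, 1)         # per-pixel mask logits
        self.cls_head = nn.Linear(32, num_classes)  # image-level diagnosis
        self.recon_head = nn.Conv2d(32, 1, 1)       # self-supervised proxy

    def forward(self, x):
        feats = self.encoder(x)                      # (B, 32, H, W)
        seg = self.seg_head(feats)                   # (B, 1, H, W)
        cls = self.cls_head(feats.mean(dim=(2, 3)))  # global average pool
        recon = self.recon_head(feats)               # (B, 1, H, W)
        return seg, cls, recon

def s4mtl_loss(model, x_l, mask, label, x_u, alpha=1.0, beta=1.0, gamma=0.1):
    """Supervised segmentation + classification losses on the labeled batch,
    plus a self-supervised reconstruction term on the unlabeled batch."""
    seg, cls, _ = model(x_l)
    loss = alpha * F.binary_cross_entropy_with_logits(seg, mask)
    loss = loss + beta * F.cross_entropy(cls, label)
    _, _, recon = model(x_u)                         # no labels needed here
    return loss + gamma * F.mse_loss(recon, x_u)

# Example: one mixed step on randomly generated X-ray-sized tensors.
model = S4MTLSketch()
x_l, x_u = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
mask = torch.randint(0, 2, (4, 1, 64, 64)).float()
label = torch.randint(0, 2, (4,))
s4mtl_loss(model, x_l, mask, label, x_u).backward()
```

The point mirrored from the abstract is that the unlabeled batch still updates the shared encoder through the self-supervised term, which is how the model can tolerate a 50% reduction of class and segmentation labels.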

Authors (8)
  1. Abdullah-Al-Zubaer Imran (20 papers)
  2. Chao Huang (244 papers)
  3. Hui Tang (61 papers)
  4. Wei Fan (160 papers)
  5. Yuan Xiao (14 papers)
  6. Dingjun Hao (1 paper)
  7. Zhen Qian (39 papers)
  8. Demetri Terzopoulos (44 papers)
Citations (12)
