ConCL: Concept Contrastive Learning for Dense Prediction Pre-training in Pathology Images (2207.06733v1)

Published 14 Jul 2022 in cs.CV

Abstract: Detecting and segmenting objects within whole-slide images is essential in the computational pathology workflow. Self-supervised learning (SSL) is appealing for such annotation-heavy tasks. Despite extensive benchmarks of SSL methods on dense tasks in natural images, such studies are, unfortunately, absent from current work in pathology. Our paper intends to narrow this gap. We first benchmark representative SSL methods on dense prediction tasks in pathology images. Then, we propose concept contrastive learning (ConCL), an SSL framework for dense pre-training. We explore how ConCL performs with concepts provided by different sources and ultimately propose a simple, dependency-free concept-generating method that relies on neither external segmentation algorithms nor saliency detection models. Extensive experiments demonstrate the superiority of ConCL over previous state-of-the-art SSL methods across different settings. Along our exploration, we distill several important and intriguing components that contribute to the success of dense pre-training for pathology images. We hope this work provides useful data points and encourages the community to conduct ConCL pre-training for problems of interest. Code is available.
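The abstract's core idea, contrasting pooled features of "concepts" (spatially coherent regions) across two augmented views, with concepts generated without external segmenters or saliency models, can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical illustration, not the authors' released implementation: it assumes concepts come from a few k-means steps over the encoder's own feature map (one plausible dependency-free scheme), and it uses an InfoNCE-style loss over concept-pooled features. The function names, the number of concepts, and the temperature are all illustrative choices, not specifics from the paper.

```python
import torch
import torch.nn.functional as F

def generate_concepts(feat, num_concepts=8):
    """Hypothetical dependency-free concept generation: cluster the
    encoder's own feature map into `num_concepts` spatial groups with
    a few k-means steps, so no external segmenter is needed."""
    B, C, H, W = feat.shape
    x = feat.flatten(2).transpose(1, 2)                    # (B, H*W, C)
    # Initialize centroids from random spatial positions.
    idx = torch.randint(0, H * W, (B, num_concepts), device=feat.device)
    centroids = torch.gather(x, 1, idx[..., None].expand(-1, -1, C))
    for _ in range(5):                                     # a few k-means steps
        assign = torch.cdist(x, centroids).argmin(-1)      # (B, H*W)
        onehot = F.one_hot(assign, num_concepts).float()   # (B, H*W, K)
        centroids = (onehot.transpose(1, 2) @ x) / onehot.sum(1).clamp(min=1)[..., None]
    return assign.view(B, H, W)                            # concept mask per pixel

def concept_contrastive_loss(feat_q, feat_k, concepts, tau=0.2):
    """Average-pool features inside each concept mask for two augmented
    views, then apply an InfoNCE loss whose positives are corresponding
    concepts (diagonal) and whose negatives are all other concepts."""
    B, C, H, W = feat_q.shape
    K = int(concepts.max()) + 1
    onehot = F.one_hot(concepts.flatten(1), K).float()     # (B, H*W, K)
    area = onehot.sum(1).clamp(min=1)                      # (B, K)
    q = (feat_q.flatten(2) @ onehot) / area[:, None]       # (B, C, K)
    k = (feat_k.flatten(2) @ onehot) / area[:, None]
    q = F.normalize(q.transpose(1, 2).reshape(-1, C), dim=1)  # (B*K, C)
    k = F.normalize(k.transpose(1, 2).reshape(-1, C), dim=1)
    logits = q @ k.t() / tau
    labels = torch.arange(q.size(0), device=q.device)      # positives on diagonal
    return F.cross_entropy(logits, labels)
```

In a MoCo-style setup, `feat_q` and `feat_k` would come from query and key encoders applied to two augmentations of the same patch, with the concept masks warped through the same spatial augmentations so corresponding regions align; those training details are assumptions of this sketch, not claims about the paper's exact procedure.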

Authors (6)
  1. Jiawei Yang (75 papers)
  2. Hanbo Chen (6 papers)
  3. Yuan Liang (31 papers)
  4. Junzhou Huang (137 papers)
  5. Lei He (121 papers)
  6. Jianhua Yao (50 papers)
Citations (15)
