How Much Off-The-Shelf Knowledge Is Transferable From Natural Images To Pathology Images? (2005.01609v3)

Published 24 Apr 2020 in eess.IV, cs.LG, q-bio.QM, and stat.ML

Abstract: Deep learning has achieved great success in natural image classification. To overcome data scarcity in computational pathology, recent studies exploit transfer learning to reuse knowledge gained from natural images in pathology image analysis, aiming to build effective pathology image diagnosis models. Since the transferability of knowledge heavily depends on the similarity of the original and target tasks, the significant differences in image content and statistics between pathology images and natural images raise two questions: how much knowledge is transferable, and do the pre-trained layers contribute equally to the transferred information? To answer these questions, this paper proposes a framework to quantify the knowledge gained from a particular layer, conducts an empirical investigation of pathology-image-centered transfer learning, and reports several interesting observations. In particular, compared to a performance baseline obtained with a random-weight model, the transferability of off-the-shelf representations from deep layers depends heavily on the specific pathology image set, whereas the general representations generated by early layers do convey transferred knowledge across various image classification applications. These observations encourage further investigation of specific metrics and tools to quantify the effectiveness and feasibility of transfer learning in the future.

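The abstract's comparison of off-the-shelf representations against a random-weight baseline can be illustrated with a minimal sketch. The snippet below is not the authors' code: the ResNet-18 backbone, the ImageNet weights, and the choice of `layer2` as the truncation point are illustrative assumptions. It extracts frozen features at a chosen depth from both a pretrained and a randomly initialized network; training a linear probe on each feature set would then quantify the knowledge contributed by the pretrained layers.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch of layer-wise off-the-shelf feature probing.
# ResNet-18 and the "layer2" cut point are assumptions for illustration;
# the paper probes transferability at multiple depths.

def feature_extractor(pretrained: bool, cut_layer: str = "layer2") -> nn.Module:
    """Return a ResNet-18 truncated after `cut_layer`, with frozen weights."""
    weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
    backbone = models.resnet18(weights=weights)
    layers = []
    for name, module in backbone.named_children():
        layers.append(module)
        if name == cut_layer:
            break
    extractor = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1), nn.Flatten())
    for p in extractor.parameters():  # off-the-shelf: no fine-tuning
        p.requires_grad = False
    return extractor.eval()

@torch.no_grad()
def extract(extractor: nn.Module, images: torch.Tensor) -> torch.Tensor:
    return extractor(images)

if __name__ == "__main__":
    x = torch.randn(8, 3, 224, 224)  # stand-in for pathology image patches
    pretrained_feats = extract(feature_extractor(pretrained=True), x)
    random_feats = extract(feature_extractor(pretrained=False), x)
    # A linear probe trained on each feature set would compare the transferred
    # knowledge against the random-weight baseline, per the abstract.
    print(pretrained_feats.shape, random_feats.shape)
```
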
Authors (2)
  1. Xingyu Li (104 papers)
  2. Konstantinos N. Plataniotis (109 papers)
Citations (14)
