Out-of-Distribution Detection for Dermoscopic Image Classification (2104.07819v2)

Published 15 Apr 2021 in cs.CV and cs.LG

Abstract: Medical image diagnosis can be achieved by deep neural networks, provided there is enough varied training data for each disease class. However, a hitherto unknown disease class not encountered during training will inevitably be misclassified, even if predicted with low probability. This problem is particularly acute in medical image diagnosis, when an image of a previously unseen disease is presented at test time, especially when the images come from the same image domain, such as dermoscopic skin images. Current out-of-distribution detection algorithms act unfairly when the in-distribution classes are imbalanced, favouring the most numerous diseases in the training set. This can lead to false diagnoses for rare cases, which are often medically important. We developed a novel yet simple method to train neural networks that enables them to classify in-distribution dermoscopic skin disease images and also detect novel diseases from dermoscopic images at test time. We show that our BinaryHeads model not only does not hurt balanced classification accuracy when the data is imbalanced, but consistently improves it. We also introduce an important method for investigating the effectiveness of out-of-distribution detection methods in the presence of varying amounts of out-of-distribution data, which may arise in real-world settings.
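
The abstract does not spell out the BinaryHeads architecture, but the name suggests one independent binary (sigmoid) head per in-distribution class, with an image flagged as out-of-distribution when no head fires confidently. The sketch below illustrates that reading only; the backbone, loss, threshold value, and all names (BinaryHeadsClassifier, predict_with_ood) are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch (not the paper's implementation): a "binary heads" style
# classifier with one sigmoid head per known class, trained one-vs-rest with
# binary cross-entropy. At test time, if every head's score is below a
# threshold, the sample is reported as out-of-distribution.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryHeadsClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                       # any feature extractor (assumed)
        self.heads = nn.Linear(feat_dim, num_classes)  # one logit per class, treated as independent binary heads

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        return self.heads(feats)                       # raw per-class logits


def binary_heads_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # One-vs-rest targets: each head is trained as an independent binary classifier,
    # so minority classes are not forced to compete in a single softmax.
    targets = F.one_hot(labels, num_classes=logits.size(1)).float()
    return F.binary_cross_entropy_with_logits(logits, targets)


@torch.no_grad()
def predict_with_ood(model: nn.Module, x: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    # Returns the predicted class index per sample, or -1 when all heads score
    # below the threshold, i.e. the image matches none of the known diseases.
    probs = torch.sigmoid(model(x))
    max_prob, pred = probs.max(dim=1)
    pred[max_prob < threshold] = -1
    return pred
```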

Authors (5)
  1. Mohammadreza Mohseni (2 papers)
  2. Jordan Yap (3 papers)
  3. William Yolland (5 papers)
  4. Majid Razmara (2 papers)
  5. M Stella Atkins (2 papers)
Citations (1)
