
Robust White Matter Hyperintensity Segmentation on Unseen Domain (2102.06650v2)

Published 12 Feb 2021 in cs.CV

Abstract: Typical machine learning frameworks rely heavily on the underlying assumption that training and test data follow the same distribution. In medical imaging, which has increasingly begun acquiring datasets from multiple sites and scanners, this identical-distribution assumption often fails to hold due to systematic variability induced by site- or scanner-dependent factors. Therefore, we cannot simply expect a model trained on a given dataset to consistently work well, or generalize, on a dataset from another distribution. In this work, we address this problem by investigating the application of machine learning models to unseen medical imaging data. Specifically, we consider the challenging case of Domain Generalization (DG), where we train a model without any knowledge of the testing distribution: we train on samples from a set of distributions (sources) and test on samples from a new, unseen distribution (target). We focus on the task of white matter hyperintensity (WMH) prediction using the multi-site WMH Segmentation Challenge dataset and our local in-house dataset. We identify how two mechanically distinct DG approaches, domain adversarial learning and mix-up, have theoretical synergy. We then show drastic improvements in WMH prediction on an unseen target domain.
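Of the two DG approaches the abstract names, mix-up is the simpler to illustrate: each training example is a convex combination of two source examples and their labels. The sketch below is a minimal NumPy illustration of the standard mix-up recipe, not the paper's implementation; the function name, the toy inputs, and the choice of `alpha` are assumptions for demonstration only.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two training examples with a Beta-distributed weight.

    A single lam ~ Beta(alpha, alpha) mixes both the inputs and the
    (one-hot or soft) labels, so the label stays consistent with the
    interpolated input.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam

# Toy usage: blend two 4x4 "images" drawn from different source domains.
a, b = np.zeros((4, 4)), np.ones((4, 4))
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y, lam = mixup(a, ya, b, yb)
```

In a multi-site DG setting, pairing examples drawn from different source sites encourages the model to behave smoothly along paths between domains, which is the intuition behind its synergy with domain adversarial learning.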

Authors (8)
  1. Xingchen Zhao
  2. Anthony Sicilia
  3. Davneet Minhas
  4. Erin O'Connor
  5. Howard Aizenstein
  6. William Klunk
  7. Dana Tudorascu
  8. Seong Jae Hwang
Citations (12)
