DIsoN: Decentralized Isolation Networks for Out-of-Distribution Detection in Medical Imaging (2506.09024v1)

Published 10 Jun 2025 in cs.CV and cs.LG

Abstract: Safe deployment of ML models in safety-critical domains such as medical imaging requires detecting inputs with characteristics not seen during training, known as out-of-distribution (OOD) detection, to prevent unreliable predictions. Effective OOD detection after deployment could benefit from access to the training data, enabling direct comparison between test samples and the training data distribution to identify differences. State-of-the-art OOD detection methods, however, either discard training data after deployment or assume that test samples and training data are centrally stored together, an assumption that rarely holds in real-world settings. This is because shipping training data with the deployed model is usually impossible due to the size of training databases, as well as proprietary or privacy constraints. We introduce the Isolation Network, an OOD detection framework that quantifies the difficulty of separating a target test sample from the training data by solving a binary classification task. We then propose Decentralized Isolation Networks (DIsoN), which enables the comparison of training and test data when data-sharing is impossible, by exchanging only model parameters between the remote computational nodes of training and deployment. We further extend DIsoN with class-conditioning, comparing a target sample solely with training data of its predicted class. We evaluate DIsoN on four medical imaging datasets (dermatology, chest X-ray, breast ultrasound, histopathology) across 12 OOD detection tasks. DIsoN performs favorably against existing methods while respecting data-privacy. This decentralized OOD detection framework opens the way for a new type of service that ML developers could provide along with their models: providing remote, secure utilization of their training data for OOD detection services. Code will be available upon acceptance at: *****

Summary

  • The paper introduces DIsoN, a novel neural isolation network that detects out-of-distribution samples by monitoring the binary classification convergence rate.
  • The paper employs a federated learning-inspired method to ensure privacy by exchanging model parameters instead of raw data.
  • The paper demonstrates enhanced separation between in-distribution and out-of-distribution samples across multiple medical imaging datasets.

Decentralized Isolation Networks for Out-of-Distribution Detection in Medical Imaging

The paper presents a novel approach to out-of-distribution (OOD) detection in medical imaging using Decentralized Isolation Networks (DIsoN). It addresses a significant challenge faced by modern machine learning models in safety-critical domains: the inability to reliably detect samples that differ from the training data, a task crucial for ensuring dependable outcomes in clinical settings. The authors propose a framework that sidesteps the common obstacle of needing centralized access to training data, a requirement often impractical due to privacy and proprietary constraints.

Methodology and Innovation

DIsoN operates by training an isolation network to solve a binary classification task: separating a single target test sample from in-distribution (ID) source samples. The convergence rate of this binary classification defines the OOD score; the faster the network isolates the target sample, the more likely it is OOD, since samples far from the training distribution are easier to separate. This approach draws inspiration from Isolation Forests, which isolate samples by counting the splits a decision tree needs to set a point apart. Unlike Isolation Forests, however, DIsoN uses a neural network, giving it the representational flexibility needed for high-dimensional medical images.
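The following is a minimal, centralized sketch of this idea under stated assumptions, not the authors' implementation: a hypothetical `isolation_score` helper fine-tunes a copy of a pre-trained backbone to separate one target sample from batches of source data and converts the number of steps needed into a score. The backbone is assumed to output a single logit per image, and the loader format, learning rate, step budget, and stopping threshold are illustrative choices.

```python
import copy
import torch
import torch.nn as nn

def isolation_score(backbone, target_x, source_loader, max_steps=200, threshold=0.9, lr=1e-4):
    """Sketch: train a binary classifier to separate one target sample (label 1)
    from in-distribution source samples (label 0); the number of steps needed to
    confidently isolate it becomes the OOD score (fewer steps -> more likely OOD)."""
    model = copy.deepcopy(backbone)                      # start from the pre-trained model
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    step = 0
    for step, (source_x, _) in enumerate(source_loader):
        if step >= max_steps:
            break
        # Mixed batch: source samples labeled 0, the single target sample labeled 1.
        x = torch.cat([source_x, target_x.unsqueeze(0)], dim=0)
        y = torch.cat([torch.zeros(source_x.size(0)), torch.ones(1)])
        loss = criterion(model(x).squeeze(-1), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Stop once the target sample is confidently isolated.
        with torch.no_grad():
            p_target = torch.sigmoid(model(target_x.unsqueeze(0))).item()
        if p_target > threshold:
            break

    # Faster isolation (fewer steps) maps to a higher OOD score.
    return 1.0 - step / max_steps
```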

The framework is decentralized, leveraging a federated learning-like setup. It involves two entities: the Source Node, which holds the training data and a pre-trained model, and the Target Node, which processes the test samples. The key advancement is the ability to perform the isolation task without transferring raw data between the nodes: only model parameters are exchanged, which respects privacy and proprietary constraints.
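Since only model parameters move between the Source and Target Nodes, one communication round can be organized like a FedAvg-style update. The sketch below assumes hypothetical `source_node` and `target_node` objects with a `local_update` method that runs a few gradient steps on that node's private data; these names are illustrative, not the paper's API.

```python
import copy

def decentralized_isolation_round(source_node, target_node, global_model, source_weight=0.5):
    """Sketch of one decentralized round: each node updates its own copy of the
    isolation model on local data only, then the parameter dictionaries are
    averaged. Raw images never leave their node; only state_dicts are exchanged."""
    # Each node trains locally and returns its updated parameters (a state_dict).
    source_state = source_node.local_update(copy.deepcopy(global_model))   # ID training data
    target_state = target_node.local_update(copy.deepcopy(global_model))   # the test sample

    # Weighted average of the two nodes' parameters (FedAvg-style aggregation).
    merged = {name: source_weight * source_state[name]
              + (1.0 - source_weight) * target_state[name]
              for name in source_state}
    global_model.load_state_dict(merged)
    return global_model
```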

Additionally, DIsoN introduces a class-conditional variant (CC-DIsoN), in which the test sample is compared only against training samples of its predicted class. This refinement reduces variability within the comparison set: ID samples of the predicted class become harder to isolate, and are therefore less likely to be erroneously flagged as OOD, while genuinely OOD samples remain easy to separate.
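A simple way to realize this comparison, sketched below under assumptions, is to predict the target's class with the deployed classifier and then restrict the source data loader to training samples with that label. The `targets` attribute used for the label lookup is an assumption in the style of torchvision datasets and should be adapted to the actual data pipeline.

```python
import torch
from torch.utils.data import DataLoader, Subset

def class_conditional_source_loader(classifier, target_x, source_dataset, batch_size=32):
    """Sketch of class-conditioning: compare the target sample only against
    training samples of its predicted class."""
    with torch.no_grad():
        predicted_class = classifier(target_x.unsqueeze(0)).argmax(dim=1).item()

    # Keep only source samples whose label matches the predicted class.
    # Assumes the dataset exposes integer labels via a `targets` attribute.
    indices = [i for i, label in enumerate(source_dataset.targets)
               if int(label) == predicted_class]
    return DataLoader(Subset(source_dataset, indices), batch_size=batch_size, shuffle=True)
```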

Experimental Results and Significance

The authors conduct comprehensive evaluations across four medical imaging datasets (dermatology, chest X-ray, breast ultrasound, histopathology) and 12 OOD detection tasks. DIsoN and CC-DIsoN perform favorably compared to existing state-of-the-art methods, notably in detecting imaging artifacts and in tasks involving semantic and covariate shifts, which are crucial in medical diagnostics.

The paper highlights the benefits of class-conditioning, which achieves better separation of ID and OOD samples. Furthermore, practical techniques such as instance normalization and stochastic data augmentation reduce overfitting to the single target sample and keep the isolation network focused on features that are meaningful for detecting distribution shifts rather than on incidental image details; a sketch of both techniques follows.
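The snippet below illustrates generic versions of these two ingredients as assumptions, not the paper's exact recipe: a stochastic augmentation pipeline for the isolation task, and a helper that swaps BatchNorm layers for InstanceNorm so normalization statistics are computed per sample rather than skewed by a single target image.

```python
import torch.nn as nn
from torchvision import transforms

# Illustrative stochastic augmentations: random crops, flips, and mild color
# jitter discourage the network from memorizing one target image and push it
# toward distribution-level differences.
isolation_augmentations = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

def to_instance_norm(model):
    """Replace BatchNorm2d layers with InstanceNorm2d so normalization statistics
    are computed per sample instead of per (tiny, target-dominated) batch."""
    for name, module in model.named_children():
        if isinstance(module, nn.BatchNorm2d):
            setattr(model, name, nn.InstanceNorm2d(module.num_features, affine=True))
        else:
            to_instance_norm(module)
    return model
```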

Implications and Future Directions

The implications of this work are substantial for the deployment of AI in medical imaging. The DIsoN framework opens up possibilities for secure, privacy-compliant utilization of training data for OOD detection services, heralding a new type of service that ML developers could offer. This paradigm supports safe integration of AI tools into clinical workflows, potentially mitigating risks associated with undetected distributional shifts, thus enhancing the reliability of AI-powered diagnostics.

For future developments, the authors suggest exploring enhancements in efficiency, such as methods to handle multiple test samples concurrently, reducing computational overhead. Moreover, the framework's adaptability to other domains outside medical imaging could be investigated, expanding its utility across diverse AI applications where data privacy is pivotal.

Overall, DIsoN represents a compelling advancement in OOD detection methodologies, offering a practical and theoretically sound solution to distribution shift challenges in the critical domain of medical imaging.