
Understanding why Mahalanobis distance works for OOD detection and the role of representation geometry

Determine why Mahalanobis distance-based out-of-distribution (OOD) detection, which models in-distribution features by fitting class-conditional Gaussian distributions and scores test inputs by their minimum Mahalanobis distance to the class centroids, often performs well in practice, and ascertain how the geometry of high-dimensional feature representations influences this performance in deep vision models, in order to guide the design of more reliable detectors.


Background

Mahalanobis distance is a widely used post-hoc OOD detector that fits class-conditional Gaussians to learned feature representations and flags inputs far from class centroids. Despite strong empirical performance on many benchmarks, the underlying reasons for its effectiveness and the contribution of feature-space geometry have not been fully characterized.
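The detector described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a shared (tied) covariance across classes, which is a common choice for Mahalanobis OOD detectors, and uses a pseudo-inverse for numerical stability.

```python
import numpy as np

def fit_gaussians(features, labels):
    """Fit class-conditional Gaussians with a shared (tied) covariance.

    features: (N, D) array of in-distribution feature vectors.
    labels:   (N,) array of class labels.
    Returns per-class means and the shared precision matrix.
    """
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    # Center each class by its own mean, then pool for the tied covariance.
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    precision = np.linalg.pinv(cov)  # pseudo-inverse for stability
    return means, precision

def mahalanobis_score(x, means, precision):
    """OOD score: minimum squared Mahalanobis distance to any class centroid.

    Larger scores indicate inputs farther from all class centroids,
    i.e. more likely out-of-distribution.
    """
    return min((x - mu) @ precision @ (x - mu) for mu in means.values())
```

In practice the features would come from a frozen backbone (e.g. a vision transformer's penultimate layer), and a threshold on the score separates in-distribution from OOD inputs.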

This paper conducts a large-scale empirical study across diverse vision transformers and training regimes, links OOD performance to spectral and manifold metrics, and proposes a radially scaled ℓ2 normalization that reshapes feature geometry. The authors explicitly note that the mechanisms behind the Mahalanobis detector's success and the role of high-dimensional geometry remain insufficiently understood, motivating their investigation.
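To make the idea of reshaping feature geometry concrete, here is a hedged sketch of a radially scaled ℓ2 normalization. The paper's exact formulation is not given in this excerpt; the `alpha` parameter below is a hypothetical radial exponent interpolating between plain ℓ2 normalization (all features on the unit sphere) and the identity (norms left unchanged).

```python
import numpy as np

def radially_scaled_l2(features, alpha=0.0):
    """Illustrative sketch only; may differ from the paper's method.

    Projects features onto the unit sphere via l2 normalization, then
    rescales radii by a power of the original norm:
        alpha = 0.0 -> plain l2 normalization (unit norms)
        alpha = 1.0 -> identity (original features)
    Intermediate alpha partially compresses the spread of feature norms.
    """
    norms = np.linalg.norm(features, axis=-1, keepdims=True)
    return features / norms * norms ** alpha
```

Such a transform changes the radial component of the feature distribution while preserving directions, which is one way the geometry seen by a Mahalanobis detector can be reshaped.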

References

While effective, it is not fully understood why this simple metric works so well or how the complex geometry of high-dimensional representations contributes to its success.

Dissecting Mahalanobis: How Feature Geometry and Normalization Shape OOD Detection (2510.15202 - Janiak et al., 17 Oct 2025) in Section 1, Introduction