
Energy-based Out-of-distribution Detection (2010.03759v4)

Published 8 Oct 2020 in cs.LG and cs.AI

Abstract: Determining whether inputs are out-of-distribution (OOD) is an essential building block for safely deploying machine learning models in the open world. However, previous methods relying on the softmax confidence score suffer from overconfident posterior distributions for OOD data. We propose a unified framework for OOD detection that uses an energy score. We show that energy scores better distinguish in- and out-of-distribution samples than the traditional approach using the softmax scores. Unlike softmax confidence scores, energy scores are theoretically aligned with the probability density of the inputs and are less susceptible to the overconfidence issue. Within this framework, energy can be flexibly used as a scoring function for any pre-trained neural classifier as well as a trainable cost function to shape the energy surface explicitly for OOD detection. On a CIFAR-10 pre-trained WideResNet, using the energy score reduces the average FPR (at TPR 95%) by 18.03% compared to the softmax confidence score. With energy-based training, our method outperforms the state-of-the-art on common benchmarks.

Authors (4)
  1. Weitang Liu (14 papers)
  2. Xiaoyun Wang (21 papers)
  3. John D. Owens (36 papers)
  4. Yixuan Li (183 papers)
Citations (1,129)

Summary

  • The paper introduces energy scores instead of softmax outputs to better distinguish in-distribution from out-of-distribution samples.
  • The paper demonstrates that the energy score reduces the average false positive rate (at 95% TPR) by 18.03% on CIFAR-10 compared to the softmax baseline.
  • The paper employs energy-bounded fine-tuning with auxiliary OOD data to enhance model reliability in safety-critical applications.

Energy-based Out-of-distribution Detection

In the field of machine learning, detecting out-of-distribution (OOD) inputs is crucial for deploying models in real-world applications. Traditional methodologies, relying on the softmax confidence scores of neural networks, often fail by assigning high confidence to OOD samples, which can be detrimental in safety-critical systems. The paper "Energy-based Out-of-distribution Detection" by Weitang Liu et al. proposes a novel framework that utilizes an energy score for OOD detection to address these shortcomings. This essay provides an expert overview of the paper, highlighting its methodology, results, and implications for future research and applications.

Methodology

The core proposal of the paper is to replace traditional softmax confidence scores with energy scores for OOD detection. This shift is grounded in the theoretical alignment of energy scores with the probability density of input data, making them more reliable for distinguishing between in-distribution (ID) and OOD samples.

Energy-based Models (EBMs):

The energy function $E(\mathbf{x}; f)$ maps each input sample $\mathbf{x}$ to a scalar energy value, where lower values correspond to higher probability densities (i.e., likely ID samples) and higher values to lower densities (i.e., likely OOD samples). The energy score for a given input $\mathbf{x}$ is defined as $E(\mathbf{x}; f) = -T \cdot \log \sum_{i=1}^K e^{f_i(\mathbf{x})/T},$ where $T$ is the temperature parameter, $K$ is the number of classes, and $f(\mathbf{x})$ denotes the logits of the neural network.
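Since the energy score is just a temperature-scaled logsumexp over the logits, it can be computed in a few lines from any classifier's outputs. Below is a minimal sketch in PyTorch, where `logits` is assumed to be the pre-softmax output of an arbitrary pre-trained network:

```python
import torch

def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """E(x; f) = -T * log sum_i exp(f_i(x) / T).

    Computed via logsumexp for numerical stability.
    logits: (batch, num_classes) pre-softmax outputs of the classifier f.
    Lower energy indicates a more in-distribution-like input.
    """
    return -T * torch.logsumexp(logits / T, dim=1)
```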

Inference-time Implementation:

For pre-trained models, the energy score is used directly, without any retraining. This makes the approach versatile and easy to integrate with existing classifiers. The OOD detection decision rule is a simple threshold $\tau$ on the negative energy: $g(\mathbf{x}; \tau, f) = \begin{cases} \text{ID} & \text{if } -E(\mathbf{x}; f) > \tau, \\ \text{OOD} & \text{if } -E(\mathbf{x}; f) \leq \tau, \end{cases}$ where $\tau$ is typically chosen so that a high fraction (e.g., 95%) of in-distribution inputs is correctly retained.
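As a concrete illustration, the decision rule reduces to a single comparison against a threshold. The sketch below assumes a threshold `tau` calibrated on held-out in-distribution data; the calibration line at the end is a hypothetical example using the `energy_score` helper from the previous snippet:

```python
import torch

def detect_id(logits: torch.Tensor, tau: float, T: float = 1.0) -> torch.Tensor:
    """Return True for inputs classified as in-distribution (ID)."""
    neg_energy = T * torch.logsumexp(logits / T, dim=1)  # -E(x; f)
    return neg_energy > tau  # ID if -E(x; f) > tau, OOD otherwise

# Hypothetical calibration: pick tau so that 95% of ID validation
# inputs pass the threshold (i.e., TPR = 95%):
# tau = torch.quantile(-energy_score(val_logits), 0.05)
```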

Energy-bounded Learning:

The paper also introduces an energy-bounded fine-tuning objective. By exploiting auxiliary OOD data, the model explicitly shapes the energy landscape so that ID samples exhibit lower energy values than OOD samples. This is achieved through a squared hinge regularization loss: $L_\text{energy} = \mathbb{E}_{(\mathbf{x}_\text{in},y) \sim \mathcal{D}_{\text{in}}^{\text{train}}} \big[ (\max(0, E(\mathbf{x}_\text{in}) - m_\text{in}))^2 \big] + \mathbb{E}_{\mathbf{x}_\text{out} \sim \mathcal{D}_{\text{out}}^{\text{train}}} \big[ (\max(0, m_\text{out} - E(\mathbf{x}_\text{out})))^2 \big],$ where $m_\text{in}$ and $m_\text{out}$ are margin parameters for the ID and OOD energies, respectively.
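A sketch of this objective in PyTorch is given below. The margin values `m_in` and `m_out` are illustrative placeholders (the paper tunes them per dataset), and during fine-tuning this term is added to the usual cross-entropy loss with a weighting coefficient:

```python
import torch
import torch.nn.functional as F

def energy(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """Energy score: -T * logsumexp(logits / T)."""
    return -T * torch.logsumexp(logits / T, dim=1)

def energy_bounded_loss(logits_in: torch.Tensor,
                        logits_out: torch.Tensor,
                        m_in: float = -25.0,   # illustrative margin for ID energy
                        m_out: float = -7.0    # illustrative margin for OOD energy
                        ) -> torch.Tensor:
    """Squared hinge penalties: push ID energies below m_in
    and OOD energies above m_out."""
    loss_in = F.relu(energy(logits_in) - m_in).pow(2).mean()
    loss_out = F.relu(m_out - energy(logits_out)).pow(2).mean()
    return loss_in + loss_out

# Fine-tuning step (sketch; lam is a tunable weight):
# total = F.cross_entropy(logits_in, labels) \
#         + lam * energy_bounded_loss(logits_in, logits_out)
```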

Results

The empirical results are compelling. Using WideResNet models trained on CIFAR-10, the paper demonstrates that energy scores substantially outperform softmax confidence scores on several OOD benchmarks (e.g., iSUN, Places365, Texture, SVHN, and LSUN). Specifically, the energy score reduced the average False Positive Rate (FPR) at 95% True Positive Rate (TPR) by 18.03% compared to softmax confidence scores. Furthermore, energy-bounded fine-tuning outperformed state-of-the-art methods such as Outlier Exposure (OE), reducing FPR by significant margins (e.g., a 5.20% improvement over OE on CIFAR-10).
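For reference, FPR at 95% TPR measures how many OOD inputs slip past a threshold chosen to retain 95% of ID inputs. A minimal NumPy sketch of this metric, assuming score arrays that follow the convention "higher = more ID-like" (e.g., negative energy):

```python
import numpy as np

def fpr_at_95_tpr(scores_id: np.ndarray, scores_ood: np.ndarray) -> float:
    """Fraction of OOD samples wrongly accepted as ID when the
    threshold is set to keep 95% of ID samples."""
    # 5th percentile of ID scores: 95% of ID scores lie above it (TPR = 95%).
    tau = np.percentile(scores_id, 5)
    return float(np.mean(scores_ood > tau))
```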

Discussion and Implications

The theoretical and empirical grounding of the energy-based approach makes it broadly applicable to OOD detection tasks. By moving away from the softmax posterior's susceptibility to overconfidence, the energy score aligns more closely with the underlying data density, offering a robust alternative for safety-critical machine learning deployments.

Theoretical Contributions:

The paper clearly demonstrates the biases inherent in softmax-based OOD detection and mathematically justifies the superiority of energy scores. In particular, it shows that the log of the maximum softmax probability decomposes as $\log \max_y p(y \mid \mathbf{x}) = E(\mathbf{x}; f) + f^{\max}(\mathbf{x})$ (at $T = 1$), so the softmax confidence score is the energy score shifted by the largest logit, a bias that drives overconfidence on OOD inputs. This foundation can inspire further theoretical exploration of the gap between discriminative and generative modeling for robust OOD detection.

Practical Applications:

The parameter-free nature of energy scores for pre-trained models simplifies the deployment for various practical applications. The energy-bounded fine-tuning approach, leveraging auxiliary OOD data, can lead to more reliable systems, especially in environments where it is critical to detect anomalies, such as autonomous vehicles and medical diagnostics.

Future Developments:

The promising results open avenues for extending this framework to machine learning tasks beyond image classification, including speech recognition, natural language processing, and active learning. Further research could also investigate the interplay between temperature scaling and energy scores to optimize performance across diverse datasets.

Conclusion

"Energy-based Out-of-distribution Detection" introduces a significant advancement in the methodology for AI safety and reliability. By leveraging energy scores, which inherently align with the probability density of data, the proposed framework offers a theoretically sound and practically effective solution for OOD detection. The improvements over traditional methods, both in theoretical clarity and empirical performance, make this approach a valuable contribution to the field of machine learning. As this research progresses, it may serve as a cornerstone for developing more dependable AI systems in various domains.