
A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges (2011.06225v4)

Published 12 Nov 2020 in cs.LG, cs.AI, and cs.CV

Abstract: Uncertainty quantification (UQ) plays a pivotal role in reduction of uncertainties during both optimization and decision making processes. It can be applied to solve a variety of real-world applications in science and engineering. Bayesian approximation and ensemble learning techniques are two most widely-used UQ methods in the literature. In this regard, researchers have proposed different UQ methods and examined their performance in a variety of applications such as computer vision (e.g., self-driving cars and object detection), image processing (e.g., image restoration), medical image analysis (e.g., medical image classification and segmentation), natural language processing (e.g., text classification, social media texts and recidivism risk-scoring), bioinformatics, etc. This study reviews recent advances in UQ methods used in deep learning. Moreover, we also investigate the application of these methods in reinforcement learning (RL). Then, we outline a few important applications of UQ methods. Finally, we briefly highlight the fundamental research challenges faced by UQ methods and discuss the future research directions in this field.

The paper "A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges" presents a comprehensive survey of recent methods for uncertainty quantification (UQ) in deep learning (DL). The authors discuss various UQ techniques, including Bayesian approximation and ensemble learning, and their applications across diverse fields. The paper also highlights research challenges and future directions within the UQ domain.

The paper begins by defining the two primary types of uncertainty: aleatoric and epistemic. Aleatoric uncertainty, also known as data uncertainty, arises from irreducible noise in the data. Epistemic uncertainty, also known as knowledge uncertainty, stems from a lack of knowledge or data. The authors emphasize the importance of quantifying these uncertainties to improve the trustworthiness and accuracy of AI systems.

The paper then provides a detailed review of UQ methods based on Bayesian techniques:

  • Bayesian Neural Networks (BNNs) are introduced as a way to represent model parameters as probability distributions, offering robustness against overfitting.
  • Monte Carlo Dropout (MCD) is presented as an efficient approximation to Bayesian inference, keeping dropout active at test time to estimate prediction uncertainty. The loss function with $L_2$ regularization is expressed as:

    $$\mathcal{L}_{dropout} := \frac{1}{N}\sum_{i=1}^{N} E(y_i,\hat{y}_i) + \lambda \sum_{l=1}^{L}\left(\|W_l\|_2^2 + \|b_l\|_2^2\right)$$

    where:

    • $N$ is the number of samples
    • $E(y_i, \hat{y}_i)$ is the error between the true value $y_i$ and the predicted value $\hat{y}_i$
    • $\lambda$ is the regularization parameter
    • $L$ is the number of layers in the neural network
    • $W_l$ is the weight matrix for layer $l$
    • $b_l$ is the bias vector for layer $l$
  • Markov Chain Monte Carlo (MCMC) methods are discussed for approximating posterior distributions, with a focus on Stochastic Gradient MCMC (SG-MCMC) for training DNNs.
  • Variational Inference (VI) is presented as an optimization-based approach to approximate posterior distributions in BNNs. The loss is defined as:

    $$\mathcal{L}(\Phi) \approx \frac{1}{2|\mathcal{D}|}\sum_{i=1}^{|\mathcal{D}|} \mathcal{L}_{R}\left(y^{(i)}, x^{(i)}\right) + \frac{1}{|\mathcal{D}|}\, \mathrm{KL}\left(q_{\phi}(w)\,\|\,p(w)\right)$$

    where:

    • $|\mathcal{D}|$ is the number of samples
    • $\mathcal{L}_{R}(y, x)$ is the reconstruction loss
    • $\mathrm{KL}(q_{\phi}(w)\,\|\,p(w))$ is the Kullback-Leibler divergence between the approximate posterior $q_{\phi}(w)$ and the prior $p(w)$
  • Bayesian Active Learning (BAL) is introduced as a method to select the most informative unlabeled samples for annotation, improving learning efficiency.
  • Bayes by Backprop (BBB) is described as an algorithm for quantifying uncertainty in neural network weights by learning a probability distribution over the weights.
  • Variational Autoencoders (VAEs) are presented as generative models for learning representations and modeling posterior distributions.
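The MCD idea above, training with dropout and then keeping it active at prediction time so that repeated stochastic forward passes sample an approximate posterior, can be sketched without any DL framework. Everything below (the toy weights, dropout rate, and function names) is illustrative and not taken from the paper:

```python
import random

def mc_dropout_predict(forward, x, n_samples=1000, seed=0):
    """Draw `n_samples` stochastic forward passes with dropout kept ON
    and return the predictive mean and variance (an uncertainty proxy)."""
    rng = random.Random(seed)
    preds = [forward(x, rng) for _ in range(n_samples)]
    mean = sum(preds) / n_samples
    var = sum((p - mean) ** 2 for p in preds) / n_samples
    return mean, var

# Toy one-hidden-layer regressor; weights and dropout rate are hypothetical.
W1 = [0.5, -0.3, 0.8]   # input -> hidden weights
W2 = [1.0, 0.7, -0.2]   # hidden -> output weights
P_DROP = 0.5

def forward(x, rng):
    h = [max(0.0, w * x) for w in W1]  # ReLU hidden layer
    # Inverted dropout: zero a unit with prob P_DROP, rescale survivors.
    h = [0.0 if rng.random() < P_DROP else v / (1.0 - P_DROP) for v in h]
    return sum(w2 * v for w2, v in zip(W2, h))

mean, var = mc_dropout_predict(forward, x=1.0)
```

The spread of the sampled predictions (here, `var`) is what MCD reports as predictive uncertainty; a deterministic network would give `var = 0`.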

The paper also reviews other UQ methods beyond Bayesian techniques:

  • Deep Gaussian Processes (DGPs) are discussed as multi-layer models for accurate uncertainty modeling.
  • Laplace Approximations (LAs) are presented as a way to approximate Bayesian inference by building a Gaussian distribution around the maximum a posteriori (MAP) estimate.
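The Laplace approximation amounts to fitting a Gaussian at the MAP estimate whose variance is the inverse of the curvature (Hessian) of the negative log-posterior there. A minimal one-dimensional sketch, using a toy quadratic posterior with hypothetical numbers so the fit is exact:

```python
def laplace_approx(neg_log_post, theta_map, eps=1e-4):
    """Fit N(theta_map, sigma^2) around the MAP estimate, where
    sigma^2 = 1 / H and H is the second derivative of the negative
    log-posterior at theta_map (central finite difference)."""
    f = neg_log_post
    h = (f(theta_map + eps) - 2.0 * f(theta_map) + f(theta_map - eps)) / eps**2
    return theta_map, 1.0 / h  # (mean, variance)

# Toy negative log-posterior: an exact Gaussian with mean 2.0 and
# variance 0.25, so the Laplace fit should recover both values.
neg_log_post = lambda t: (t - 2.0) ** 2 / (2.0 * 0.25)
mean, var = laplace_approx(neg_log_post, theta_map=2.0)
```

For a true Gaussian posterior the approximation is exact, which is why this toy example recovers the mean and variance; for real networks the Hessian is a matrix and is itself approximated.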

The application of UQ in Reinforcement Learning (RL) is explored, where uncertainty plays a critical role in decision-making. The paper highlights the use of Bayesian methods and ensemble techniques for quantifying uncertainty in RL agents. The authors discuss the Bayesian Policy Optimization (BPO) method for Partially Observable Markov Decision Processes (POMDPs), which uses a Bayes filter to compute the belief $b$ over the hidden state:

$$b'(s') = \psi(b, a', o') = \eta\, Z(s', a', o') \sum_{s\in S} b(s)\, T(s, a', s')$$

where:

  • $b'$ is the updated belief state
  • $b$ is the previous belief state
  • $a'$ is the action
  • $o'$ is the observation
  • $\eta$ is a normalization factor
  • $S$ is the state space
  • $T$ is the transition function
  • $Z$ is the observation function
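The belief update above is a direct prediction-then-correction step and fits in a few lines of plain Python. The two-state weather model below (states, transition stickiness, observation likelihoods) is entirely hypothetical, chosen only to make the update concrete:

```python
def belief_update(b, a, o, states, T, Z):
    """One Bayes-filter step:
    b'(s') = eta * Z(s', a, o) * sum_s b(s) * T(s, a, s')."""
    unnorm = {}
    for s_next in states:
        # Prediction: push the old belief through the transition model.
        predicted = sum(b[s] * T(s, a, s_next) for s in states)
        # Correction: weight by the likelihood of the observation.
        unnorm[s_next] = Z(s_next, a, o) * predicted
    eta = 1.0 / sum(unnorm.values())  # normalization factor
    return {s: eta * p for s, p in unnorm.items()}

# Toy 2-state weather POMDP (all numbers hypothetical).
states = ["rain", "sun"]
T = lambda s, a, s2: 0.7 if s == s2 else 0.3          # sticky weather
Z = lambda s2, a, o: 0.9 if s2 == "rain" else 0.2     # P(o="umbrella" | s')

b0 = {"rain": 0.5, "sun": 0.5}
b1 = belief_update(b0, a="wait", o="umbrella", states=states, T=T, Z=Z)
```

Starting from a uniform belief, observing "umbrella" shifts most of the probability mass onto "rain", exactly the correction the $Z$ term performs.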

Ensemble techniques are also discussed as a means to improve predictive performance and quantify uncertainty. The authors describe how ensembles can capture different sources of model uncertainty and provide more reliable estimates. The total uncertainty can be decomposed into expected data uncertainty and knowledge uncertainty via the Mutual Information (MI) formulation:

$$\underbrace{\mathcal{MI}[y, \theta \mid x^\star, \mathcal{D}]}_{\text{Knowledge Uncertainty}} = \underbrace{H\left[\mathbb{E}_{p(\theta \mid \mathcal{D})}[P(y \mid x^\star, \theta)]\right]}_{\text{Total Uncertainty}} - \underbrace{\mathbb{E}_{p(\theta \mid \mathcal{D})}\left[H[P(y \mid x^\star, \theta)]\right]}_{\text{Expected Data Uncertainty}}$$

where:

  • $y$ is the predicted output
  • $\theta$ are the model parameters
  • $x^\star$ is the test input
  • $\mathcal{D}$ is the training dataset
  • $H$ is the entropy function
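For a finite ensemble, the expectation over $p(\theta \mid \mathcal{D})$ becomes an average over members, so the decomposition is a few entropy computations. A minimal sketch (the two-member, two-class probabilities are made-up numbers chosen so the members disagree):

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def uncertainty_decomposition(member_probs):
    """Decompose total predictive uncertainty into expected data
    uncertainty and knowledge (epistemic) uncertainty.
    member_probs: one class-probability vector P(y | x*, theta_m)
    per ensemble member."""
    n = len(member_probs)
    k = len(member_probs[0])
    mean_p = [sum(p[c] for p in member_probs) / n for c in range(k)]
    total = entropy(mean_p)                           # H[E_theta P(y|x*,theta)]
    data = sum(entropy(p) for p in member_probs) / n  # E_theta H[P(y|x*,theta)]
    return total, data, total - data                  # knowledge = MI

# Members that disagree -> high knowledge (epistemic) uncertainty.
total, data, knowledge = uncertainty_decomposition([[0.9, 0.1], [0.1, 0.9]])
```

When every member is individually confident but they disagree, the averaged prediction is near-uniform: total uncertainty is high, expected data uncertainty is low, and the gap is attributed to knowledge uncertainty. Identical members would give knowledge uncertainty of zero.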

The paper discusses various ensemble methods, including deep ensembles, Bayesian deep ensembles, and ensemble-based UQ in traditional machine learning, providing an overview of their strengths and weaknesses.

The paper concludes with a comprehensive review of recent articles on quantifying uncertainty in AI (ML and DL) using different approaches. The review identifies the main research gaps in UQ methods and points out concrete future directions for researchers in this domain.

Authors (12)
  1. Moloud Abdar
  2. Farhad Pourpanah
  3. Sadiq Hussain
  4. Dana Rezazadegan
  5. Li Liu
  6. Mohammad Ghavamzadeh
  7. Paul Fieguth
  8. Xiaochun Cao
  9. Abbas Khosravi
  10. Vladimir Makarenkov
  11. Saeid Nahavandi
  12. U Rajendra Acharya
Citations (1,668)