Characteristics of Monte Carlo Dropout in Wide Neural Networks (2007.05434v1)

Published 10 Jul 2020 in cs.LG, math.ST, stat.ML, and stat.TH

Abstract: Monte Carlo (MC) dropout is one of the state-of-the-art approaches for uncertainty estimation in neural networks (NNs). It has been interpreted as approximately performing Bayesian inference. Building on previous work on the approximation of Gaussian processes by wide and deep neural networks with random weights, we study the limiting distribution of wide untrained NNs under dropout more rigorously and prove that they, too, converge to Gaussian processes for fixed sets of weights and biases. We sketch an argument that this property might also hold for infinitely wide feed-forward networks that are trained with (full-batch) gradient descent. The theory is contrasted with an empirical analysis in which we find correlations and non-Gaussian behaviour for the pre-activations of finite-width NNs. We therefore investigate how (strongly) correlated pre-activations can induce non-Gaussian behaviour in NNs with strongly correlated weights.
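As a concrete illustration of the MC dropout procedure the abstract refers to, the sketch below keeps dropout active at prediction time and aggregates multiple stochastic forward passes into a predictive mean and standard deviation, which serves as the uncertainty estimate. This is a minimal PyTorch sketch, not the authors' implementation; the network architecture, its width, the dropout rate, and the helper name `mc_dropout_predict` are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class WideMLP(nn.Module):
    """Hypothetical wide feed-forward network with dropout (illustrative only)."""
    def __init__(self, in_dim=10, width=4096, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, width),
            nn.ReLU(),
            nn.Dropout(p),  # dropout stays stochastic during MC sampling
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=100):
    """Run n_samples stochastic forward passes with dropout enabled
    and return the predictive mean and standard deviation."""
    model.train()  # keeps dropout layers sampling; no weights are updated here
    samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

model = WideMLP()
x = torch.randn(8, 10)
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)  # torch.Size([8, 1]) torch.Size([8, 1])
```

The spread of the sampled outputs is what MC dropout reports as predictive uncertainty; the paper's theoretical results concern the distribution such networks induce in the infinite-width limit.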

Authors (5)
  1. Joachim Sicking (15 papers)
  2. Maram Akila (22 papers)
  3. Tim Wirtz (23 papers)
  4. Sebastian Houben (21 papers)
  5. Asja Fischer (63 papers)
Citations (6)
