Sampling-based Bayesian Inference with gradient uncertainty (1812.03285v2)

Published 8 Dec 2018 in cs.LG, cs.AI, and stat.ML

Abstract: Deep neural networks (NNs) have achieved impressive performance, often exceeding human performance on many computer vision tasks. However, one of the most challenging issues that remains is that NNs are overconfident in their predictions, which can be very harmful when this arises in safety-critical applications. In this paper, we show that predictive uncertainty can be efficiently estimated when we incorporate the concept of gradient uncertainty into posterior sampling. The proposed method is tested on two different datasets: MNIST for in-distribution confusing examples and notMNIST for out-of-distribution data. We show that our method is able to represent predictive uncertainty efficiently on both datasets.
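
The abstract does not spell out the algorithm, but the general recipe it points to (draw approximate posterior samples of the network weights with a stochastic-gradient sampler, then score an input by how much the sampled networks disagree) can be sketched as follows. This is a minimal illustration assuming an SGLD-style sampler and predictive entropy as the uncertainty score; the function names, hyperparameters, and the omission of a prior term are assumptions made for brevity, not the paper's actual method.

```python
# Illustrative sketch only: SGLD-style posterior sampling plus predictive-entropy
# uncertainty. Model, step size, and sample count are assumptions, not the paper's settings.
import torch
import torch.nn.functional as F

def sgld_step(model, x, y, lr=1e-4, n_data=60000):
    """One stochastic gradient Langevin dynamics update on minibatch (x, y); prior term omitted."""
    loss = F.cross_entropy(model(x), y) * n_data      # rescale minibatch loss to full-dataset scale
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            noise = torch.randn_like(p) * (2.0 * lr) ** 0.5
            p.add_(-lr * p.grad + noise)              # gradient step plus injected Gaussian noise

def predictive_entropy(sampled_models, x):
    """Average softmax outputs over posterior samples, then take the entropy.

    High entropy flags inputs the samples disagree on, e.g. ambiguous MNIST
    digits (in-distribution) or notMNIST letters (out-of-distribution)."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=-1) for m in sampled_models]).mean(0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
```

In practice, sampled_models would hold snapshots of the network taken at intervals after the sampler has burned in; inputs with high predictive entropy are the ones the model should flag as uncertain.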

Authors (4)
  1. Chanwoo Park (24 papers)
  2. Jae Myung Kim (14 papers)
  3. Seok Hyeon Ha (1 paper)
  4. Jungwoo Lee (39 papers)
Citations (8)
