RBUE: A ReLU-Based Uncertainty Estimation Method of Deep Neural Networks (2107.07197v2)

Published 15 Jul 2021 in cs.LG

Abstract: Deep neural networks (DNNs) have successfully learned useful data representations in various tasks. However, assessing the reliability of these representations remains a challenge. Deep Ensemble is widely regarded as the state-of-the-art method for high-quality uncertainty estimation, but it is expensive to train and test. MC-Dropout is a popular, less expensive alternative, but its predictions lack diversity. To estimate uncertainty with higher quality in less time, we introduce a ReLU-Based Uncertainty Estimation (RBUE) method. Instead of randomly dropping neurons as in MC-Dropout, or relying on the randomness of the networks' initial weights as in Deep Ensemble, RBUE adds randomness to the activation function module, making the outputs diverse. Under this method, we propose two strategies, MC-DropReLU and MC-RReLU, to estimate uncertainty. We analyze and compare the output diversity of MC-Dropout and our method from the variance perspective and derive the relationship between the hyperparameters and predictive diversity in the two methods. Moreover, our method is simple to implement and does not require modifying the existing model. We experimentally validate RBUE on three widely used datasets: CIFAR10, CIFAR100, and TinyImageNet. The experiments demonstrate that our method achieves competitive performance while being more favorable in training time and memory requirements.
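The abstract's core idea, randomizing the activation function and running multiple stochastic forward passes to obtain a predictive distribution, can be sketched as follows. This is a minimal illustration in NumPy, not the authors' implementation: the toy two-layer network, the slope bounds, and the sample count are all illustrative assumptions, and only the MC-RReLU strategy (a randomized ReLU kept stochastic at test time) is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def rrelu(x, lower=1.0 / 8, upper=1.0 / 3):
    """Randomized ReLU: negative inputs are scaled by a slope drawn
    uniformly from [lower, upper]; the randomness is kept at test time
    so repeated forward passes yield diverse outputs."""
    slope = rng.uniform(lower, upper, size=x.shape)
    return np.where(x >= 0, x, slope * x)

def mc_rrelu_predict(x, w1, w2, n_samples=50):
    """Run n_samples stochastic forward passes through a toy two-layer
    network (hypothetical weights w1, w2) and return the predictive
    mean and variance as the uncertainty estimate."""
    outputs = np.stack([rrelu(x @ w1) @ w2 for _ in range(n_samples)])
    return outputs.mean(axis=0), outputs.var(axis=0)

# Toy usage: one 2-feature input through a 2-4-1 network.
x = np.array([[1.0, -2.0]])
w1 = rng.normal(size=(2, 4))
w2 = rng.normal(size=(4, 1))
mean, var = mc_rrelu_predict(x, w1, w2)
```

Because only the activation's negative-branch slope is resampled, the trained weights are untouched, which matches the abstract's claim that the method needs no modification of the existing model.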

Authors (5)
  1. Yufeng Xia (1 paper)
  2. Jun Zhang (1008 papers)
  3. Zhiqiang Gong (28 papers)
  4. Tingsong Jiang (24 papers)
  5. Wen Yao (61 papers)
Citations (1)
