Pitfalls of Epistemic Uncertainty Quantification through Loss Minimisation (2203.06102v2)

Published 11 Mar 2022 in cs.LG and stat.ML

Abstract: Uncertainty quantification has received increasing attention in machine learning in the recent past. In particular, a distinction between aleatoric and epistemic uncertainty has been found useful in this regard. The latter refers to the learner's (lack of) knowledge and appears to be especially difficult to measure and quantify. In this paper, we analyse a recent proposal based on the idea of a second-order learner, which yields predictions in the form of distributions over probability distributions. While standard (first-order) learners can be trained to predict accurate probabilities, namely by minimising suitable loss functions on sample data, we show that loss minimisation does not work for second-order predictors: The loss functions proposed for inducing such predictors do not incentivise the learner to represent its epistemic uncertainty in a faithful way.
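The abstract contrasts first-order predictors, which output a single probability distribution and can be trained with proper scoring rules such as the log loss, with second-order predictors, which output a distribution over distributions (for instance a Dirichlet, or a Beta in the binary case). The sketch below is a rough, hypothetical illustration of that distinction only; the Beta parameterisation and the expected-log-loss objective used here are assumptions for illustration and are not the paper's specific loss constructions.

```python
import numpy as np
from scipy.stats import beta as beta_dist
from scipy.integrate import quad

# --- First-order predictor: outputs a single probability p for class 1. ---
# Minimising a proper scoring rule (here, the log loss) on samples drawn
# with true probability theta pushes the empirical minimiser towards theta.
def log_loss(p, y):
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
theta = 0.7                       # ground-truth P(y = 1)
y = rng.binomial(1, theta, size=10_000)

grid = np.linspace(0.01, 0.99, 99)
emp_risk = np.array([log_loss(p, y).mean() for p in grid])
print("first-order minimiser ~", grid[emp_risk.argmin()])   # close to 0.7

# --- Second-order predictor: outputs a Beta(a, b) distribution over p. ---
# Epistemic uncertainty would be expressed through the spread of the Beta
# (small concentration a + b when little is known).  As an illustration of
# the collapse phenomenon, take as objective the expected log loss under
# the Beta (an assumed, illustrative choice): with the mean held fixed,
# this objective keeps decreasing as the concentration grows, so the
# "optimal" second-order prediction is a point mass, independent of how
# much data was observed.
def expected_log_loss(a, b, y_mean):
    # E_{p ~ Beta(a, b)} of the per-example log loss on the observed sample
    integrand = lambda p: beta_dist.pdf(p, a, b) * (
        -(y_mean * np.log(p) + (1 - y_mean) * np.log(1 - p))
    )
    val, _ = quad(integrand, 1e-6, 1 - 1e-6)
    return val

y_mean = y.mean()
for conc in [2, 20, 200, 2000]:
    a, b = conc * y_mean, conc * (1 - y_mean)    # mean fixed at y_mean
    print(f"concentration {conc:5d}: expected loss "
          f"{expected_log_loss(a, b, y_mean):.4f}")
```

By Jensen's inequality the expected negative log likelihood under the Beta is bounded below by its value at the mean, so shrinking the spread can only reduce this particular objective; the concentration is therefore never penalised, which mirrors (in a simplified form) the kind of incentive failure the paper analyses for second-order loss minimisation.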

Authors (3)
  1. Viktor Bengs (23 papers)
  2. Eyke Hüllermeier (129 papers)
  3. Willem Waegeman (30 papers)
Citations (24)
