Generalising realisability in statistical learning theory under epistemic uncertainty

Published 22 Feb 2024 in cs.LG, cs.AI, math.ST, and stat.TH (arXiv:2402.14759v1)

Abstract: The purpose of this paper is to examine how central notions in statistical learning theory, such as realisability, generalise under the assumption that the training and test distributions are drawn from the same credal set, i.e., a convex set of probability distributions. This can be considered a first step towards a more general treatment of statistical learning under epistemic uncertainty.
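The abstract's central object, a credal set, can be made concrete with a small sketch. The following is an illustration only, not code from the paper: it represents a credal set on a finite outcome space by the extreme points of a convex polytope of distributions, draws distinct "train" and "test" distributions from that set (the setting the abstract describes), and computes the lower and upper probabilities of an event, which by convexity are attained at the extreme points. All names and the specific vertex values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical credal set on a 3-outcome space, given by its extreme
# points (each row is a probability distribution; the credal set is
# their convex hull).
extreme_points = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

def sample_member(vertices, rng):
    """Draw a random element of the credal set as a Dirichlet-weighted
    convex combination of its extreme points."""
    weights = rng.dirichlet(np.ones(len(vertices)))
    return weights @ vertices

# Train and test distributions may differ, but both lie in the same
# credal set -- the generalised i.i.d. assumption the paper studies.
p_train = sample_member(extreme_points, rng)
p_test = sample_member(extreme_points, rng)

# Lower/upper probability of the event A = {outcome 0}: by convexity,
# the extrema over the credal set are attained at extreme points.
lower_p = extreme_points[:, 0].min()
upper_p = extreme_points[:, 0].max()
```

Under this picture, classical realisability (zero risk under a single distribution) has to be revisited: risk now varies over the credal set, and lower/upper probabilities like `lower_p` and `upper_p` bound how much.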

