
Equivalences between learning of data and probability distributions, and their applications

Published 5 Jan 2018 in math.LO, cs.IT, and math.IT (arXiv:1801.02566v5)

Abstract: Algorithmic learning theory traditionally studies the learnability of effective infinite binary sequences (reals), while recent work by [Vitanyi and Chater, 2017] and [Bienvenu et al., 2014] has adapted this framework to the study of learnability of effective probability distributions from random data. We prove that for certain families of probability measures that are parametrized by reals, learnability of a subclass of probability measures is equivalent to learnability of the class of the corresponding real parameters. This equivalence allows us to transfer results from classical algorithmic learning theory to the learning theory of probability measures. We present a number of such applications, providing many new results regarding EX and BC learnability of classes of measures, thus drawing parallels between the two learning theories.
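The abstract's notion of EX-learning a parametrized class of measures can be illustrated with a toy sketch (not a construction from the paper): a learner observes growing prefixes of a binary sample and outputs, at each stage, a guess for the parameter of the generating Bernoulli measure; EX-learning requires the sequence of guesses to stabilize on the correct parameter. The parameter class and the closest-frequency rule below are illustrative assumptions, not the paper's method.

```python
# Toy EX-style learner (illustrative only): identify, in the limit,
# which measure in a finite class {Bernoulli(p) : p in PARAMS}
# generated the observed bits.
PARAMS = [0.25, 0.5, 0.75]  # hypothetical parameter class

def guess(bits):
    """Return the parameter whose Bernoulli measure best fits the sample."""
    if not bits:
        return PARAMS[0]
    freq = sum(bits) / len(bits)
    return min(PARAMS, key=lambda p: abs(p - freq))

# Successive hypotheses on growing prefixes of the data;
# EX-learning asks that this sequence of guesses eventually stabilize.
data = [1, 1, 1, 0] * 50  # a sample with limiting frequency 3/4
guesses = [guess(data[:n]) for n in range(1, len(data) + 1)]
```

In this finite-class setting, stabilization happens almost surely for genuinely random data; the paper's setting concerns effective (computable) measures and classes parametrized by reals, where such transfer results are the substance of the theorems.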
