
Language learnability in the limit for general metrics: a Gold-Angluin result

Published 24 Mar 2021 in cs.CL and cs.FL | arXiv:2103.13166v1

Abstract: In his pioneering work in the field of Inductive Inference, Gold (1967) proved that a set containing all finite languages and at least one infinite language over the same fixed alphabet is not learnable in the exact sense. Within the same framework, Angluin (1980) provided a complete characterization for the learnability of language families. Mathematically, the concept of exact learning in that classical setting can be seen as the use of a particular type of metric for learning in the limit. In this short research note we use Niyogi's extended version of a theorem by Blum and Blum (1975) on the existence of locking data sets to prove a necessary condition for learnability in the limit of any family of languages in any given metric. This recovers Gold's theorem as a special case. Moreover, when the language family is further assumed to contain all finite languages, the same condition also becomes sufficient for learnability in the limit.
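To make the metric viewpoint in the abstract concrete, here is a minimal formal sketch in standard Gold-style notation (the symbols $M$, $t_i$, and $d_0$ below are our own choices, not taken from the note): exact identification corresponds to convergence under the discrete metric, and the note's setting replaces that metric with an arbitrary one.

A learner $M$ reads a text $t_1, t_2, \dots$ for a target language $L$ and outputs hypotheses $M(t_1, \dots, t_n)$. Gold's exact identification in the limit requires
\[
\exists N \ \forall n \ge N:\quad M(t_1, \dots, t_n) = L,
\]
which is the same as $d_0\bigl(M(t_1, \dots, t_n), L\bigr) \to 0$ under the discrete metric
\[
d_0(L, L') =
\begin{cases}
0, & L = L',\\
1, & L \neq L'.
\end{cases}
\]
Learnability in the limit with respect to a general metric $d$ on languages then asks that
\[
\lim_{n \to \infty} d\bigl(M(t_1, \dots, t_n), L\bigr) = 0
\]
for every text for every language $L$ in the family; the classical notion is recovered by taking $d = d_0$.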

