
Language learnability in the limit for general metrics: a Gold-Angluin result (2103.13166v1)

Published 24 Mar 2021 in cs.CL and cs.FL

Abstract: In his pioneering work in the field of Inductive Inference, Gold (1967) proved that a set containing all finite languages and at least one infinite language over the same fixed alphabet is not learnable in the exact sense. Within the same framework, Angluin (1980) provided a complete characterization for the learnability of language families. Mathematically, the concept of exact learning in that classical setting can be seen as the use of a particular type of metric for learning in the limit. In this short research note we use Niyogi's extended version of a theorem by Blum and Blum (1975) on the existence of locking data sets to prove a necessary condition for learnability in the limit of any family of languages in any given metric. This recovers Gold's theorem as a special case. Moreover, when the language family is further assumed to contain all finite languages, the same condition also becomes sufficient for learnability in the limit.
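To fix intuitions, here is a minimal sketch of the learning model the abstract refers to; the symbols used below ($t$, $w_n$, $h_n$, $d$, $\ell$) are illustrative notation chosen here, not necessarily the paper's own. In Gold's framework, a learner is fed a text $t = w_1, w_2, \ldots$ enumerating a target language $\ell$, and after seeing each finite prefix it outputs a hypothesis language $h_n$. Exact identification in the limit requires that the hypotheses eventually stabilize on a correct grammar for $\ell$. The metric generalization the abstract describes replaces stabilization with convergence under a metric $d$ on languages:

$$
\lim_{n \to \infty} d(h_n, \ell) = 0 .
$$

Exact learning is then recovered as the special case where $d$ is (essentially) the discrete metric, since $d(h_n, \ell) \to 0$ under that metric forces $h_n = \ell$ for all sufficiently large $n$.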
