Language learnability in the limit for general metrics: a Gold-Angluin result
Abstract: In his pioneering work in the field of Inductive Inference, Gold (1967) proved that a family of languages containing all finite languages and at least one infinite language over a fixed alphabet is not learnable in the exact sense. Within the same framework, Angluin (1980) provided a complete characterization of the learnability of language families. Mathematically, the concept of exact learning in that classical setting can be seen as the use of a particular type of metric for learning in the limit. In this short research note, we use Niyogi's extended version of a theorem by Blum and Blum (1975) on the existence of locking data sets to prove a necessary condition for learnability in the limit of any family of languages in any given metric. This recovers Gold's theorem as a special case. Moreover, when the language family is further assumed to contain all finite languages, the same condition also becomes sufficient for learnability in the limit.
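As a rough sketch of the metric-based notion suggested by the abstract (the precise definitions are not given there, so the notation below is an assumption about how the paper formalizes convergence): let $\mathcal{L}$ be a family of languages, $d$ a metric on $\mathcal{L}$, and $\mathcal{A}$ a learner mapping finite data sequences to hypotheses in $\mathcal{L}$.

\[
\mathcal{A} \text{ learns } L \in \mathcal{L} \text{ in the limit w.r.t. } d
\iff
\lim_{n \to \infty} d\bigl(\mathcal{A}(t_1,\dots,t_n),\, L\bigr) = 0
\quad \text{for every text } t_1, t_2, \dots \text{ for } L.
\]

Taking $d$ to be the discrete metric, $d(L, L') = 0$ if $L = L'$ and $1$ otherwise, convergence forces $\mathcal{A}(t_1,\dots,t_n) = L$ for all sufficiently large $n$, which is the classical exact (Gold-style) identification in the limit referred to above.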