
Evaluating computational models of infant phonetic learning across languages (2008.02888v1)

Published 6 Aug 2020 in cs.CL, cs.SD, and eess.AS

Abstract: In the first year of life, infants' speech perception becomes attuned to the sounds of their native language. Many accounts of this early phonetic learning exist, but computational models predicting the attunement patterns observed in infants from the speech input they hear have been lacking. A recent study presented the first such model, drawing on algorithms proposed for unsupervised learning from naturalistic speech, and tested it on a single phone contrast. Here we study five such algorithms, selected for their potential cognitive relevance. We simulate phonetic learning with each algorithm and perform tests on three phone contrasts from different languages, comparing the results to infants' discrimination patterns. The five models display varying degrees of agreement with empirical observations, showing that our approach can help decide between candidate mechanisms for early phonetic learning, and providing insight into which aspects of the models are critical for capturing infants' perceptual development.
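The evaluation the abstract describes, testing whether a model's learned representations discriminate phone contrasts the way infants do, is typically run as a machine ABX discrimination task over pairs of phone tokens. The sketch below is a hypothetical illustration of such a test, not the paper's exact procedure: the function names, the choice of DTW with cosine frame distances, and the exhaustive triple enumeration are all illustrative assumptions.

```python
import numpy as np


def dtw_distance(x, y):
    """Dynamic-time-warping distance between two feature sequences
    (frames x dims), using cosine distance between frames.
    Illustrative choice; other distances could be substituted."""
    nx, ny = len(x), len(y)
    # Pairwise cosine distances between all frames of x and y.
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    yn = y / np.linalg.norm(y, axis=1, keepdims=True)
    d = 1.0 - xn @ yn.T
    # Dynamic-programming table of cumulative alignment costs.
    acc = np.full((nx + 1, ny + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, nx + 1):
        for j in range(1, ny + 1):
            acc[i, j] = d[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1]
            )
    return acc[nx, ny] / (nx + ny)  # length-normalised cost


def abx_accuracy(a_tokens, b_tokens, x_tokens):
    """Fraction of (A, B, X) triples in which X, drawn from the same
    phone category as A, is closer to A than to B under the model's
    representation. Higher accuracy means the contrast is better
    discriminated."""
    correct, total = 0, 0
    for a in a_tokens:
        for b in b_tokens:
            for x in x_tokens:
                correct += dtw_distance(a, x) < dtw_distance(b, x)
                total += 1
    return correct / total
```

In a setup like this, each candidate learning algorithm would be trained on naturalistic speech from a given language, its representations extracted for tokens of a phone contrast (e.g. one attested to be hard or easy for infants learning that language), and the resulting ABX accuracies compared against infants' discrimination patterns.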

Authors (5)
  1. Yevgen Matusevych (12 papers)
  2. Thomas Schatz (5 papers)
  3. Herman Kamper (80 papers)
  4. Naomi H. Feldman (3 papers)
  5. Sharon Goldwater (40 papers)
Citations (14)
