Self-supervised language learning from raw audio: Lessons from the Zero Resource Speech Challenge (2210.15759v1)

Published 27 Oct 2022 in cs.CL, cs.SD, and eess.AS

Abstract: Recent progress in self-supervised or unsupervised machine learning has opened the possibility of building a full speech processing system from raw audio without using any textual representations or expert labels such as phonemes, dictionaries or parse trees. The contribution of the Zero Resource Speech Challenge series since 2015 has been to break down this long-term objective into four well-defined tasks -- Acoustic Unit Discovery, Spoken Term Discovery, Discrete Resynthesis, and Spoken Language Modeling -- and introduce associated metrics and benchmarks enabling model comparison and cumulative progress. We present an overview of the six editions of this challenge series since 2015, discuss the lessons learned, and outline the areas which need more work or give puzzling results.
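The Acoustic Unit Discovery task in this challenge series is conventionally scored with an ABX discriminability test: given two tokens A and B from different phonetic categories and a probe X from the same category as A, the learned representation is correct when X lies closer to A than to B. A minimal sketch of that scoring idea (the `cosine_dist` helper and the triple format are illustrative assumptions, not the challenge's official evaluation code):

```python
import numpy as np

def cosine_dist(u, v):
    # Cosine distance between two representation vectors (illustrative choice;
    # the challenge toolkit also supports frame-wise distances over sequences).
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def abx_error_rate(triples, dist=cosine_dist):
    """Fraction of (A, B, X) triples where X, drawn from the same
    category as A, ends up closer to B than to A. Lower is better;
    0.5 corresponds to chance-level representations."""
    errors = sum(1 for a, b, x in triples if dist(x, a) >= dist(x, b))
    return errors / len(triples)

# Hypothetical 2-D embeddings standing in for learned speech representations.
same = np.array([1.0, 0.0])      # category of A and X
other = np.array([0.0, 1.0])     # category of B
good_x = np.array([0.9, 0.1])    # probe near its own category
bad_x = np.array([0.0, 1.0])     # probe confused with the other category

rate = abx_error_rate([(same, other, good_x), (same, other, bad_x)])
print(rate)  # one of two triples is wrong -> 0.5
```

Real evaluations aggregate such comparisons over many phone triples (e.g. minimal pairs like "bit" vs. "bet") and average across speakers and contexts.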

Authors (3)
  1. Ewan Dunbar (22 papers)
  2. Nicolas Hamilakis (3 papers)
  3. Emmanuel Dupoux (81 papers)
Citations (25)
