The Zero Resource Speech Challenge 2021: Spoken language modelling (2104.14700v2)
Abstract: We present the Zero Resource Speech Challenge 2021, which asks participants to learn a language model directly from audio, without any text or labels. The challenge is based on the Libri-light dataset, which provides up to 60k hours of audio from English audiobooks without any associated text. We provide a pipeline baseline system consisting of an encoder based on contrastive predictive coding (CPC), a quantizer ($k$-means), and a standard language model (BERT or LSTM). The metrics evaluate the learned representations at the acoustic (ABX discrimination), lexical (spot-the-word), syntactic (acceptability judgment), and semantic (similarity judgment) levels. We present an overview of the eight submitted systems from four groups and discuss the main results.
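The quantization stage of the baseline pipeline can be sketched as follows. This is a minimal illustration, not the challenge code: the CPC encoder is stubbed out with random frame embeddings, and the $k$-means step is a toy NumPy implementation that maps each frame to a discrete unit id, the kind of pseudo-text sequence that would then be fed to the BERT or LSTM language model.

```python
import numpy as np

# Hypothetical stand-in for the CPC encoder: in the real baseline, windows of
# raw audio are mapped to contextual embeddings; here we use random vectors.
rng = np.random.default_rng(0)
frames = rng.normal(size=(1000, 16))  # (n_frames, feature_dim)

def kmeans_quantize(x, k=8, iters=20, seed=0):
    """Toy k-means quantizer: returns one discrete unit id per frame."""
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest center (squared Euclidean distance)
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # move each center to the mean of its cluster (skip empty clusters)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels

units = kmeans_quantize(frames)  # discrete unit sequence for the language model
print(units.shape, units.min(), units.max())
```

The key design point is that after this step the audio is represented as a sequence over a small discrete vocabulary, so any standard text language model can be trained on it without modification.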
- Ewan Dunbar
- Mathieu Bernard
- Nicolas Hamilakis
- Tu Anh Nguyen
- Maureen de Seyssel
- Morgane Rivière
- Eugene Kharitonov
- Emmanuel Dupoux
- Patricia Rozé