SOMOS: The Samsung Open MOS Dataset for the Evaluation of Neural Text-to-Speech Synthesis (2204.03040v2)

Published 6 Apr 2022 in cs.SD, cs.CL, cs.LG, and eess.AS

Abstract: In this work, we present the SOMOS dataset, the first large-scale mean opinion scores (MOS) dataset consisting solely of neural text-to-speech (TTS) samples. It can be employed to train automatic MOS prediction systems focused on the assessment of modern synthesizers, and can stimulate advancements in acoustic model evaluation. It consists of 20K synthetic utterances of the LJ Speech voice, a public-domain speech dataset that is a common benchmark for building neural acoustic models and vocoders. Utterances are generated from 200 TTS systems, including vanilla neural acoustic models as well as models which allow prosodic variations. An LPCNet vocoder is used for all systems, so that the samples' variation depends only on the acoustic models. The synthesized utterances provide balanced and adequate domain and length coverage. We collect MOS naturalness evaluations on 3 English Amazon Mechanical Turk locales and share practices leading to reliable crowdsourced annotations for this task. We provide baseline results of state-of-the-art MOS prediction models on the SOMOS dataset and show the limitations that such models face when tasked with evaluating TTS utterances.
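As a rough illustration of how a MOS dataset like SOMOS is typically used to evaluate automatic MOS prediction models, the sketch below aggregates crowdsourced ratings into utterance- and system-level MOS and scores a predictor against them with the usual correlation metrics. The file names and column names (`audio_id`, `system_id`, `rating`, `predicted_mos`) are hypothetical placeholders, not the official SOMOS release schema.

```python
# Minimal sketch of evaluating a MOS predictor on a SOMOS-style dataset.
# File names and column names are hypothetical, not the official release format.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Crowdsourced ratings: one row per (utterance, rater) pair.
ratings = pd.read_csv("somos_ratings.csv")    # columns: audio_id, system_id, rating
preds = pd.read_csv("model_predictions.csv")  # columns: audio_id, predicted_mos

# Utterance-level ground-truth MOS: mean rating per synthetic utterance.
utt_mos = (
    ratings.groupby(["system_id", "audio_id"], as_index=False)["rating"]
    .mean()
    .rename(columns={"rating": "mos"})
)

# Join predictions and compute utterance-level correlations.
utt = utt_mos.merge(preds, on="audio_id")
utt_pearson, _ = pearsonr(utt["mos"], utt["predicted_mos"])
utt_spearman, _ = spearmanr(utt["mos"], utt["predicted_mos"])

# System-level MOS: average over each TTS system's utterances
# (SOMOS covers 200 acoustic-model variants).
sys_scores = utt.groupby("system_id")[["mos", "predicted_mos"]].mean()
sys_pearson, _ = pearsonr(sys_scores["mos"], sys_scores["predicted_mos"])
sys_spearman, _ = spearmanr(sys_scores["mos"], sys_scores["predicted_mos"])

print(f"Utterance-level: Pearson={utt_pearson:.3f}, Spearman={utt_spearman:.3f}")
print(f"System-level:    Pearson={sys_pearson:.3f}, Spearman={sys_spearman:.3f}")
```

System-level correlations are usually much higher than utterance-level ones, which is one reason the paper's baselines can look strong in aggregate while still struggling to rank individual synthetic utterances.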

Authors (9)
  1. Georgia Maniati (10 papers)
  2. Alexandra Vioni (9 papers)
  3. Nikolaos Ellinas (23 papers)
  4. Karolos Nikitaras (5 papers)
  5. Konstantinos Klapsas (6 papers)
  6. June Sig Sung (16 papers)
  7. Gunu Jho (9 papers)
  8. Aimilios Chalamandaris (17 papers)
  9. Pirros Tsiakoulis (17 papers)
Citations (24)
