
Generalization Ability of MOS Prediction Networks (2110.02635v3)

Published 6 Oct 2021 in eess.AS

Abstract: Automatic methods to predict listener opinions of synthesized speech remain elusive since listeners, systems being evaluated, characteristics of the speech, and even the instructions given and the rating scale all vary from test to test. While automatic predictors for metrics such as mean opinion score (MOS) can achieve high prediction accuracy on samples from the same test, they typically fail to generalize well to new listening test contexts. In this paper, using a variety of networks for MOS prediction including MOSNet and self-supervised speech models such as wav2vec2, we investigate their performance on data from different listening tests in both zero-shot and fine-tuned settings. We find that wav2vec2 models fine-tuned for MOS prediction have good generalization capability to out-of-domain data even for the most challenging case of utterance-level predictions in the zero-shot setting, and that fine-tuning to in-domain data can improve predictions. We also observe that unseen systems are especially challenging for MOS prediction models.

Authors (4)
  1. Erica Cooper (46 papers)
  2. Wen-Chin Huang (53 papers)
  3. Tomoki Toda (106 papers)
  4. Junichi Yamagishi (178 papers)
Citations (133)

Summary

Generalization Ability of MOS Prediction Networks

The paper "Generalization Ability of MOS Prediction Networks" presents an in-depth exploration of the difficult problem of automatically predicting Mean Opinion Scores (MOS) for synthesized speech. Given the considerable variability and subjective nature of human auditory perception, robust automatic MOS prediction remains an unsolved problem. The authors investigate the generalization capabilities of various network architectures trained for MOS prediction, highlighting the challenges that arise when these models are applied across diverse listening test contexts.

Key Contributions

The paper employs a rigorous experimental framework using a variety of models, including MOSNet and self-supervised learning frameworks such as wav2vec2, to assess their capacity to predict MOS under different conditions, particularly on out-of-domain data. The researchers leverage datasets from diverse listening tests, some containing new speakers, systems, listeners, and texts, to stress-test the generalization ability of MOS predictors.
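The summary does not detail the predictor architecture, but a common design for SSL-based MOS predictors is to mean-pool frame-level features into one utterance vector and regress a scalar score. The sketch below is a minimal numpy stand-in under that assumption: random arrays play the role of wav2vec2 features, and a closed-form least-squares linear head replaces gradient-based fine-tuning.

```python
import numpy as np

# Illustrative stand-in: random arrays replace real wav2vec2 frame-level
# features, and a least-squares linear head replaces gradient fine-tuning.
# All names, shapes, and dimensions here are assumptions for illustration.
rng = np.random.default_rng(0)

def mean_pool(frames):
    """Average frame-level features (T, D) into one utterance vector (D,)."""
    return frames.mean(axis=0)

# 20 "utterances" with varying frame counts and 8-dim features
feats = [rng.normal(size=(rng.integers(50, 100), 8)) for _ in range(20)]
mos = rng.uniform(1.0, 5.0, size=20)          # ground-truth MOS labels

X = np.stack([mean_pool(f) for f in feats])   # (20, 8) utterance vectors
A = np.c_[X, np.ones(len(X))]                 # append a bias column
w, *_ = np.linalg.lstsq(A, mos, rcond=None)   # "train" the linear head
pred = A @ w                                  # utterance-level predictions
```

In the actual paper the SSL encoder itself is fine-tuned end-to-end rather than frozen; the sketch only illustrates the pooling-plus-regression shape of the task.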

Experimental Methodology

In the paper, the authors investigate several models trained and fine-tuned on a comprehensive in-domain dataset (BVCC), comprising a variety of existing speech synthesis samples. They further test the models on out-of-domain datasets collected from previous listening tests, each varying in language, sample diversity, and listener demographics. The evaluation metrics employed include mean squared error (MSE), linear correlation coefficient (LCC), Spearman rank correlation coefficient (SRCC), and Kendall Tau rank correlation (KTAU).

Significant Findings

  1. Model Performance: Fine-tuned self-supervised models (wav2vec2 and HuBERT) demonstrated strong performance in the MOS prediction task. Notably, wav2vec2 models exhibited good generalization capabilities and strong correlation metrics even in zero-shot scenarios, with the best results when fine-tuned on in-domain data.
  2. Challenges with Unseen Systems: Unseen systems posed significant challenges across the datasets. This was especially pronounced for the ASV2019 dataset, where individual utterances often have only a single rater, making ratings highly variable and generalization to unseen systems particularly difficult.
  3. Data Augmentation: The paper reports improvements when augmenting data with speed and silence transformations during model training, particularly for the MOSNet-based architectures.
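The summary does not specify the exact transformations used, so the following is a hedged numpy sketch of one plausible interpretation of speed and silence augmentation: changing playback rate by linear-interpolation resampling, and padding the waveform with leading and trailing silence.

```python
import numpy as np

SR = 16000  # assumed sample rate, not stated in the summary

def speed_augment(wav, rate):
    """Naive speed change by linear-interpolation resampling.

    rate > 1 speeds the audio up (fewer samples); rate < 1 slows it down.
    A real pipeline would more likely use sox or torchaudio resampling.
    """
    n_out = int(round(len(wav) / rate))
    idx = np.linspace(0, len(wav) - 1, n_out)
    return np.interp(idx, np.arange(len(wav)), wav)

def silence_augment(wav, pad_ms, sr=SR):
    """Pad the waveform with pad_ms milliseconds of silence on each side."""
    pad = np.zeros(int(sr * pad_ms / 1000))
    return np.concatenate([pad, wav, pad])

wav = np.random.default_rng(1).normal(size=SR)  # 1 second of noise as a stand-in
fast = speed_augment(wav, 1.1)                  # ~0.91 s after speed-up
padded = silence_augment(wav, 100)              # +100 ms silence on each side
```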

Implications and Future Directions

The implications of this research are twofold. Practically, it showcases how fine-tuning self-supervised models on smaller, task-specific datasets can yield robust MOS predictions, potentially streamlining the evaluation process for speech synthesis systems. Theoretically, it sets a foundation for further exploratory work into model architectures and datasets that capture the nuances of human auditory perception better.

Moving forward, research could benefit from addressing the inherent difficulty in predicting MOS for unseen systems by examining more sophisticated modeling techniques or leveraging additional linguistic and contextual features. Moreover, exploring better domain adaptation strategies could improve generalization in broader contexts, fostering advancements in AI-driven speech evaluation technologies.

The authors have significantly advanced the understanding of how MOS prediction networks can be trained for better generalization, laying groundwork upon which future innovations can be built.
