HAM-TTS: Hierarchical Acoustic Modeling for Token-Based Zero-Shot Text-to-Speech with Model and Data Scaling (2403.05989v1)

Published 9 Mar 2024 in cs.SD and eess.AS

Abstract: Token-based text-to-speech (TTS) models have emerged as a promising avenue for generating natural and realistic speech, yet they grapple with low pronunciation accuracy, speaking style and timbre inconsistency, and a substantial need for diverse training data. In response, we introduce a novel hierarchical acoustic modeling approach complemented by a tailored data augmentation strategy and train it on the combination of real and synthetic data, scaling the data size up to 650k hours, leading to the zero-shot TTS model with 0.8B parameters. Specifically, our method incorporates a latent variable sequence containing supplementary acoustic information based on refined self-supervised learning (SSL) discrete units into the TTS model by a predictor. This significantly mitigates pronunciation errors and style mutations in synthesized speech. During training, we strategically replace and duplicate segments of the data to enhance timbre uniformity. Moreover, a pretrained few-shot voice conversion model is utilized to generate a plethora of voices with identical content yet varied timbres. This facilitates the explicit learning of utterance-level one-to-many mappings, enriching speech diversity and also ensuring consistency in timbre. Comparative experiments (Demo page: https://anonymous.4open.science/w/ham-tts/) demonstrate our model's superiority over VALL-E in pronunciation precision and maintaining speaking style, as well as timbre continuity.

Hierarchical Acoustic Modeling for Enhanced Text-to-Speech Synthesis

Introduction

The quest to enhance the quality and realism of synthesized speech has led to numerous advancements in text-to-speech (TTS) technologies. Among these, token-based TTS models hold significant promise for producing high-quality, natural speech. However, challenges persist: low pronunciation accuracy, inconsistencies in speaking style and timbre, and the demand for extensive and diverse training data. To address them, the paper proposes Hierarchical Acoustic Modeling for Token-Based Zero-Shot Text-to-Speech with Model and Data Scaling (HAM-TTS). The method pairs a hierarchical acoustic modeling framework with a tailored data augmentation strategy and is trained on up to 650k hours of combined real and synthetic data, yielding a 0.8B-parameter zero-shot model that significantly improves pronunciation accuracy while maintaining style and timbre consistency.

Hierarchical Acoustic Modeling in HAM-TTS

The hierarchical acoustic modeling (HAM) method at the core of HAM-TTS integrates a Latent Variable Sequence (LVS) containing supplementary acoustic information derived from refined self-supervised learning discrete units. This integration is accomplished by employing a predictor within the TTS model. The primary advancements brought forth by HAM include:

  • Pronunciation Accuracy: The incorporation of LVS significantly diminishes pronunciation errors by providing crucial acoustic cues.
  • Speaking Style Consistency: K-Means clustering is used to refine HuBERT features and strip away personalized (speaker-specific) information, keeping the speaking style consistent with the audio prompt (see the sketch after this list).
  • Timbre Consistency: The novel data augmentation strategy developed for HAM-TTS aids in enhancing the uniformity of timbre across synthesized speech.
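
A minimal sketch of how refined SSL discrete units of this kind can be obtained, assuming a pretrained HuBERT encoder from torchaudio and scikit-learn K-Means; the layer index and cluster count below are illustrative placeholders rather than the paper's actual settings:

```python
import torch
import torchaudio
from sklearn.cluster import KMeans

# Pretrained HuBERT from torchaudio (assumption: base model, 16 kHz mono input).
bundle = torchaudio.pipelines.HUBERT_BASE
hubert = bundle.get_model().eval()

def extract_features(waveform: torch.Tensor, layer: int = 9) -> torch.Tensor:
    """Frame-level features from an intermediate HuBERT layer; returns (frames, dim)."""
    with torch.no_grad():
        feats, _ = hubert.extract_features(waveform, num_layers=layer)
    return feats[-1].squeeze(0)

def fit_kmeans(feature_bank: torch.Tensor, n_clusters: int = 1024) -> KMeans:
    """Fit K-Means on features pooled from a training subset (cluster count is an assumption)."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit(feature_bank.numpy())

def to_discrete_units(waveform: torch.Tensor, km: KMeans) -> torch.Tensor:
    """Quantize frame features to cluster IDs. The quantization discards much
    speaker-specific detail, leaving mostly content/pronunciation information
    that a predictor can map into the latent variable sequence."""
    feats = extract_features(waveform)
    return torch.from_numpy(km.predict(feats.numpy()))
```
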

Training with Real and Synthetic Data

An innovative aspect of HAM-TTS is its training on a blend of real and synthetic data. This combination enriches the diversity of speech samples and improves the model's ability to maintain timbre consistency and speaking style. A pretrained few-shot voice conversion model is used to generate many voices with identical content but varied timbres, supplying the model with explicit utterance-level one-to-many mappings and a richer set of training samples.
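
A hedged sketch of this one-to-many augmentation step; `vc_model.convert`, the manifest layout, the number of target speakers per utterance, and the sample rate are hypothetical placeholders, since the paper's voice conversion interface is not specified in this summary:

```python
import random
from pathlib import Path
import soundfile as sf

def augment_with_vc(utterances, reference_speakers, vc_model,
                    voices_per_utt=4, out_dir="synthetic", sample_rate=16000):
    """For each real utterance, synthesize several copies with identical content
    but different timbres, so the TTS model sees explicit utterance-level
    one-to-many (text -> voices) mappings during training."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    manifest = []
    for utt_path, transcript in utterances:
        # Pick several reference speakers to clone the same content into.
        targets = random.sample(reference_speakers, k=voices_per_utt)
        for spk_id, ref_wav in targets:
            # Hypothetical few-shot VC call: returns the converted waveform as a numpy array.
            converted = vc_model.convert(source_wav=utt_path, reference_wav=ref_wav)
            dst = out / f"{Path(utt_path).stem}_{spk_id}.wav"
            sf.write(dst, converted, sample_rate)
            # Same text paired with several timbres -> richer, more diverse training data.
            manifest.append({"audio": str(dst), "text": transcript, "speaker": spk_id})
    return manifest
```
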

Experimental Evaluation

Extensive comparative experiments were conducted to evaluate the performance of HAM-TTS against VALL-E, a state-of-the-art baseline model. Notably, HAM-TTS demonstrated superior pronunciation precision and maintained speaking style as well as timbre continuity in various zero-shot scenarios. The results underscore the efficacy of hierarchical acoustic modeling, refined feature processing through K-Means clustering, and the strategic use of synthetic data in enhancing TTS synthesis quality.
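
For context, a sketch of two metrics commonly used in such comparisons: word error rate for pronunciation accuracy and speaker-embedding cosine similarity for timbre consistency. Here `asr_transcribe` and `speaker_embedding` are hypothetical stand-ins for an ASR system and a speaker-verification encoder, not the paper's exact toolchain:

```python
import numpy as np
import jiwer

def pronunciation_wer(reference_texts, synthesized_wavs, asr_transcribe):
    """Word error rate of ASR transcripts of the synthesized audio; lower is better."""
    hypotheses = [asr_transcribe(wav) for wav in synthesized_wavs]
    return jiwer.wer(reference_texts, hypotheses)

def timbre_similarity(prompt_wav, synthesized_wav, speaker_embedding):
    """Cosine similarity between speaker embeddings of the audio prompt and the
    synthesized speech; higher indicates more consistent timbre."""
    a = speaker_embedding(prompt_wav)
    b = speaker_embedding(synthesized_wav)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```
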

Conclusion and Future Developments

The introduction of HAM-TTS and its hierarchical acoustic modeling approach marks a significant step forward in text-to-speech synthesis. The integration of supplementary acoustic information via the LVS and the data augmentation strategies substantially reduce pronunciation errors while keeping speaking style and timbre consistent. Looking ahead, future research could explore the optimal proportion of synthetic data relative to speaker diversity and per-speaker speech duration. Improving inference speed could further broaden the model's applicability to real-time scenarios.

Ethical Considerations and Limitations

While HAM-TTS advances the capabilities of text-to-speech systems, it also raises ethical implications, particularly around potential misuse and privacy. The generation of synthetic training data, though innovative, raises questions of authenticity and consent in voice mimicry. Moreover, the scalability and practical applications of HAM-TTS call for ongoing assessment to mitigate potential biases and promote equitable technology development.

In summary, HAM-TTS embodies a robust methodology for text-to-speech synthesis, fostering advancements that could transform interactive technologies and digital communication platforms.

References (51)
  1. The K-Means Algorithm: A Comprehensive Survey And Performance Evaluation. Electronics, 9(8):1295.
  2. AudioLM: A Language Modeling Approach to Audio Generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:2523–2533.
  3. Language Models Are Few-Shot Learners. Advances in neural information processing systems, 33:1877–1901.
  4. AISHELL-1: An Open-Source Mandarin Speech Corpus and A Speech Recognition Baseline. In 2017 20th conference of the oriental chapter of the international coordinating committee on speech databases and speech I/O systems and assessment (O-COCOSDA), pages 1–5. IEEE.
  5. High Fidelity Neural Audio Compression. arXiv preprint arXiv:2210.13438.
  6. NICE: Non-linear Independent Components Estimation.
  7. Generative Adversarial Networks. Advances in neural information processing systems, 27.
  8. Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778.
  9. Denoising Diffusion Probabilistic Models. Advances in neural information processing systems, 33:6840–6851.
  10. Hubert: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451–3460.
  11. UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation. In Interspeech.
  12. Mega-TTS 2: Zero-Shot Text-to-Speech with Arbitrary Length Speech Prompts. arXiv preprint arXiv:2307.07218.
  13. Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search. Advances in Neural Information Processing Systems, 33:8067–8077.
  14. Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech. In International Conference on Machine Learning, pages 5530–5540. PMLR.
  15. Diederik Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR).
  16. Diederik P. Kingma and Max Welling. 2014. Auto-Encoding Variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014.
  17. HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis. Advances in Neural Information Processing Systems, 33:17022–17033.
  18. DiffWave: A Versatile Diffusion Model for Audio Synthesis. In 9th International Conference on Learning Representations, ICLR 2021.
  19. Multi-Language Multi-Speaker Acoustic Modeling for LSTM-RNN Based Statistical Parametric Speech Synthesis. In Interspeech.
  20. BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. In Proceedings of the 40th International Conference on Machine Learning (ICML) 2023. JMLR.
  21. Neural Speech Synthesis with Transformer Network. Proceedings of the AAAI Conference on Artificial Intelligence, page 6706–6713.
  22. Ilya Loshchilov and Frank Hutter. 2017. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations.
  23. XiaoiceSing: A High-Quality and Integrated Singing Voice Synthesis System. In Interspeech.
  24. Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech. In International Conference on Machine Learning, pages 8599–8608. PMLR.
  25. Robust Speech Recognition via Large-Scale Weak Supervision. In Proceedings of the 40th International Conference on Machine Learning. JMLR.org.
  26. Language Models Are Unsupervised Multitask Learners. OpenAI blog, 1(8):9.
  27. FastSpeech 2: Fast and High-Quality End-to-End Text to Speech. In 9th International Conference on Learning Representations, ICLR 2021.
  28. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer.
  29. NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers. arXiv preprint arXiv:2304.09116.
  30. ELLA-V: Stable Neural Codec Language Modeling with Alignment-guided Sequence Reordering. arXiv preprint arXiv:2401.07333.
  31. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res., 15:1929–1958.
  32. A Survey on Neural Speech Synthesis. arXiv preprint arXiv:2106.15561.
  33. Speech Synthesis Based on Hidden Markov Models. Proceedings of the IEEE, 101(5):1234–1252.
  34. LLaMA: Open and Efficient Foundation Language Models. arXiv preprint arXiv:2302.13971.
  35. LLaMA 2: Open Foundation and Fine-Tuned Chat Models. arXiv preprint arXiv:2307.09288.
  36. WaveNet: A Generative Model for Raw Audio. In The 9th ISCA Speech Synthesis Workshop, page 125.
  37. Neural Discrete Representation Learning. Advances in neural information processing systems, 30.
  38. Attention Is All You Need. Advances in neural information processing systems, 30.
  39. Audiobox: Unified Audio Generation with Natural Language Prompts. arXiv preprint arXiv:2312.15821.
  40. Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers. arXiv preprint arXiv:2301.02111.
  41. HiFi-WaveGAN: Generative Adversarial Network with Auxiliary Spectrogram-Phase Loss for High-Fidelity Singing Voice Generation. arXiv preprint arXiv:2210.12740.
  42. XiaoiceSing 2: A High-Fidelity Singing Voice Synthesizer Based on Generative Adversarial Network. In Proc. Interspeech 2023, pages 5401–5405.
  43. LauraGPT: Listen, attend, understand, and regenerate audio with GPT. arXiv preprint arXiv:2310.04673.
  44. Crosssinger: A Cross-Lingual Multi-Singer High-Fidelity Singing Voice Synthesizer Trained on Monolingual Singers. In 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 1–6. IEEE.
  45. Tacotron: Towards End-to-End Speech Synthesis. In Interspeech.
  46. ESPnet: End-to-end speech processing toolkit. In Proceedings of Interspeech, pages 2207–2211.
  47. InstructTTS: Modelling Expressive TTS in Discrete Latent Space with Natural Language Style Prompt. arXiv preprint arXiv:2301.13662.
  48. HiFi-Codec: Group-residual Vector quantization for High Fidelity Audio Codec. arXiv preprint arXiv:2305.02765.
  49. SoundStream: An End-to-End Neural Audio Codec. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:495–507.
  50. Biao Zhang and Rico Sennrich. 2019. Root Mean Square Layer Normalization. Advances in Neural Information Processing Systems, 32.
  51. SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities. arXiv preprint arXiv:2305.11000.
Authors (9)
  1. Chunhui Wang (16 papers)
  2. Chang Zeng (18 papers)
  3. Bowen Zhang (161 papers)
  4. Ziyang Ma (73 papers)
  5. Yefan Zhu (1 paper)
  6. Zifeng Cai (2 papers)
  7. Jian Zhao (218 papers)
  8. Zhonglin Jiang (11 papers)
  9. Yong Chen (299 papers)