Emergent Mind

Encoding of lexical tone in self-supervised models of spoken language

(arXiv:2403.16865)
Published Mar 25, 2024 in cs.CL and eess.AS

Abstract

Interpretability research has shown that self-supervised Spoken Language Models (SLMs) encode a wide variety of features in human speech from the acoustic, phonetic, phonological, syntactic and semantic levels, to speaker characteristics. The bulk of prior research on representations of phonology has focused on segmental features such as phonemes; the encoding of suprasegmental phonology (such as tone and stress patterns) in SLMs is not yet well understood. Tone is a suprasegmental feature that is present in more than half of the world's languages. This paper aims to analyze the tone encoding capabilities of SLMs, using Mandarin and Vietnamese as case studies. We show that SLMs encode lexical tone to a significant degree even when they are trained on data from non-tonal languages. We further find that SLMs behave similarly to native and non-native human participants in tone and consonant perception studies, but they do not follow the same developmental trajectory.

Figure: Accuracy of Mandarin tone classification using models trained on tonal and non-tonal languages.

Overview

  • The study examines how self-supervised Spoken Language Models (SLMs) encode lexical tone, using Mandarin and Vietnamese as examples.

  • It analyzes the impact of supervised fine-tuning on SLMs' ability to encode tone and compares their perceptual patterns to those of human listeners.

  • Findings suggest SLMs encode tonal information well, with fine-tuning improving tone handling for models trained on tonal languages but degrading it for models trained on non-tonal ones.

  • SLMs' perceptual patterns in tone perception align with those of human listeners, particularly non-native listeners, suggesting potential for improved speech recognition in tonal languages.


Introduction

Recent advancements in self-supervised Spoken Language Models (SLMs) have demonstrated these models' ability to encode a rich variety of linguistic information across different levels of human speech without requiring labeled data. However, much of this research has concentrated on segmental features such as phonemes, with less attention given to how SLMs encode suprasegmental phonology like tone and stress patterns. This study focuses on lexical tone, a vital suprasegmental feature present in over half of the world's languages, using Mandarin and Vietnamese as case studies. The paper investigates the extent to which SLMs encode lexical tone, the impact of supervised fine-tuning on this process, and whether SLMs exhibit perceptual patterns similar to those of native and non-native human listeners.

Tone in Language

Lexical tone significantly influences meaning in many languages, employing pitch cues (e.g., fundamental frequency or F0 contours) and sometimes other cues like voice quality or amplitude. This paper emphasizes pitch cues while acknowledging the role of other cues in tone perception. The study focuses on Mandarin and Vietnamese due to their extensive use of tonal contrast, with Mandarin employing four primary tones and Vietnamese utilizing up to eight. Understanding how SLMs encode such tonal information is crucial for improving speech recognition and synthesis systems in tonal languages.
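
The F0 contours mentioned above can be extracted with standard pitch-tracking tools; as a minimal self-contained sketch, the frame-wise autocorrelation method below estimates the pitch track of a synthetic rising contour (roughly like Mandarin Tone 2). The signal, frame sizes, and pitch range are illustrative choices, not the paper's extraction pipeline:

```python
import numpy as np

def estimate_f0(signal, sr, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency of one frame by finding the
    autocorrelation peak within the plausible pitch-period range."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag

# Synthetic "tone": pitch rising from 180 Hz to 260 Hz over 400 ms.
sr = 16000
t = np.arange(0, 0.4, 1 / sr)
f0_contour = np.linspace(180, 260, len(t))
phase = 2 * np.pi * np.cumsum(f0_contour) / sr
y = np.sin(phase)

# Frame-wise F0 track (25 ms frames, 10 ms hop).
frame, hop = int(0.025 * sr), int(0.010 * sr)
track = [estimate_f0(y[i:i + frame], sr)
         for i in range(0, len(y) - frame, hop)]
```

A rising F0 track like this is the primary cue distinguishing, e.g., Mandarin Tone 2 from Tone 4; production-grade trackers (pYIN, Praat's pitch tracker) add voicing detection and octave-error correction that this sketch omits.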

Related Work

The analysis of SLMs, particularly those based on transformer architectures, has been gaining traction. These models have been shown to encode a wide variety of linguistic information, but there is limited research on their treatment of suprasegmental features like tone. Studies in psycholinguistics and language development have explored how humans perceive and process such features, offering valuable reference points for interpreting SLM behavior. Work on automatic classification of Mandarin tones, for instance, has achieved high accuracy with deep learning models, suggesting that SLMs may likewise handle tone classification effectively.
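
The basic idea behind automatic tone classification can be illustrated with a toy sketch: stylized, normalized F0 contours for the four Mandarin tones (T1 high level, T2 rising, T3 dipping, T4 falling) and a nearest-centroid classifier. All shapes, noise levels, and sizes below are illustrative assumptions, not data or methods from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized normalized-F0 templates for the four Mandarin tones.
# These are illustrative shapes, not measured contours.
n = 20
x = np.linspace(0, 1, n)
templates = np.stack([
    np.full(n, 0.8),               # Tone 1: high level
    0.2 + 0.6 * x,                 # Tone 2: rising
    0.5 - 1.6 * x * (1 - x),       # Tone 3: dipping
    0.9 - 0.7 * x,                 # Tone 4: falling
])

def sample(tone, noise=0.05):
    """Draw a noisy contour around the given tone's template."""
    return templates[tone] + rng.normal(0, noise, n)

# Small synthetic training set; fit one mean contour per tone.
X = np.stack([sample(t) for t in range(4) for _ in range(50)])
y = np.repeat(np.arange(4), 50)
centroids = np.stack([X[y == t].mean(axis=0) for t in range(4)])

# Nearest-centroid classification of held-out samples.
test = np.stack([sample(t) for t in range(4) for _ in range(20)])
test_y = np.repeat(np.arange(4), 20)
pred = np.argmin(((test[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
acc = (pred == test_y).mean()
```

With contours this clean, near-perfect accuracy is expected; the deep-learning results cited above are notable precisely because real speech contours are far noisier and context-dependent (e.g. tone sandhi, coarticulation).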

Methodology

This research uses several wav2vec2-based models pre-trained on tonal and non-tonal languages to assess their capability to encode tonal information. Linear probes fitted to the models' hidden-state activations on Mandarin and Vietnamese test data measure the degree of tone encoding at each layer. The study also evaluates how supervised fine-tuning for Automatic Speech Recognition (ASR) affects the models' tonal encoding.
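
The layer-wise linear-probing setup can be sketched as follows. Loading a real wav2vec2 model is beyond a short example, so the hidden states here are synthetic stand-ins: three "layers" whose features carry increasing amounts of tone information. The probe itself (least-squares regression onto one-hot tone labels, accuracy on a held-out split) is a common simple choice, assumed here rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_probe_accuracy(H, y, n_classes, train_frac=0.8):
    """Fit a linear probe (least squares on one-hot labels) to one
    layer's hidden states H and return held-out accuracy."""
    n = len(y)
    idx = rng.permutation(n)
    cut = int(train_frac * n)
    tr, te = idx[:cut], idx[cut:]
    A = np.hstack([H, np.ones((n, 1))])   # append a bias column
    Y = np.eye(n_classes)[y]              # one-hot tone labels
    W, *_ = np.linalg.lstsq(A[tr], Y[tr], rcond=None)
    pred = np.argmax(A[te] @ W, axis=1)
    return (pred == y[te]).mean()

# Synthetic stand-in for real activations: 4 tone classes, 3 "layers"
# with tone information growing from none (s=0) to strong (s=2).
n, d, n_classes = 400, 32, 4
y = rng.integers(0, n_classes, n)
class_dirs = rng.normal(size=(n_classes, d))
noise = rng.normal(size=(n, d))
layers = [noise + s * class_dirs[y] for s in (0.0, 0.5, 2.0)]

accs = [linear_probe_accuracy(H, y, n_classes) for H in layers]
```

Plotting such per-layer accuracies against a majority-class or raw-F0 baseline is the standard way to localize where in the network a feature is encoded.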

Results

The results indicate that:

  • SLMs are adept at encoding tonal information regardless of their training on tonal or non-tonal languages. Models trained on tonal languages generally offer higher tone classification accuracy, particularly in higher layers.
  • Supervised fine-tuning for ASR enhances the tone encoding capabilities of models trained on tonal languages but reduces it for models trained on non-tonal languages. This suggests that fine-tuning encourages models to specialize in language-specific information essential for transcribing speech into text.
  • While SLMs quickly surpass baseline methods in tone and consonant classification accuracy during pre-training, they do not exhibit a differential learning trajectory akin to that of human language acquisition with regard to suprasegmental and segmental features.
  • SLMs display perceptual patterns similar to those of human listeners in tone and consonant perception experiments, aligning especially closely with the challenges seen in non-native listener perceptions.

Conclusion

The study elucidates the robustness of self-supervised spoken language models in encoding lexical tone information, demonstrating their potential in handling tonal languages effectively. It highlights the influence of supervised fine-tuning in modulating these models' focus towards language-specific features critical in ASR tasks. While the learning trajectories of SLMs do not fully mimic those observed in human language development, the models' perceptual patterns in tone and consonant perception show intriguing parallels with human listeners. These findings pave the way for future research into the encoding of suprasegmental features across a broader array of languages and suggest the importance of integrating both tonal and non-tonal language data in training SLMs to enhance their linguistic versatility.

