Encoding of Lexical Tone in Self-Supervised Models of Spoken Language
Introduction
Recent advances in self-supervised spoken language models (SLMs) have demonstrated that these models can encode a rich variety of linguistic information across different levels of human speech without requiring labeled data. However, much of this research has concentrated on segmental features such as phonemes, with less attention given to how SLMs encode suprasegmental phonology such as tone and stress. This paper focuses on lexical tone, a suprasegmental feature present in over half of the world's languages, using Mandarin and Vietnamese as case studies. It investigates the extent to which SLMs encode lexical tone, the impact of supervised fine-tuning on this encoding, and whether SLMs exhibit perceptual patterns similar to those of native and non-native human listeners.
Tone in Language
Lexical tone distinguishes word meaning in many languages, primarily through pitch cues (fundamental frequency, or F0, contours), sometimes supplemented by other cues such as voice quality or amplitude. This paper emphasizes pitch cues while acknowledging the role of these other cues in tone perception. It focuses on Mandarin and Vietnamese because of their extensive use of tonal contrast: Mandarin has four lexical tones (plus a neutral tone), while Vietnamese has six, or up to eight under some analyses. Understanding how SLMs encode such tonal information is important for improving speech recognition and synthesis systems for tonal languages.
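To make the pitch cue concrete, here is a minimal sketch of extracting an F0 contour from a recording of a single syllable. It assumes the librosa library and a hypothetical file `syllable.wav`; pYIN is just one of several possible F0 estimators, not necessarily the one used in the paper.

```python
# Minimal sketch: extract an F0 contour with librosa's pYIN estimator.
# "syllable.wav" is a hypothetical mono recording of one tonal syllable.
import librosa
import numpy as np

y, sr = librosa.load("syllable.wav", sr=16000)

# pyin returns a per-frame F0 estimate (NaN where unvoiced) plus
# a boolean voicing flag and per-frame voicing probabilities.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),  # ~65 Hz, below typical speech F0
    fmax=librosa.note_to_hz("C6"),  # ~1047 Hz, above typical speech F0
    sr=sr,
)

# Keep only voiced frames; this contour is the primary acoustic
# cue to lexical tone.
contour = f0[voiced_flag]
print(f"{len(contour)} voiced frames, mean F0 = {np.nanmean(contour):.1f} Hz")
```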
Related Work
The analysis of SLMs, particularly those based on transformer architectures, has been gaining traction, and these models have been shown to encode a variety of linguistic information. However, there is little research on how they treat suprasegmental features such as tone. Meanwhile, studies in psycholinguistics and language development have explored how humans perceive and process these features, offering valuable reference points for interpreting SLM behavior. Studies on automatic tone classification in Mandarin, for instance, have achieved high accuracy with deep learning models, suggesting that SLMs could handle tone classification tasks effectively.
Methodology
This study takes several wav2vec2-based models pre-trained on tonal and non-tonal languages and assesses their ability to encode tonal information. Linear probes trained on the hidden-state activations of each layer, using Mandarin and Vietnamese test data, measure the degree of tone encoding across the layers of the SLMs. The study also evaluates how supervised fine-tuning for automatic speech recognition (ASR) affects the models' tone encoding.
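The sketch below illustrates one way such layer-wise linear probing can be set up, assuming the HuggingFace transformers and scikit-learn libraries. The checkpoint name is a real public model, but `waveforms` and `tone_labels` are hypothetical placeholders for the probed dataset, and the paper's exact probing recipe may differ.

```python
# Minimal sketch: layer-wise linear probing of a wav2vec2 model for tone.
import torch
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
model.eval()

def layer_embeddings(waveform, sr=16000):
    """Return one mean-pooled vector per hidden layer for one clip."""
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # out.hidden_states: tuple of (1, frames, dim), one entry per layer
    # (plus the initial feature projection).
    return [h.mean(dim=1).squeeze(0).numpy() for h in out.hidden_states]

# `waveforms` and `tone_labels` are hypothetical: audio clips of tonal
# syllables and their gold tone categories.
per_clip = [layer_embeddings(w) for w in waveforms]
for layer in range(len(per_clip[0])):
    X = [clip[layer] for clip in per_clip]
    X_tr, X_te, y_tr, y_te = train_test_split(X, tone_labels, test_size=0.2)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"layer {layer:2d}: tone accuracy = {probe.score(X_te, y_te):.3f}")
```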
Results
The results indicate that:
- SLMs encode tonal information well regardless of whether they were pre-trained on tonal or non-tonal languages, though models trained on tonal languages generally achieve higher tone classification accuracy, particularly in the higher layers.
- Supervised fine-tuning for ASR enhances the tone encoding of models trained on tonal languages but reduces it for models trained on non-tonal languages, suggesting that fine-tuning pushes models to specialize in the language-specific information needed to transcribe speech into text.
- While SLMs quickly surpass baseline methods in tone and consonant classification accuracy during pre-training, they do not show the differential learning trajectory, with segmental and suprasegmental features acquired on different timescales, observed in human language acquisition.
- SLMs display perceptual patterns similar to those of human listeners in tone and consonant perception experiments, aligning most closely with the difficulties observed in non-native listeners (one way to quantify this alignment is sketched after this list).
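One way to quantify such model-human alignment is to correlate the error patterns (off-diagonal cells) of a model confusion matrix and a human confusion matrix. The sketch below assumes probe predictions and human confusion rates are available; all concrete numbers are illustrative placeholders, not data from the paper.

```python
# Minimal sketch: compare model and human tone-confusion patterns.
# All numbers below are illustrative placeholders, not real data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import confusion_matrix

# Placeholder probe output vs. gold labels (0-3 for Mandarin tones 1-4).
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 0, 1, 2, 2, 2, 3, 1]
model_conf = confusion_matrix(y_true, y_pred, normalize="true")

# Placeholder human confusion rates (rows = presented, cols = reported).
human_conf = np.array([
    [0.90, 0.04, 0.03, 0.03],
    [0.05, 0.75, 0.15, 0.05],
    [0.04, 0.16, 0.74, 0.06],
    [0.06, 0.04, 0.05, 0.85],
])

# Correlate only the off-diagonal cells: the question is whether the
# model and humans confuse the *same* tone pairs, not whether both
# are mostly correct.
mask = ~np.eye(4, dtype=bool)
rho, p = spearmanr(model_conf[mask], human_conf[mask])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```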
Conclusion
The paper demonstrates the robustness of self-supervised spoken language models in encoding lexical tone, showing their potential for handling tonal languages effectively. It highlights how supervised fine-tuning shifts these models' focus toward the language-specific features critical for ASR. While the learning trajectories of SLMs do not fully mimic those observed in human language development, the models' perceptual patterns in tone and consonant perception show intriguing parallels with human listeners. These findings pave the way for future research into the encoding of suprasegmental features across a broader array of languages and suggest that integrating both tonal and non-tonal language data into SLM training could enhance the models' linguistic versatility.