Interpretability research has shown that self-supervised Spoken Language Models (SLMs) encode a wide variety of features of human speech, ranging across the acoustic, phonetic, phonological, syntactic, and semantic levels, as well as speaker characteristics. The bulk of prior research on representations of phonology has focused on segmental features such as phonemes; the encoding of suprasegmental phonology (such as tone and stress patterns) in SLMs is not yet well understood. Tone is a suprasegmental feature that is present in more than half of the world's languages. This paper aims to analyze the tone encoding capabilities of SLMs, using Mandarin and Vietnamese as case studies. We show that SLMs encode lexical tone to a significant degree even when they are trained on data from non-tonal languages. We further find that SLMs behave similarly to native and non-native human participants in tone and consonant perception studies, but they do not follow the same developmental trajectory.
The study examines how self-supervised Spoken Language Models (SLMs) encode lexical tone, using Mandarin and Vietnamese as examples.
It analyzes the impact of supervised fine-tuning on SLMs' ability to encode tone and compares their perceptual patterns to those of human listeners.
Findings suggest SLMs are capable of encoding tonal information well, with fine-tuning improving tone handling in tonal languages but not in non-tonal ones.
SLMs' patterns in tone and consonant perception parallel those of native and non-native human listeners, although the models do not follow the same developmental trajectory; these parallels suggest potential for improved speech recognition in tonal languages.
Recent advancements in self-supervised Spoken Language Models (SLMs) have demonstrated these models' ability to encode a rich variety of linguistic information across different levels of human speech without requiring labeled data. However, much of this research has concentrated on segmental features such as phonemes, with less attention given to how SLMs encode suprasegmental phonology like tone and stress patterns. This study focuses on lexical tone, a vital suprasegmental feature present in over half of the world's languages, using Mandarin and Vietnamese as case studies. The paper investigates the extent to which SLMs encode lexical tone, the impact of supervised fine-tuning on this process, and whether SLMs exhibit perceptual patterns similar to those of native and non-native human listeners.
Lexical tone significantly influences meaning in many languages, employing pitch cues (i.e., fundamental-frequency (F0) contours) and sometimes other cues like voice quality or amplitude. This paper emphasizes pitch cues while acknowledging the role of other cues in tone perception. The study focuses on Mandarin and Vietnamese due to their extensive use of tonal contrast, with Mandarin employing four primary tones and Vietnamese utilizing up to eight. Understanding how SLMs encode such tonal information is crucial for improving speech recognition and synthesis systems in tonal languages.
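To make the pitch cue concrete, here is a minimal autocorrelation-based F0 estimator, run on a synthetic frame. This is a simplified sketch, not the paper's analysis pipeline; real tone studies would use a robust tracker (e.g., pYIN) on actual speech, and the frame length and search range below are illustrative choices.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=50.0, fmax=500.0):
    """Estimate F0 of one frame via autocorrelation (simplified sketch)."""
    frame = frame - frame.mean()
    # One-sided autocorrelation: ac[k] correlates the frame with itself at lag k.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search only lags corresponding to plausible speech F0 values.
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 16000
t = np.arange(0, 0.03, 1 / sr)            # one 30 ms frame
frame = np.sin(2 * np.pi * 220.0 * t)     # synthetic 220 Hz "vowel"
print(f"{estimate_f0(frame, sr):.1f} Hz")  # close to 220 Hz
```

Tracking such an estimate frame by frame yields the F0 contour that distinguishes, for example, Mandarin's rising second tone from its falling fourth tone.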
The analysis of SLMs, particularly those based on transformer architectures, has been gaining traction. These models have been shown to encode a variety of linguistic information. However, there is limited research on their treatment of suprasegmental features like tone. Additionally, studies in psycholinguistics and language development have explored how humans perceive and process such features, offering valuable insights for interpreting SLM behavior. Studies on automatic classification of tones in Mandarin, for instance, have achieved significant accuracy using deep learning models, suggesting the potential for SLMs to effectively handle tone classification tasks.
This research employs several wav2vec2-based models pre-trained on tonal and non-tonal languages to assess their capacity to encode tonal information. Using a linear probing approach on the hidden-state activations of these models over Mandarin and Vietnamese test data, the study measures the degree of tone encoding in each layer of the SLMs. It also evaluates the impact of supervised fine-tuning for Automatic Speech Recognition (ASR) on the models' tone-encoding capacity.
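A linear probe of the kind described can be sketched as follows. The random features below merely stand in for real layer activations (which in practice would come from a wav2vec2 model, e.g. via HuggingFace's `Wav2Vec2Model` with `output_hidden_states=True`), and the four-way tone labels are synthetic; the probe itself is a plain multinomial logistic regression trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frame-averaged hidden states from one SLM layer:
# 4 tone classes (cf. Mandarin's four primary tones), 768-dim features
# (the hidden size of wav2vec2-base), 200 examples per class.
n_per_class, dim, n_classes = 200, 768, 4
means = rng.normal(0, 1, size=(n_classes, dim))
X = np.concatenate([m + rng.normal(0, 2.0, (n_per_class, dim)) for m in means])
y = np.repeat(np.arange(n_classes), n_per_class)

def train_linear_probe(X, y, n_classes, lr=0.1, epochs=200):
    """Multinomial logistic-regression probe trained by batch gradient descent."""
    W = np.zeros((X.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - onehot) / len(X)        # cross-entropy gradient
    return W

W = train_linear_probe(X, y, n_classes)
acc = (np.argmax(X @ W, axis=1) == y).mean()
print(f"probe accuracy: {acc:.2f}")
```

Because the probe is linear, high accuracy on a given layer's activations indicates that tone identity is linearly decodable from that layer; repeating this per layer traces where in the network tonal information is most accessible.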
The results indicate that:

- SLMs encode lexical tone to a significant degree, even when trained exclusively on data from non-tonal languages.
- Supervised fine-tuning for ASR modulates tone encoding: fine-tuning on a tonal language improves the handling of tone, while fine-tuning on a non-tonal language does not.
- SLMs' patterns in tone and consonant perception resemble those of native and non-native human listeners, but the models do not follow the same developmental trajectory as human learners.
The study elucidates the robustness of self-supervised spoken language models in encoding lexical tone information, demonstrating their potential in handling tonal languages effectively. It highlights the influence of supervised fine-tuning in modulating these models' focus towards language-specific features critical in ASR tasks. While the learning trajectories of SLMs do not fully mimic those observed in human language development, the models' perceptual patterns in tone and consonant perception show intriguing parallels with human listeners. These findings pave the way for future research into the encoding of suprasegmental features across a broader array of languages and suggest the importance of integrating both tonal and non-tonal language data in training SLMs to enhance their linguistic versatility.