Interpretable Embeddings of Speech Enhance and Explain Brain Encoding Performance of Audio Models (2507.16080v1)

Published 21 Jul 2025 in q-bio.NC and cs.SD

Abstract: Self-supervised speech models (SSMs) are increasingly hailed as more powerful computational models of human speech perception than models based on traditional hand-crafted features. However, since their representations are inherently black-box, it remains unclear what drives their alignment with brain responses. To remedy this, we built linear encoding models from six interpretable feature families: mel-spectrogram, Gabor filter bank features, speech presence, phonetic, syntactic, and semantic Question-Answering features, and contextualized embeddings from three state-of-the-art SSMs (Whisper, HuBERT, WavLM), quantifying the shared and unique neural variance captured by each feature class. Contrary to prevailing assumptions, our interpretable model predicted electrocorticography (ECoG) responses to speech more accurately than any SSM. Moreover, augmenting SSM representations with interpretable features yielded the best overall neural predictions, significantly outperforming either class alone. Further variance-partitioning analyses revealed previously unresolved components of SSM representations that contribute to their neural alignment: 1. Despite the common assumption that later layers of SSMs discard low-level acoustic information, these models compress and preferentially retain frequency bands critical for neural encoding of speech (100-1000 Hz). 2. Contrary to previous claims, SSMs encode brain-relevant semantic information that cannot be reduced to lower-level features, improving with context length and model size. These results highlight the importance of using refined, interpretable features in understanding speech perception.
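The variance-partitioning analysis described in the abstract can be illustrated with a small sketch. The following is a minimal, hypothetical example, not the authors' pipeline: it assumes ridge-regression encoding models with 5-fold cross-validation (scikit-learn's RidgeCV), synthetic feature matrices standing in for an interpretable feature set and an SSM embedding, and the standard set-theoretic decomposition of cross-validated R² into shared and unique components.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def encoding_r2(X, y, alphas=np.logspace(-2, 5, 8)):
    """Cross-validated R^2 of a linear (ridge) encoding model
    predicting a neural response y from stimulus features X."""
    model = RidgeCV(alphas=alphas)
    y_hat = cross_val_predict(model, X, y, cv=5)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: T time points, one electrode's response.
rng = np.random.default_rng(0)
T = 1000
X_interp = rng.standard_normal((T, 40))   # stand-in for mel-spectrogram features
X_ssm = rng.standard_normal((T, 768))     # stand-in for an SSM layer embedding
y = X_interp @ rng.standard_normal(40) + 0.5 * rng.standard_normal(T)

r2_a = encoding_r2(X_interp, y)                       # interpretable features alone
r2_b = encoding_r2(X_ssm, y)                          # SSM embedding alone
r2_ab = encoding_r2(np.hstack([X_interp, X_ssm]), y)  # joint (augmented) model

shared = r2_a + r2_b - r2_ab   # variance captured by both feature classes
unique_a = r2_ab - r2_b        # unique to interpretable features
unique_b = r2_ab - r2_a        # unique to the SSM embedding
print(f"shared={shared:.3f}, unique_interp={unique_a:.3f}, unique_ssm={unique_b:.3f}")
```

Under this decomposition, the paper's headline findings correspond to the joint model's R² exceeding either single-feature model, and to a nonzero unique SSM component that survives after the interpretable features are accounted for.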
