Interpretable Embeddings of Speech Enhance and Explain Brain Encoding Performance of Audio Models (2507.16080v1)
Abstract: Self-supervised speech models (SSMs) are increasingly hailed as more powerful computational models of human speech perception than models based on traditional hand-crafted features. However, since their representations are inherently black-box, it remains unclear what drives their alignment with brain responses. To remedy this, we built linear encoding models from six interpretable feature families (mel-spectrogram, Gabor filter bank features, speech presence, phonetic, syntactic, and semantic Question-Answering features) and from contextualized embeddings of three state-of-the-art SSMs (Whisper, HuBERT, WavLM), quantifying the shared and unique neural variance captured by each feature class. Contrary to prevailing assumptions, our interpretable model predicted electrocorticography (ECoG) responses to speech more accurately than any SSM. Moreover, augmenting SSM representations with interpretable features yielded the best overall neural predictions, significantly outperforming either class alone. Further variance-partitioning analyses revealed previously unresolved components of SSM representations that contribute to their neural alignment: (1) despite the common assumption that later layers of SSMs discard low-level acoustic information, these models compress and preferentially retain frequency bands critical for neural encoding of speech (100-1000 Hz); (2) contrary to previous claims, SSMs encode brain-relevant semantic information that cannot be reduced to lower-level features, improving with context length and model size. These results highlight the importance of using refined, interpretable features in understanding speech perception.
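The abstract describes fitting linear encoding models to each feature class and then partitioning the shared and unique neural variance they explain. The sketch below illustrates one common way such an analysis is set up; it is a minimal, hypothetical example, not the paper's actual pipeline. The feature matrices, electrode responses, ridge hyperparameters, and the `encoding_r2` helper are all illustrative stand-ins.

```python
# Minimal sketch of ridge encoding models plus variance partitioning across two
# feature classes (interpretable features vs. SSM embeddings). All data here are
# toy placeholders; the paper's exact features, splits, and regularization differ.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

def encoding_r2(X, Y, alphas=np.logspace(-2, 5, 15), test_size=0.2, seed=0):
    """Fit a cross-validated ridge encoding model; return held-out R^2 per electrode."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=test_size, random_state=seed)
    model = RidgeCV(alphas=alphas).fit(X_tr, Y_tr)
    Y_hat = model.predict(X_te)
    ss_res = ((Y_te - Y_hat) ** 2).sum(axis=0)
    ss_tot = ((Y_te - Y_te.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot  # R^2 for each electrode (column of Y)

# Toy stand-ins: time points x feature dimensions, and time points x electrodes.
rng = np.random.default_rng(0)
n_t = 2000
X_interp = rng.standard_normal((n_t, 60))    # e.g. mel-spectrogram + phonetic + QA features
X_ssm = rng.standard_normal((n_t, 128))      # e.g. a (reduced) SSM layer embedding
Y = rng.standard_normal((n_t, 32))           # e.g. ECoG responses (toy values)

r2_interp = encoding_r2(X_interp, Y)
r2_ssm = encoding_r2(X_ssm, Y)
r2_joint = encoding_r2(np.hstack([X_interp, X_ssm]), Y)

# Variance partitioning: unique and shared explained variance per electrode.
unique_interp = r2_joint - r2_ssm            # variance only the interpretable features explain
unique_ssm = r2_joint - r2_interp            # variance only the SSM embedding explains
shared = r2_interp + r2_ssm - r2_joint       # variance captured by both feature classes
```

In this formulation, the "augmented" model in the abstract corresponds to the joint model on the concatenated features, and the unique/shared terms are the quantities a variance-partitioning analysis reports per electrode.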