Revealing Vision-Language Integration in the Brain with Multimodal Networks (2406.14481v1)

Published 20 Jun 2024 in cs.LG, cs.AI, cs.NE, and q-bio.NC

Abstract: We use (multi)modal deep neural networks (DNNs) to probe for sites of multimodal integration in the human brain by predicting stereoencephalography (SEEG) recordings taken while human subjects watched movies. We operationalize sites of multimodal integration as regions where a multimodal vision-language model predicts recordings better than unimodal language, unimodal vision, or linearly-integrated language-vision models. Our target DNN models span different architectures (e.g., convolutional networks and transformers) and multimodal training techniques (e.g., cross-attention and contrastive learning). As a key enabling step, we first demonstrate that trained vision and language models systematically outperform their randomly initialized counterparts in their ability to predict SEEG signals. We then compare unimodal and multimodal models against one another. Because our target DNN models often have different architectures, numbers of parameters, and training sets (possibly obscuring those differences attributable to integration), we carry out a controlled comparison of two models (SLIP and SimCLR), which keep all of these attributes the same aside from input modality. Using this approach, we identify a sizable number of neural sites (on average 141 out of 1090 total sites, or 12.94%) and brain regions where multimodal integration seems to occur. Additionally, we find that among the variants of multimodal training techniques we assess, CLIP-style training is the best suited for downstream prediction of the neural activity in these sites.
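The abstract's operational definition lends itself to a compact illustration. Below is a minimal sketch of the kind of encoding analysis it describes: fit a regularized linear map from a model's features to one electrode's activity, score it by held-out correlation, and flag the electrode as an integration site only if the multimodal features beat every unimodal and linearly-combined baseline. The choice of ridge regression, the cross-validation scheme, and the names `encoding_score` and `is_integration_site` are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def encoding_score(features, seeg, n_splits=5):
    """Cross-validated Pearson r between predicted and actual activity
    at one SEEG electrode, using a ridge encoding model whose
    regularization strength is chosen internally by RidgeCV.

    features: (time points, feature dims) array of DNN embeddings
    seeg:     (time points,) response of a single electrode
    """
    scores = []
    for train, test in KFold(n_splits, shuffle=False).split(features):
        model = RidgeCV(alphas=np.logspace(-2, 6, 9))
        model.fit(features[train], seeg[train])
        pred = model.predict(features[test])
        scores.append(np.corrcoef(pred, seeg[test])[0, 1])
    return float(np.mean(scores))

def is_integration_site(f_multi, f_vision, f_language, seeg):
    """Apply the abstract's criterion at one electrode: the multimodal
    features (e.g., from SLIP) must out-predict unimodal vision,
    unimodal language, and a linear concatenation of the two."""
    r_multi = encoding_score(f_multi, seeg)
    r_best_baseline = max(
        encoding_score(f_vision, seeg),
        encoding_score(f_language, seeg),
        encoding_score(np.hstack([f_vision, f_language]), seeg),
    )
    # A raw comparison for illustration; the actual analysis would
    # back this with a statistical test across folds or bootstraps.
    return r_multi > r_best_baseline
```

Running `is_integration_site` over all electrodes, with the concatenated features standing in for the "linearly-integrated language-vision" baseline, yields the kind of per-site map the abstract summarizes (141 of 1090 sites on average).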

Authors (7)
  1. Vighnesh Subramaniam (6 papers)
  2. Colin Conwell (6 papers)
  3. Christopher Wang (9 papers)
  4. Gabriel Kreiman (45 papers)
  5. Boris Katz (32 papers)
  6. Ignacio Cases (11 papers)
  7. Andrei Barbu (35 papers)
Citations (6)