
Ensemble of Task-Specific Language Models for Brain Encoding (2310.15720v2)

Published 24 Oct 2023 in cs.CL and cs.NE

Abstract: LLMs have been shown to be rich enough to encode the fMRI activations of certain regions of interest (ROIs) in the brain. Previous works have explored transfer learning from representations learned for popular natural language processing tasks to predict brain responses. In our work, we improve the performance of such encoders by creating an ensemble model out of 10 popular, task-specific LLMs (2 syntactic and 8 semantic). Our ensembling methods beat the current baselines by 10% on average across all ROIs.
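The abstract describes fitting brain encoders on representations from multiple task-specific language models and combining them into an ensemble. The paper's exact ensembling method is not given here, so the following is only a minimal sketch of the general approach: fit one ridge-regression encoder per model's features and average the predicted voxel responses. All dimensions and the synthetic data are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X^T X + alpha*I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Synthetic stand-ins for stimulus representations from several
# task-specific language models (different feature dimensionalities).
n_train, n_test, n_voxels = 200, 50, 30
feats = [rng.standard_normal((n_train + n_test, d)) for d in (16, 24, 32)]
# Synthetic stand-in for fMRI responses (one column per voxel/ROI feature).
Y = rng.standard_normal((n_train + n_test, n_voxels))

# Fit one encoder per model, then average test-set predictions.
preds = []
for X in feats:
    W = fit_ridge(X[:n_train], Y[:n_train])
    preds.append(X[n_train:] @ W)
ensemble_pred = np.mean(preds, axis=0)  # simple mean ensemble
```

A weighted average (e.g., weights chosen per ROI on held-out data) is a common refinement of this mean ensemble, since different task representations predict different brain regions with different accuracy.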

Authors (3)
  1. Arvindh Arun (7 papers)
  2. Jerrin John (1 paper)
  3. Sanjai Kumaran (1 paper)