Describing Semantic Representations of Brain Activity Evoked by Visual Stimuli (1802.02210v1)

Published 19 Jan 2018 in cs.CV

Abstract: Quantitative modeling of human brain activity based on language representations has been actively studied in systems neuroscience. However, previous studies examined word-level representations, and little is known about whether structured sentences can be recovered from brain activity. This study attempts to generate natural language descriptions of semantic content from human brain activity evoked by visual stimuli. To make effective use of the small amount of available brain activity data, our proposed method employs a pre-trained image-captioning network model built with a deep learning framework. To apply brain activity to the image-captioning network, we train regression models that learn the relationship between brain activity and deep-layer image features. The results demonstrate that the proposed model can decode brain activity and generate descriptions as natural language sentences. We also conducted several experiments with data from different subsets of brain regions known to process visual stimuli. The results suggest that the semantic information used for sentence generation is widespread across the entire cortex.
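The decoding pipeline the abstract describes, regressing deep-layer image features from brain activity and then handing those features to a pre-trained captioning network, can be sketched with synthetic data. This is a minimal illustration, not the paper's implementation: the shapes, the ridge penalty, and the random linear "encoding" from features to voxels are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_voxels, n_feat = 150, 50, 300, 64

# Synthetic stand-ins: deep-layer image features for each stimulus,
# and voxel responses generated by a random linear encoding plus noise.
features = rng.standard_normal((n_train + n_test, n_feat))
encoding = rng.standard_normal((n_feat, n_voxels))
voxels = features @ encoding + 0.1 * rng.standard_normal((n_train + n_test, n_voxels))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Decoder: predict image features from held-out brain activity.
W = ridge_fit(voxels[:n_train], features[:n_train])
predicted = voxels[n_train:] @ W

# In the full pipeline, `predicted` would replace the features a
# captioning network normally extracts from the stimulus image.
corr = np.corrcoef(predicted.ravel(), features[n_train:].ravel())[0, 1]
print(f"decoded-feature correlation: {corr:.2f}")
```

The key design point the abstract highlights is data efficiency: only the feature regression is trained on brain data, while the captioning network stays frozen and pre-trained on large image-caption corpora.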

Authors (5)
  1. Eri Matsuo (1 paper)
  2. Ichiro Kobayashi (15 papers)
  3. Shinji Nishimoto (9 papers)
  4. Satoshi Nishida (3 papers)
  5. Hideki Asoh (5 papers)
Citations (12)