
What can computational models learn from human selective attention? A review from an audiovisual crossmodal perspective (1909.05654v1)

Published 5 Sep 2019 in cs.CV and cs.LG

Abstract: Selective attention plays an essential role in acquiring and using information from the environment. Over the past 50 years, selective attention has been a central topic of research in cognitive science. Compared with unimodal studies, crossmodal studies are more complex but necessary for solving real-world challenges in both human experiments and computational modeling. Although a growing number of findings on crossmodal selective attention have shed light on humans' behavioral patterns and neural underpinnings, a much better understanding is still needed before computational intelligent agents can reap the same benefits. This article reviews studies of selective attention in unimodal visual, unimodal auditory, and crossmodal audiovisual setups from the multidisciplinary perspectives of psychology and cognitive neuroscience, and evaluates different ways of simulating analogous mechanisms in computational models and robotics. In this interdisciplinary review, we discuss the gaps between these fields and offer insights into how psychological findings and theories can be applied to artificial intelligence from different perspectives.
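
To make the idea of "simulating analogous mechanisms in computational models" concrete, here is a minimal, illustrative sketch that is not taken from the paper: one common way to model crossmodal audiovisual attention is to fuse per-modality saliency maps, weighting each modality by an estimated reliability before selecting a single attended location. The function names, the softmax weighting scheme, and the grid representation are assumptions made purely for illustration.

```python
import numpy as np

def fuse_saliency(visual: np.ndarray, auditory: np.ndarray,
                  visual_reliability: float, auditory_reliability: float) -> np.ndarray:
    """Combine two same-shaped saliency maps into a crossmodal map,
    weighting each modality by a softmax over its reliability estimate."""
    weights = np.array([visual_reliability, auditory_reliability], dtype=float)
    weights = np.exp(weights) / np.exp(weights).sum()
    return weights[0] * visual + weights[1] * auditory

def attended_location(crossmodal: np.ndarray) -> tuple:
    """Winner-take-all selection: attend to the peak of the fused map."""
    return np.unravel_index(np.argmax(crossmodal), crossmodal.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vis = rng.random((8, 8))   # visual conspicuity over a spatial grid (toy data)
    aud = rng.random((8, 8))   # auditory localization evidence on the same grid (toy data)
    fused = fuse_saliency(vis, aud, visual_reliability=1.0, auditory_reliability=0.5)
    print("attend to grid cell:", attended_location(fused))
```

This sketch only illustrates the general bottom-up fusion idea the review surveys; the paper itself compares a range of such mechanisms rather than prescribing this particular one.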

Authors (9)
  1. Di Fu (20 papers)
  2. Cornelius Weber (51 papers)
  3. Guochun Yang (18 papers)
  4. Matthias Kerzel (33 papers)
  5. Weizhi Nan (1 paper)
  6. Pablo Barros (36 papers)
  7. Haiyan Wu (18 papers)
  8. Xun Liu (39 papers)
  9. Stefan Wermter (157 papers)
