Meaningful Modeling of Neural Color Representations in Visual Perception Simulations

Determine how to meaningfully model neural representations of color within computational simulations of visual perception. The goal is a principled representation that accurately captures cortical color without presupposing a fixed dimensionality, so that it can accommodate observers with differing cone mosaics and color dimensionalities.

Background

The paper argues that most computational neuroscience approaches sidestep the problem of how color is represented in the cortex by hard-coding a fixed dimensionality (often three), using tristimulus values such as RGB, LMS, or cone-opponent spaces. Such assumptions are incompatible with modeling observers who may have tetrachromatic vision or otherwise differ from standard trichromats.
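The incompatibility can be made concrete with a rank argument: any fixed three-channel readout applied to a tetrachromat's four cone signals has a nontrivial null space, so some physically distinct cone excitations become indistinguishable. The following sketch illustrates this with an arbitrary (illustrative, not measured) readout matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical fixed 3-channel readout applied to a tetrachromat's
# four cone signals; the matrix values are illustrative only.
readout = rng.normal(size=(3, 4))

# A 3x4 matrix has rank at most 3, so its null space is nontrivial.
# Any cone-signal direction in that null space is invisible to the readout.
_, _, vt = np.linalg.svd(readout)
null_dir = vt[-1]                      # 4-d direction with readout @ null_dir ~ 0

cones_a = rng.normal(size=4)
cones_b = cones_a + 0.5 * null_dir     # physically different cone excitations
# Both stimuli collapse to the same 3-d representation:
print(np.allclose(readout @ cones_a, readout @ cones_b))
```

Two metamerically distinct stimuli for the tetrachromatic observer map to identical three-dimensional codes, which is exactly the information loss the hard-coded dimensionality imposes.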

To address this, the authors propose representing cortical color as a high-dimensional vector and letting the intrinsic color dimensionality emerge via a self-supervised learning process driven solely by optic nerve signals. They develop quantitative and qualitative tools to measure and visualize the emergent color space, but explicitly note that the foundational question of how to model neural color representations in simulation remains open.
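One quantitative tool for this kind of analysis (a sketch of the general idea, not necessarily the paper's specific metric) is the participation ratio of the covariance eigenvalue spectrum, which estimates how many dimensions a high-dimensional representation effectively occupies. The embedding and mixing below are synthetic placeholders:

```python
import numpy as np

def participation_ratio(X):
    """Estimate the effective dimensionality of representation vectors X
    (n_samples x n_units) as (sum of eigenvalues)^2 / (sum of squared
    eigenvalues) of the covariance matrix."""
    Xc = X - X.mean(axis=0)
    evals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))
    evals = np.clip(evals, 0.0, None)  # guard against tiny negative values
    return evals.sum() ** 2 / (evals ** 2).sum()

rng = np.random.default_rng(0)
# Synthetic "cortical" embedding: 64-unit vectors driven by three
# latent cone-like channels (trichromat-like) plus small noise.
latent = rng.normal(size=(5000, 3))
basis, _ = np.linalg.qr(rng.normal(size=(64, 3)))  # orthonormal 64x3 readout
X = latent @ basis.T + 0.01 * rng.normal(size=(5000, 64))
print(round(participation_ratio(X), 2))  # close to 3 for three latent channels
```

Because the dimensionality is read off the data rather than hard-coded, the same estimator would report a value near four for a tetrachromat-like embedding, which is the flexibility the high-dimensional formulation is meant to provide.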

References

It is an open question how to meaningfully model neural representations of color in simulations of visual perception.

A Computational Framework for Modeling Emergence of Color Vision in the Human Brain (2408.16916 - Kotani et al., 2024), in Related Work, Subsection "Color Representation in Computational Neuroscience"