Optimal Multivariate Tuning with Neuron-Level and Population-Level Energy Constraints (1911.12656v1)

Published 28 Nov 2019 in q-bio.NC

Abstract: Optimality principles have been useful in explaining many aspects of biological systems. In the context of neural encoding in sensory areas, optimality is naturally formulated in a Bayesian setting, as neural tuning which minimizes mean decoding error. Many works optimize Fisher information, which approximates the Minimum Mean Square Error (MMSE) of the optimal decoder for long encoding times, but may be misleading for short encoding times. We study MMSE-optimal neural encoding of a multivariate stimulus by uniform populations of spiking neurons, under firing rate constraints for each neuron as well as for the entire population. We show that the population-level constraint is essential for the formulation of a well-posed problem with finite optimal tuning widths, and that optimal tuning aligns with the principal components of the prior distribution. Numerical evaluation of the two-dimensional case shows that encoding only the dimension with higher variance is optimal for short encoding times. We also compare direct MMSE optimization to optimization of several proxies to MMSE, namely Fisher information, Maximum Likelihood estimation error, and the Bayesian Cramér-Rao bound. We find that optimization of these measures yields qualitatively misleading results regarding MMSE-optimal tuning and its dependence on encoding time and energy constraints.
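
The contrast the abstract draws between direct MMSE optimization and Fisher-information-based proxies can be made concrete numerically. The following is a minimal sketch, assuming a one-dimensional Gaussian stimulus, Gaussian tuning curves, and Poisson spiking (illustrative choices, not the paper's exact multivariate setup): it estimates the posterior-mean decoder's MSE by Monte Carlo and compares it with the Fisher-information approximation 1/E[T·J(x)], which is only expected to be accurate for long encoding times.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's exact formulation): a 1-D Gaussian
# stimulus encoded by a uniform population of Poisson neurons with Gaussian tuning
# curves. We compare the numerically evaluated MMSE of the Bayesian posterior-mean
# decoder with the Fisher-information proxy 1/E[T*J(x)].
# Names such as `tuning width` and `encoding_time` are hypothetical parameter choices.

rng = np.random.default_rng(0)

prior_std = 1.0                      # stimulus prior: x ~ N(0, prior_std^2)
centers = np.linspace(-4, 4, 17)     # preferred stimuli of the uniform population
max_rate = 20.0                      # peak firing rate (spikes/s) per neuron
encoding_time = 0.05                 # short encoding window (s)

def rates(x, width):
    """Gaussian tuning curves evaluated at stimulus x (scalar or array)."""
    x = np.atleast_1d(x)[:, None]
    return max_rate * np.exp(-0.5 * ((x - centers[None, :]) / width) ** 2)

def mmse(width, n_grid=401, n_trials=2000):
    """Monte-Carlo estimate of the MSE of the posterior-mean (MMSE) decoder."""
    grid = np.linspace(-5 * prior_std, 5 * prior_std, n_grid)
    log_prior = -0.5 * (grid / prior_std) ** 2
    grid_counts = rates(grid, width) * encoding_time   # expected spike counts, (n_grid, n_neurons)
    errs = []
    for _ in range(n_trials):
        x_true = rng.normal(0.0, prior_std)
        counts = rng.poisson(rates(x_true, width)[0] * encoding_time)
        # Poisson log-likelihood over the stimulus grid (count-only terms dropped)
        log_like = counts @ np.log(grid_counts + 1e-12).T - grid_counts.sum(axis=1)
        log_post = log_prior + log_like
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        x_hat = post @ grid                            # posterior mean
        errs.append((x_hat - x_true) ** 2)
    return np.mean(errs)

def fisher_proxy(width, n_grid=401):
    """Fisher-information-based approximation to the MMSE: 1 / E_x[T * J(x)]."""
    grid = np.linspace(-5 * prior_std, 5 * prior_std, n_grid)
    w = np.exp(-0.5 * (grid / prior_std) ** 2)
    w /= w.sum()                                       # prior weights on the grid
    f = rates(grid, width)
    df = f * (-(grid[:, None] - centers[None, :]) / width ** 2)   # tuning-curve derivative
    J = encoding_time * (df ** 2 / (f + 1e-12)).sum(axis=1)       # Poisson Fisher information
    return 1.0 / (w @ J)

for width in (0.3, 1.0, 3.0):
    print(f"width={width:.1f}  MMSE≈{mmse(width):.4f}  Fisher proxy≈{fisher_proxy(width):.4f}")
```

For the short encoding window used here, the two measures can rank tuning widths differently, which is the kind of discrepancy between MMSE and its proxies that the abstract describes.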
