Understanding the computation of time using neural network models (1910.05546v4)
Abstract: To maximize future rewards in an ever-changing world, animals must discover the temporal structure of stimuli and then anticipate or act correctly at the right time. How do animals perceive, maintain, and use time intervals ranging from hundreds of milliseconds to multiple seconds in working memory? How is temporal information processed concurrently with spatial information and decision making? Why are there strong neuronal temporal signals in tasks in which temporal information is not required? A systematic understanding of the underlying neural mechanisms is still lacking. Here, we addressed these problems by supervised training of recurrent neural network models. We found that neural networks perceive elapsed time through state evolution along a stereotypical trajectory, maintain time intervals in working memory through the monotonic increase or decrease of the firing rates of interval-tuned neurons, and compare or produce time intervals by scaling the speed of state evolution. Temporal and non-temporal information are coded in mutually orthogonal subspaces, and the state trajectories over time at different values of non-temporal information are quasi-parallel and isomorphic. This coding geometry facilitates the generalizability of decoding temporal and non-temporal information across each other. The network structure exhibits multiple feedforward sequences that mutually excite or inhibit each other depending on whether their preferences for non-temporal information are similar. We identified four factors that facilitate strong temporal signals in non-timing tasks, including the anticipation of coming events. Our work discloses fundamental computational principles of temporal processing; it is supported by, and makes predictions about, a number of experimental phenomena.
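The idea that a recurrent network can compare or produce time intervals by scaling the speed of its state evolution can be illustrated with a toy leaky RNN. This is a minimal sketch under assumed parameters (network size, gain, seed are illustrative choices, not the paper's trained models): doubling the Euler step size relative to the time constant traverses the same state trajectory twice as fast, so the network reaches the same state in half as many steps.

```python
import numpy as np

# Toy leaky RNN: tau * dr/dt = -r + tanh(W r + b), Euler-discretized with
# step size alpha = dt / tau. All parameters are illustrative assumptions.
rng = np.random.default_rng(0)
N = 64
W = rng.normal(0.0, 0.9 / np.sqrt(N), size=(N, N))  # subcritical gain -> smooth transient
b = rng.normal(0.0, 0.2, size=N)

def run(n_steps, alpha):
    """Simulate n_steps Euler steps; alpha sets the state-evolution speed."""
    r = np.zeros(N)
    traj = np.empty((n_steps, N))
    for t in range(n_steps):
        r = (1.0 - alpha) * r + alpha * np.tanh(W @ r + b)
        traj[t] = r
    return traj

# Doubling alpha halves the effective time constant: the network follows
# the SAME trajectory twice as fast, which is one way speed scaling can
# stretch or compress a produced interval.
slow = run(300, 0.01)  # 300 steps at base speed
fast = run(150, 0.02)  # 150 steps at double speed -> same elapsed "network time"

a, c = slow[-1], fast[-1]
cos = float(a @ c / (np.linalg.norm(a) * np.linalg.norm(c)))
print(round(cos, 3))  # close to 1: both runs end in nearly the same state
```

In this picture, an interval readout triggered when the state crosses a fixed point along the trajectory fires after a duration inversely proportional to the speed, so a single learned trajectory can support a continuum of produced intervals.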