"What" x "When" working memory representations using Laplace Neural Manifolds (2409.20484v1)

Published 30 Sep 2024 in q-bio.NC and cs.NE

Abstract: Working memory – the ability to remember recent events as they recede continuously into the past – requires the ability to represent any stimulus at any time delay. This property requires neurons coding working memory to show mixed selectivity, with conjunctive receptive fields (RFs) for stimuli and time, forming a representation of 'what' $\times$ 'when'. We study the properties of such a working memory in simple experiments where a single stimulus must be remembered for a short time. The requirement of conjunctive receptive fields allows the covariance matrix of the network to decouple neatly, allowing an understanding of the low-dimensional dynamics of the population. Different choices of temporal basis functions lead to qualitatively different dynamics. We study a specific choice – a Laplace space with exponential basis functions for time coupled to an "Inverse Laplace" space with circumscribed basis functions in time. We refer to this choice with basis functions that evenly tile log time as a Laplace Neural Manifold. Despite the fact that they are related to one another by a linear projection, the Laplace population shows a stable stimulus-specific subspace whereas the Inverse Laplace population shows rotational dynamics. The growth of the rank of the covariance matrix with time depends on the density of the temporal basis set; logarithmic tiling shows good agreement with data. We sketch a continuous attractor CANN that constructs a Laplace Neural Manifold. The attractor in the Laplace space appears as an edge; the attractor for the inverse space appears as a bump. This work provides a map for going from more abstract cognitive models of WM to circuit-level implementation using continuous attractor neural networks, and places constraints on the types of neural dynamics that support working memory.


Summary

  • The paper introduces novel continuous attractor network models employing Laplace and inverse Laplace representations to encode 'what' and 'when' in neural circuits.
  • It presents rigorous analytical derivations demonstrating how edge-shaped and bump-shaped attractors capture temporal dynamics and stimulus identity in 1-D and 2-D models.
  • The study outlines practical implications for advanced AI systems through refined neural circuitry configurations that enhance temporal processing and memory encoding.

Overview of Neural Circuit Models for Temporal Dynamics

The paper "Notes" by Chenyu Wang provides an in-depth exploration of neural circuit models designed to represent time through continuous attractor neural networks (CANNs). The focus is on implementing a Laplace Neural Manifold at the circuit level, utilizing attractors that exhibit distinct spatial characteristics in Laplace and inverse Laplace domains. This paper disseminates the theoretical constructs and practical implementations for encoding temporal information in both 1-D and 2-D neural manifold spaces, with rigorous analytical derivations and plausible neural circuitry configurations.

Neural Circuit Models

The crux of the paper is the conceptualization of time representation using CANNs. On a neural manifold, Laplace representations manifest as edge-shaped attractors, while inverse Laplace representations take the form of bump-shaped attractors. As time elapses, both attractor states move across the population, with a displacement that grows logarithmically in time.
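To make the edge-versus-bump distinction concrete, the following sketch evaluates both populations at a single moment after a delta-function stimulus, using exponential Laplace neurons and the Post approximation to the inverse transform. The rate-constant grid, the order k, and the population size are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from math import factorial

# Snapshot of the two populations at one moment after a delta-function stimulus at t = 0.
# The rate-constant grid, order k, and population size are illustrative assumptions.
k = 4                               # order of the Post inverse-Laplace approximation
s = np.geomspace(0.1, 100.0, 48)    # rate constants tiling log time
t = 1.0                             # elapsed time since the stimulus

F = np.exp(-s * t)                  # Laplace neurons: an edge across the (log-s) index
f_tilde = (s ** (k + 1)) * t ** k * np.exp(-s * t) / factorial(k)  # Post inverse: a bump
f_tilde /= f_tilde.max()

# The Laplace profile steps from ~1 down to ~0 (an edge), while the inverse-Laplace
# profile peaks at the neuron with s ~ k / t (a bump marking "about t seconds ago").
for i in range(0, len(s), 6):
    print(f"s = {s[i]:8.3f}   F = {F[i]:.3f}   f~ = {f_tilde[i]:.3f}")
```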

1-D Line CANNs for Temporal Dynamics

The paper explores creating one-dimensional line CANNs for temporal dynamics. It builds upon previous research by establishing a continuous attractor model for Laplace/inverse representations of a delta function, positing that the time constants form a geometric series. The model emphasizes the translational invariance property of the temporal receptive fields (TRFs), which are analogous to the stationary states of bump-shaped CANNs. The TRFs evolve over time, appearing as if the edge attractor moves with a decaying velocity.
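The decaying-velocity claim can be checked numerically with nothing more than a geometric series of time constants; the series ratio and population size below are arbitrary illustrative choices.

```python
import numpy as np

# Edge position over time for a geometric series of time constants.
N = 60
s = 10.0 * 0.85 ** np.arange(N)     # decreasing geometric series of rate constants

def edge_index(t):
    F = np.exp(-s * t)                        # Laplace population after a delta stimulus at t = 0
    return int(np.argmin(np.abs(F - 0.5)))    # half-height point of the edge

for t in (0.25, 0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"t = {t:5.2f}   edge at neuron {edge_index(t)}")
# Each doubling of elapsed time shifts the edge by roughly the same number of neurons:
# displacement grows like log t, so the edge's velocity in index space decays like 1/t.
```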

Crucially, a recurrent neural network structure is outlined, with the entries of the recurrent matrix $W_{ij}$ fine-tuned so that the desired edge-shaped profiles are stationary states. Additionally, a bump attractor network is introduced to interact dynamically with the edge attractor, stabilizing it and controlling its temporal evolution via inputs that set the edge attractor's speed.
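As a toy illustration of what it means to fine-tune $W_{ij}$ so that edge-shaped profiles are stationary, the linear sketch below constructs $W$ as a projector onto a family of translated edge templates: any state in that subspace persists, while components outside it decay. This is a generic line-attractor construction for intuition only, not the paper's nonlinear circuit or its interacting bump network.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = np.arange(N)

# Desired stationary states: smooth edge profiles centered at many positions.
centers = np.arange(8, 56)
templates = np.stack([1.0 / (1.0 + np.exp((x - c) / 2.0)) for c in centers])

# "Fine-tuned" recurrent matrix: a projector onto the span of the edge templates,
# so W u = u for any state in that span (a linear line-attractor sketch).
W = np.linalg.pinv(templates) @ templates     # N x N

def off_manifold(v):
    """Norm of the component of v that lies outside the edge-template subspace."""
    return np.linalg.norm(v - W @ v)

# Relaxation dynamics du/dt = -u + W u: the template subspace is marginally stable,
# while everything orthogonal to it decays away.
u = templates[20] + 0.1 * rng.standard_normal(N)   # noisy edge as the initial state
print("off-manifold component before:", round(off_manifold(u), 4))
dt = 0.05
for _ in range(400):
    u = u + dt * (-u + W @ u)
print("off-manifold component after: ", round(off_manifold(u), 4))
```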

2-D CANNs for What x When Representations

Expanding to two dimensions, the paper presents a configuration in which multiple ring attractors are stacked to form a cylindrical 2-D manifold. This structure represents the 'what' and 'when' dimensions concurrently: neurons within each ring encode stimulus angle, while position along the cylinder's axis sets the rate constant of the Laplace and inverse Laplace neurons.
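The following static sketch illustrates this cylindrical layout, assuming a von Mises tuning bump over preferred angle and a simple Laplace decay along the rate-constant axis; all parameters are assumptions for illustration, and no recurrent dynamics are simulated.

```python
import numpy as np

# Cylindrical "what" x "when" population: rings over stimulus angle, stacked along
# a log-spaced axis of rate constants. Parameters are illustrative assumptions.
n_theta, n_s = 32, 24
theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)   # preferred angles per ring
s = np.geomspace(0.1, 20.0, n_s)                               # rate constant per layer

def what_by_when(theta_stim, t, kappa=4.0):
    """Conjunctive rates at elapsed time t after a stimulus at angle theta_stim:
    a von Mises bump over 'what' times a Laplace decay over 'when'."""
    what = np.exp(kappa * (np.cos(theta - theta_stim) - 1.0))
    when = np.exp(-s * t)
    return np.outer(what, when)            # shape (n_theta, n_s)

R = what_by_when(theta_stim=np.pi / 3, t=1.0)
# Population-vector decoding over the angle dimension recovers 'what' regardless of t.
pop = R.sum(axis=1)
decoded = np.angle(np.sum(pop * np.exp(1j * theta)))
print(f"decoded stimulus angle = {decoded:.2f} rad (true {np.pi / 3:.2f} rad)")
```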

Encoding of What

For encoding the 'what' component, each subpopulation represents a one-dimensional continuous stimulus. The network dynamics are driven by global activity-dependent inhibition, with recurrent connections shaping bump-like stationary states. The theoretical analysis demonstrates that when the neural interactions are Gaussian and the inhibition strength falls below a critical threshold, the model is solvable and admits a continuous family of bump-shaped stationary states.
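A minimal simulation of this kind of solvable model is sketched below: a ring of rate neurons with Gaussian recurrent excitation and global divisive inhibition, the standard solvable CANN formulation. The parameter values are assumptions chosen so that the inhibition strength sits below the bump-existence threshold; they are not taken from the paper.

```python
import numpy as np

# Divisively inhibited CANN on a ring: Gaussian recurrent kernel, global inhibition.
N, a, tau, dt, k = 128, 0.5, 1.0, 0.01, 0.5
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
rho = N / (2 * np.pi)                                  # neuron density on the ring
dx = 2 * np.pi / N

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 2 * np.pi - d)                       # periodic (ring) distance
J = np.exp(-d ** 2 / (2 * a ** 2)) / (np.sqrt(2 * np.pi) * a)   # Gaussian interactions

def rates(U):
    Up = np.maximum(U, 0.0) ** 2
    return Up / (1.0 + k * rho * np.sum(Up) * dx)      # divisive global inhibition

U = np.exp(-x ** 2 / (2 * a ** 2))                     # transient cue: a bump at angle 0
for _ in range(2000):
    U = U + (dt / tau) * (-U + rho * dx * (J @ rates(U)))

# The bump persists after the cue is gone: one member of a continuous family of states.
print(f"bump peak at angle {x[np.argmax(U)]:.2f} rad, amplitude {U.max():.2f}")
```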

Encoding of When

Encoding the 'when' aspect involves manipulating the global inhibition strength across the different layers of the neural population, enabling retention of temporal information while preserving stimulus identity. An edge attractor modulates the dynamics of the inhibition strength, effectively encoding elapsed time in the bump amplitude across layers.
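Under the same solvable-model assumptions as the sketch above, the stationary bump amplitude has a closed form as a function of the inhibition strength k. The toy calculation below (same illustrative parameters, not the paper's) shows the amplitude falling as k rises toward the existence threshold, which is the lever an edge attractor could use to write elapsed time into the amplitude profile across layers.

```python
import numpy as np

# Stationary bump amplitude vs. global inhibition strength k for the divisive CANN
# sketched above (Gaussian ansatz on the line; same illustrative parameters).
rho, a = 128 / (2 * np.pi), 0.5
k_c = rho / (8 * a * np.sqrt(2 * np.pi))               # bump-existence threshold

def bump_amplitude(k):
    disc = rho ** 2 / 2 - 4 * k * rho * a * np.sqrt(2 * np.pi)
    return (rho / np.sqrt(2) + np.sqrt(disc)) / (2 * k * rho * a * np.sqrt(2 * np.pi))

print(f"threshold k_c = {k_c:.2f}")
for k in (0.2, 0.8, 1.4, 2.0):
    print(f"k = {k:.1f}  ->  bump amplitude = {bump_amplitude(k):.3f}")
# Raising k in a given layer (e.g. under the control of an edge attractor) shrinks that
# layer's bump without moving it, so amplitude across layers can carry 'when' while
# bump position carries 'what'.
```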

Analytical Solutions and Supplementary Information

In supplemental sections, the paper explores analytical solutions for the defined models. The edge and bump attractors' properties are mathematically formulated, supporting the theoretical constructs with rigorous derivations. For instance, the paper derives the edge location over time as $\bar{n}(t) = \bar{n}_0 + \frac{1}{\Delta z}\log\frac{t}{t_0}$, and formulates the corresponding speed of the edge.
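Differentiating the quoted edge-location formula gives the decaying speed referred to above (a one-line consequence of the formula as stated, not a reproduction of the paper's full derivation): since $\bar{n}(t) = \bar{n}_0 + \frac{1}{\Delta z}\log\frac{t}{t_0}$, the edge speed is $v(t) = \frac{d\bar{n}}{dt} = \frac{1}{\Delta z\, t}$, i.e., the edge advances at a constant rate per unit of log time and slows as $1/t$ in real time.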

Implications and Future Developments

The research offers substantial implications for both theoretical neuroscience and practical AI systems. By elucidating models that faithfully encode temporal dynamics in neural circuits, the paper lays foundational principles for advanced temporal processing systems in artificial neural networks. Future developments could leverage these models for more sophisticated and temporally aware AI systems, enhancing their ability to handle tasks that require nuanced temporal understanding and memory.

In summary, Wang's work offers a comprehensive examination of neural circuit models for temporal representation using continuous attractor networks. The detailed theoretical constructs and analytical formulations provide a robust platform for future advancements in time-encoding neural systems.
