Matrix Memory Update (mLSTM)

Updated 31 October 2025
  • Matrix Memory Update (mLSTM) is a recurrent neural mechanism that generalizes state from vectors to matrices using structured, multiplicative updates.
  • It employs outer-product updates of key and value vectors along with gating to create a robust associative memory with enhanced capacity.
  • Empirical studies show improved long-context modeling and parallelization, though challenges remain in compute efficiency and training stability.

Matrix Memory Update (mLSTM) refers to a set of recurrent neural network memory mechanisms in which the internal memory is generalized from a scalar or vector to a matrix (or, more generally, a tensor), and the memory update is performed using structured, often multiplicative, interactions between key, value, and gate vectors. This paradigm encompasses both early multiplicative LSTM (mLSTM) architectures that modify hidden-to-hidden transitions in conventional LSTMs, and more recent matrix LSTM variants (notably the fully parallelizable xLSTM-mLSTM), which extend the cell state to a matrix updated by outer-product (covariance) rules. Matrix memory update schemes are motivated by the need for increased expressiveness, input-dependent transition dynamics, enhanced memory capacity, and scalability in sequence modeling.

1. Architectural Principles of Matrix Memory Update

Matrix memory update architectures build upon the limitations of scalar or vectorial recurrence, adopting matrix-valued memory states to enrich memory dynamics:

  • In traditional LSTM, the memory (cell) state $c_t$ is a vector; its update $c_t = f_t \odot c_{t-1} + i_t \odot g_t$ mixes the prior state and the current candidate via elementwise gates.
  • Matrix mLSTM (with cell state $\mathbf{C}_t \in \mathbb{R}^{d \times d}$, e.g. in xLSTM) generalizes this by storing a $d \times d$ state updated at each time step using an outer product between key and value vectors drawn as projections of the input or hidden state:

$\mathbf{C}_t = f_t\, \mathbf{C}_{t-1} + i_t\, \mathbf{v}_t \mathbf{k}_t^\top$

where $f_t$ and $i_t$ are scalar or vector forget and input gates (possibly exponential or sigmoid), and $\mathbf{v}_t, \mathbf{k}_t$ are the value and key vectors, respectively.

These architectures often maintain a separate normalizer vector

$\mathbf{n}_t = f_t\, \mathbf{n}_{t-1} + i_t\, \mathbf{k}_t$

to stabilize readout and provide normalization for the memory query.
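
For concreteness, the core memory recurrence amounts to a few lines of NumPy. The sketch below assumes scalar gates and a single head, and is illustrative rather than an implementation of any specific paper's kernel.

```python
import numpy as np

def matrix_memory_step(C_prev, n_prev, k_t, v_t, f_t, i_t):
    """One matrix-memory update: C_t = f_t C_{t-1} + i_t v_t k_t^T and
    n_t = f_t n_{t-1} + i_t k_t (scalar gates f_t, i_t for simplicity)."""
    C_t = f_t * C_prev + i_t * np.outer(v_t, k_t)  # d x d associative memory
    n_t = f_t * n_prev + i_t * k_t                 # normalizer vector
    return C_t, n_t

d = 8
C, n = np.zeros((d, d)), np.zeros(d)
C, n = matrix_memory_step(C, n, np.random.randn(d), np.random.randn(d), f_t=0.9, i_t=1.0)
```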

In the original multiplicative LSTM (Krause et al., 2016), an intermediate state is instead constructed via elementwise multiplication of input and hidden projections, which makes the effective hidden-to-hidden transition input-dependent:

$m_t = (W_{mx} x_t) \odot (W_{mh} h_{t-1})$

with $m_t$ used in the gate updates and pre-activations for richer, input-dependent transitions.

2. Mathematical Formulation of the Matrix Memory Update

The general update in matrix-valued memory LSTM variants (e.g., (Beck et al., 7 May 2024, Beck et al., 18 Mar 2025, Lawan et al., 1 Jul 2025)) is:

$\begin{align*} \mathbf{C}_t &= f_t\, \mathbf{C}_{t-1} + i_t\, \mathbf{v}_t \mathbf{k}_t^\top \\ \mathbf{n}_t &= f_t\, \mathbf{n}_{t-1} + i_t\, \mathbf{k}_t \\ \tilde{\mathbf{h}}_t &= \frac{\mathbf{C}_t \mathbf{q}_t}{\max\big(|\mathbf{n}_t^\top \mathbf{q}_t|,\, 1\big)} \\ \mathbf{h}_t &= \mathbf{o}_t \odot \tilde{\mathbf{h}}_t \end{align*}$

  • $f_t, i_t, \mathbf{o}_t$ are the forget, input, and output gates, typically computed via affine projections of the input.
  • $\mathbf{k}_t, \mathbf{v}_t, \mathbf{q}_t$ are key, value, and query vectors (via learned linear projections).
  • The memory update is a sum of the decayed previous cell and a weighted outer product, corresponding to an associative (covariance) memory.
  • The normalizer vector in the denominator of the readout normalizes for the number of stored key-value associations, stabilizing the output signal.

In architectures requiring high numerical stability (e.g., for long contexts and exponential gating), additional "max state" and normalization tracking is sometimes included, as in (Beck et al., 18 Mar 2025).
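
A minimal single-head reference implementation of this recurrence is sketched below in NumPy. It uses sigmoid gating and omits the exponential-gate/max-state stabilization mentioned above; the projection names in the dictionary `W` are illustrative, not the papers' parameter names.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlstm_step(x_t, C_prev, n_prev, W):
    """One full mLSTM recurrence step (sigmoid-gated sketch, single head)."""
    q_t = W["Wq"] @ x_t                       # query
    k_t = W["Wk"] @ x_t / np.sqrt(len(x_t))   # key (scaled, as is common)
    v_t = W["Wv"] @ x_t                       # value
    f_t = sigmoid(W["wf"] @ x_t)              # scalar forget gate
    i_t = sigmoid(W["wi"] @ x_t)              # scalar input gate
    o_t = sigmoid(W["Wo"] @ x_t)              # vector output gate

    C_t = f_t * C_prev + i_t * np.outer(v_t, k_t)    # matrix memory
    n_t = f_t * n_prev + i_t * k_t                   # normalizer
    h_tilde = C_t @ q_t / max(abs(n_t @ q_t), 1.0)   # normalized readout
    h_t = o_t * h_tilde                              # gated output
    return h_t, C_t, n_t

d = 16
rng = np.random.default_rng(0)
W = {"Wq": rng.standard_normal((d, d)), "Wk": rng.standard_normal((d, d)),
     "Wv": rng.standard_normal((d, d)), "Wo": rng.standard_normal((d, d)),
     "wf": rng.standard_normal(d), "wi": rng.standard_normal(d)}
h, C, n = mlstm_step(rng.standard_normal(d), np.zeros((d, d)), np.zeros(d), W)
```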

In classical multiplicative LSTM (Krause et al., 2016), the focus is on multiplicative, input-dependent transitions:

$\begin{aligned} m_t &= (W_{mx} x_t) \odot (W_{mh} h_{t-1}) \\ \hat{h}_t &= W_{hx} x_t + W_{hm} m_t \\ i_t &= \sigma(W_{ix} x_t + W_{im} m_t) \\ f_t &= \sigma(W_{fx} x_t + W_{fm} m_t) \\ o_t &= \sigma(W_{ox} x_t + W_{om} m_t) \\ c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(\hat{h}_t) \\ h_t &= \tanh(c_t) \odot o_t \end{aligned}$

Here, the memory update remains vector-valued but with input-dependent, multiplicative state transitions.
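
For comparison, a sketch of one step of this classical multiplicative LSTM (biases omitted, weight names illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multiplicative_lstm_step(x_t, h_prev, c_prev, W):
    """One step of the multiplicative LSTM of Krause et al. (2016), biases omitted."""
    m_t = (W["Wmx"] @ x_t) * (W["Wmh"] @ h_prev)      # multiplicative intermediate state
    h_hat = W["Whx"] @ x_t + W["Whm"] @ m_t           # candidate pre-activation
    i_t = sigmoid(W["Wix"] @ x_t + W["Wim"] @ m_t)    # input gate
    f_t = sigmoid(W["Wfx"] @ x_t + W["Wfm"] @ m_t)    # forget gate
    o_t = sigmoid(W["Wox"] @ x_t + W["Wom"] @ m_t)    # output gate
    c_t = f_t * c_prev + i_t * np.tanh(h_hat)         # vector-valued cell state
    h_t = np.tanh(c_t) * o_t                          # hidden output
    return h_t, c_t
```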

3. Expressivity, Memory Capacity, and Theoretical Properties

Matrix memory updates substantially enhance the representational power and capacity of sequential models:

  • Expressivity: Matrix-valued memories can encode higher-order associations directly, allowing the network to store and superimpose many key-value bindings in a single cell via the outer product mechanism (Beck et al., 7 May 2024).
  • Capacity: Fisher information analyses (Renanse et al., 2021) show an $N \times N$ matrix memory theoretically supports $N^2$ independent memory slots, compared to $N$ in a vector RNN; this quadratic scaling represents a substantial increase in potential memory capacity.
  • Associative Recall: mLSTM-style covariance updates directly implement a form of fast weights or bidirectional associative memory, enabling key-based retrieval of values. This property underpins improved performance on tasks like algorithmic recall, in-context rare token memorization, and tasks requiring persistent storage of unique bindings.

Not all such architectures saturate this theoretical bound (e.g., practical capacity is reduced by recurrent connectivity, nonlinearity, and optimization constraints), but empirical results demonstrate major gains over scalar/vector alternatives.
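
The associative-recall property can be illustrated with a small NumPy experiment: several key-value pairs are superimposed in a single $d \times d$ matrix, and a stored value is then (approximately) retrieved by querying with its key. Gates are fixed to 1 so nothing decays; this is a toy illustration, not a benchmark from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_pairs = 64, 16

# Store several key-value bindings by superposing outer products (f = i = 1).
keys = rng.standard_normal((num_pairs, d))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)   # unit-norm keys
values = rng.standard_normal((num_pairs, d))

C = np.zeros((d, d))
n = np.zeros(d)
for k, v in zip(keys, values):
    C += np.outer(v, k)   # associative (covariance) storage
    n += k                # normalizer accumulates keys

# Query with a stored key; the bound value is approximately recovered,
# up to cross-talk from the other stored pairs.
q = keys[3]
v_hat = C @ q / max(abs(n @ q), 1.0)
cos = v_hat @ values[3] / (np.linalg.norm(v_hat) * np.linalg.norm(values[3]))
print("cosine(retrieved, stored) =", cos)
```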

4. Empirical Performance and Practical Implementations

Matrix memory updates deliver strong practical performance in challenging sequence modeling domains:

| Architecture/Model | Task & Metric | Performance |
|---|---|---|
| mLSTM (Krause et al., 2016) | text8 (BPC) | 1.27 (ties SOTA) |
| xLSTM-mLSTM (Beck et al., 7 May 2024) | 350M model, SlimPajama (PPL) | 13.43 (better than LLaMA, GPT-3) |
| mLSTM with TFLA kernel (Beck et al., 18 Mar 2025) | Kernel runtime, $T$ up to 65k | Fastest memory-efficient kernel, no perplexity loss |
| WeiNet (Zhang et al., 2017) | Long associative recall | 100% (sequences of length 50) |
| Matrix NTM (Renanse et al., 2021) | Matrix copy/recall tasks | Reliable long-horizon retention, surpasses Matrix RNN |

Additional findings include robust recovery from rare/surprising inputs (Krause et al., 2016), superior scaling of memory with model and context size (Beck et al., 7 May 2024), and improved efficiency with fused kernels and sigmoid gating (Beck et al., 18 Mar 2025).

In practical large-scale implementations, efficient parallelization is enabled via the removal of sequential memory mixing: All time steps can be unrolled as a series of outer-product updates, reducible to lower-triangular matrix computations and suitable for highly parallelized GPU execution (Beck et al., 7 May 2024, Beck et al., 18 Mar 2025). Recent kernel advancements (Tiled Flash Linear Attention, TFLA) further accelerate these architectures, yielding throughput competitive with (or surpassing) optimized attention kernels (Beck et al., 18 Mar 2025).
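
The sequence-parallel view can be made concrete with a small NumPy check: the recurrence is unrolled into a single lower-triangular masked matrix product whose rows reproduce the per-step readouts. This is a naive quadratic-time illustration of the idea (the gate products are materialized as a dense $T \times T$ decay matrix), not the tiled/chunked TFLA kernel itself.

```python
import numpy as np

def mlstm_parallel(Q, K, V, f, i):
    """Parallel evaluation of the normalized mLSTM readout for all time steps.
    Q, K, V: (T, d); f, i: (T,) positive scalar gate activations.
    Equivalent to unrolling C_t = f_t C_{t-1} + i_t v_t k_t^T from C_0 = 0."""
    log_f_cum = np.cumsum(np.log(f))                       # sum_{s<=t} log f_s
    # D[t, j] = i_j * prod_{s=j+1..t} f_s for j <= t, else 0
    # (dense decay matrix; fine for a toy check, real kernels work blockwise).
    D = np.tril(np.exp(log_f_cum[:, None] - log_f_cum[None, :])) * i[None, :]
    S = (Q @ K.T) * D                                      # masked, decayed scores
    num = S @ V                                            # C_t q_t for every t
    den = np.maximum(np.abs(S.sum(axis=1)), 1.0)           # max(|n_t^T q_t|, 1)
    return num / den[:, None]

# Check against the sequential recurrence.
rng = np.random.default_rng(1)
T, d = 32, 8
Q, K, V = rng.standard_normal((3, T, d))
f = 1.0 / (1.0 + np.exp(-rng.standard_normal(T)))          # sigmoid forget gates
i = np.exp(0.1 * rng.standard_normal(T))                   # positive input gates

C, n, H_seq = np.zeros((d, d)), np.zeros(d), []
for t in range(T):
    C = f[t] * C + i[t] * np.outer(V[t], K[t])
    n = f[t] * n + i[t] * K[t]
    H_seq.append(C @ Q[t] / max(abs(n @ Q[t]), 1.0))

assert np.allclose(mlstm_parallel(Q, K, V, f, i), np.array(H_seq))
```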

5. Notable Architectures and Related Mechanisms

Several notable matrix memory update architectures and related memory mechanisms exist:

  • Multiplicative LSTM (original; Krause et al., 2016, Maupomé et al., 2019): Focuses on multiplicative factorization to yield input-dependent recurrent transitions, while maintaining a vector or scalar memory.
  • xLSTM-mLSTM (Beck et al., 7 May 2024): Fully parallelizable, outer-product memory matrix with exponential or sigmoid gating and normalization mechanisms.
  • WeiNet (Zhang et al., 2017): Generalizes fast weights with learnable memory update matrices, supporting an array of associative memory slots and cross-memory routing.
  • Array-LSTM (Rocki, 2016): Augments LSTM with multiple parallel vector memories (array slots) per unit, occasionally selected stochastically or with attentional gating, improving generalization.
  • Matrix NTM (Renanse et al., 2021): Embeds matrix controllers and matrix-valued external memory—yielding efficient, high-capacity programmable memory for algorithmic sequence tasks.
  • MEGA/xLSTM bi-directional mLSTM (Lawan et al., 1 Jul 2025): Blends forward and partially-flipped (local) mLSTM blocks in a multihead fusion architecture for aspect-based sentiment, outperforming strong baselines.
| Model | Memory Structure | Update Rule | Key Attribute |
|---|---|---|---|
| mLSTM | Matrix (outer product) | $f_t\, \mathbf{C}_{t-1} + i_t\, \mathbf{v}_t \mathbf{k}_t^\top$ | Associative, parallelized |
| WeiNet | Matrix | $W_A \odot A_{t-1} + W_h \odot (h_t \otimes h_t)$ | Learned per-element update |
| Array-LSTM | Array (multiple cells) | Pool or select among several $c_t$ | Parallel, stochastic/attentive lanes |
| Matrix NTM | Matrix memory blocks | Erase/add bilinear matrix ops | Programmable, external memory |
| MEGA/xLSTM | Matrix + fusion | Multihead outer-product/fusion | Bi-directional, local/global mixing |

6. Limitations, Constraints, and Open Issues

While matrix memory update schemes provide enhanced expressivity and parallelization, several challenges persist:

  • Quadratic Memory and Computation: The size of the memory matrix (typically $d \times d$ per head) leads to increased compute and memory demands, though these are mitigated by highly parallel implementations and careful architectural/hyperparameter selection (Beck et al., 7 May 2024, Beck et al., 18 Mar 2025).
  • Training Stability: Exponential gates may require special initialization and normalization techniques (e.g., tracking a max state, using normalizer vectors) to prevent instability, exploding gradients, or vanishing signal (Beck et al., 7 May 2024, Beck et al., 18 Mar 2025); a sketch of this stabilization follows this list.
  • Saturation of Theoretical Capacity: While $N^2$ scaling is possible in principle, practical capacity is generally lower due to weight structure, nonlinearity saturation, and optimization difficulty (Renanse et al., 2021).
  • Hardware/Kernel Maturity: Compared to highly engineered attention kernels, matrix memory update kernels remain less mature, but recent progress has closed much of this gap (Beck et al., 18 Mar 2025).
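
As an illustration of the kind of stabilization involved, the sketch below tracks a running max of the log-gate contributions and rescales the gates before they enter the recurrence. This follows the general max-state idea described for exponential gating in the xLSTM line of work, but it is a simplified single-head sketch; the exact formulation and its interaction with the readout lower bound differ in the published kernels.

```python
import numpy as np

def mlstm_step_stabilized(q_t, k_t, v_t, f_pre, i_pre, C_prev, n_prev, m_prev):
    """One mLSTM step with exponential input gating, stabilized by a running
    max state in log space (simplified sketch; biases and heads omitted)."""
    log_f = -np.logaddexp(0.0, -f_pre)       # log sigmoid(f_pre)
    log_i = i_pre                            # exponential input gate: i = exp(i_pre)
    m_t = max(log_f + m_prev, log_i)         # running max state
    f_t = np.exp(log_f + m_prev - m_t)       # rescaled forget gate
    i_t = np.exp(log_i - m_t)                # rescaled input gate

    C_t = f_t * C_prev + i_t * np.outer(v_t, k_t)    # stabilized matrix memory
    n_t = f_t * n_prev + i_t * k_t                   # stabilized normalizer
    h_tilde = C_t @ q_t / max(abs(n_t @ q_t), 1.0)   # readout (lower bound kept at 1
                                                     # here; production kernels treat
                                                     # this interaction more carefully)
    return h_tilde, C_t, n_t, m_t
```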

Further investigation into scaling behaviors, approximations (e.g., low-rank or factored matrix memory), and hardware-specific optimizations continues to be an active area of research.

7. Impact and Applications

Matrix memory updates have materially advanced the field of sequential modeling and memory-augmented neural architectures:

  • Language Modeling and LLMs: xLSTM-mLSTM and variants close the performance gap with transformer models on long-context and memory-rich language tasks, and in some settings surpass them, providing new design tools for parallel, recurrent alternatives to attention (Beck et al., 7 May 2024, Lawan et al., 1 Jul 2025).
  • Algorithmic Reasoning: Matrix NTMs and associative memory mechanisms enable tasks requiring durable, addressable storage and programmable memory manipulation (Renanse et al., 2021).
  • Efficient Long-context Modeling: Hardware-aware kernel designs (TFLA) combined with the parallelizable matrix memory update allow practical long-sequence training, facilitating efficient modeling of contexts up to $T = 65\mathrm{k}$ tokens and beyond (Beck et al., 18 Mar 2025).
  • Aspect-based Sentiment and Fine-grained NLP: Hybrid bidirectional mLSTM with fusion operators (MEGA) achieves state-of-the-art on benchmarks requiring both local and global context (Lawan et al., 1 Jul 2025).

A plausible implication is that the matrix memory update paradigm will underpin future advances in recurrent and long-context models, offering an orthogonal scaling path to depth and width in neural sequence architectures.
