Liquid Neural Networks (LNNs)

Updated 31 December 2025
  • Liquid Neural Networks (LNNs) are continuous-time models with dynamically modulated time constants that evolve hidden states via nonlinear ODEs.
  • They leverage biophysical principles to adapt memory horizons in response to input changes, enhancing robustness and interpretability.
  • Variants like CfC and LRC enable solver-free updates and efficient hardware deployment, yielding state-of-the-art results in sequence and vision tasks.

Liquid Neural Networks (LNNs) are continuous-time neural architectures whose hidden states evolve via nonlinear ordinary differential equations with dynamically modulated time constants. LNNs generalize traditional recurrent neural networks (RNNs) and neural ODEs by endowing each unit with an input- and state-dependent memory horizon. The approach originates from biophysical principles, specifically the variable membrane time constants of biological neurons. Recent theoretical and empirical advances have demonstrated that LNNs offer robust adaptation to nonstationarity, improved interpretability, and state-of-the-art parameter and energy efficiency in sequence modeling, control, and embedded vision tasks (Hasani et al., 2018, Zhu et al., 3 Apr 2025, Farsang et al., 30 Jan 2024, Zong et al., 8 Oct 2025, Pawlak et al., 30 Jul 2024, Bidollahkhani et al., 2023, Smith et al., 2017).

1. Continuous-Time Neuron Models and Liquid Time-Constant Mechanisms

The core of LNNs is the liquid time-constant (LTC) neuron, which operates according to a continuous-time ODE:

$$\dot{h}(t) = -\alpha(h(t), x(t)) \odot h(t) + \beta(h(t), x(t)) \odot \phi\bigl(W x(t) + U h(t) + b\bigr)$$

where $h(t) \in \mathbb{R}^n$ is the hidden state, $x(t)$ is the input, $W$, $U$, $b$ are trainable parameters, $\phi$ is a nonlinearity, $\alpha(\cdot)$ and $\beta(\cdot)$ are “liquid” gates realized by neural submodules, and $\odot$ denotes element-wise multiplication (Zhu et al., 3 Apr 2025). The time constant of each neuron is not fixed; it adapts dynamically as a function of the inputs and the current state:

$$\tau_i(t) \frac{d h_i}{dt} = -h_i(t) + k_i(t)\, \sigma\!\left(w_i^T x(t) + u_i h_i(t) + b_i\right)$$

with instantaneous $\tau_i(t), k_i(t) > 0$. Biological plausibility is achieved by using sigmoidal synaptic gating, conductance-based integration, and state-dependent leaky currents, as in biological neural membranes (Hasani et al., 2018). This enables both fast adaptation to rapidly changing data and efficient handling of slow signal drifts.
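
The following is a minimal sketch of a single explicit-Euler step of such a liquid cell, assuming a simple parameterization in which one sigmoidal gate plays the role of $\alpha$ and its complement the role of $\beta$; the parameter names, shapes, and the toy rollout are illustrative and are not the reference implementation of the cited works.

```python
import numpy as np

def ltc_euler_step(h, x, params, dt=0.05):
    """One explicit-Euler step of a liquid time-constant (LTC) cell.

    Minimal sketch of the ODE above: the effective decay rate of each unit
    depends on the current input and state through the gate `alpha`.
    All parameter names and shapes are illustrative.
    """
    W, U, b = params["W"], params["U"], params["b"]         # input/recurrent weights, bias
    Wa, Ua, ba = params["Wa"], params["Ua"], params["ba"]   # gate parameters (assumed)

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # Input- and state-dependent decay gate alpha(h, x); the small floor keeps alpha_0 > 0.
    alpha = sigmoid(Wa @ x + Ua @ h + ba) + 1e-2
    beta = 1.0 - alpha                                      # simple choice for the input gain

    # dh/dt = -alpha * h + beta * tanh(W x + U h + b), integrated with one Euler step.
    dh = -alpha * h + beta * np.tanh(W @ x + U @ h + b)
    return h + dt * dh

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 8, 3                                             # hidden and input sizes (illustrative)
    params = {k: rng.normal(scale=0.3, size=s) for k, s in
              [("W", (n, m)), ("U", (n, n)), ("b", (n,)),
               ("Wa", (n, m)), ("Ua", (n, n)), ("ba", (n,))]}
    h = np.zeros(n)
    for t in range(100):                                    # roll the ODE forward on a toy input
        x = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0])
        h = ltc_euler_step(h, x, params)
```

Because $\alpha$ depends on both $x(t)$ and $h(t)$, the effective time constant of each unit (roughly $1/\alpha_i$) changes at every step, which is precisely the “liquid” behavior described above.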

2. Architectures and Closed-Form Variants

LNN architectures can be broadly classified into several categories:

  • LTC networks: Direct ODE implementation with dynamic time constants and input-dependent gates.
  • CfC (Closed-form Continuous-time) networks: Algebraic approximations of LTC ODEs allow solver-free, tractable integration while preserving adaptivity (Zhu et al., 3 Apr 2025, Zong et al., 8 Oct 2025). The CfC closed-form update is particularly amenable to autodiff and fast training.
  • NCPs (Neural Circuit Policies): Sparse, layered arrangements inspired by biological circuits (sensory, interneuron, command, motor) which permit fine-grained interpretability for decision flows (Zhu et al., 3 Apr 2025).
  • LRC (Liquid Resistance–Capacitance) networks: Generalize LTCs by reintroducing variable capacitance and damping, yielding spectral-radius guarantees and global quadratic Lyapunov stability (Farsang et al., 30 Jan 2024). LRCUs (discrete units from explicit Euler) provide a single-step, solver-free update with learnable time gates.
  • Liquid State Machines (LSMs): Fixed random reservoirs of leaky integrate-and-fire (LIF) neurons (often spiking), used in conjunction with linear or nonlinear readout layers (Smith et al., 2017, Pawlak et al., 30 Jul 2024).

CfC and LRC variants typically realize superior computational efficiency and stability, which is critical for hardware deployments.
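As a concrete, deliberately simplified illustration of the solver-free updates used by CfC- and LRCU-style variants, the sketch below replaces numerical integration with an explicit blend of two candidate states under a time-gated decay term. The gating form and parameter names are assumptions for exposition, not the published CfC equations.

```python
import numpy as np

def cfc_style_step(h, x, dt, params):
    """Solver-free, CfC-style state update (simplified sketch).

    Instead of integrating the liquid ODE numerically, the new state is an
    explicit blend of two candidate states, with a gate that plays the role
    of exp(-dt / tau(x, h)). All parameter names are assumed.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    z = np.concatenate([x, h])
    f = params["Wf"] @ z + params["bf"]            # controls the per-unit decay rate
    g = np.tanh(params["Wg"] @ z + params["bg"])   # candidate "fast" state
    k = np.tanh(params["Wk"] @ z + params["bk"])   # candidate "slow" state

    gate = sigmoid(-np.exp(f) * dt)                # exp keeps the decay rate positive
    return gate * g + (1.0 - gate) * k
```

Because there is no ODE solver in the loop, each step costs only a few matrix multiplications, and irregular sampling intervals can be handled directly by passing the actual elapsed time as dt.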

3. Mathematical Universality, Stability, and Expressive Power

LNNs, and particularly LTC networks, are proven universal approximators of finite-time trajectories of arbitrary $C^1$ dynamical systems (Hasani et al., 2018). The existence proof relies on the ability to approximate any vector field $F(x)$ via a feedforward neural block and embed this into the liquid ODE structure. Bounds on the time constant and membrane potential guarantee numerical stability:

$$\tau_i(t) \in \left[\frac{C_{m_i}}{G_{\text{Leak},i} + \sum_j w_{ij} + \sum_p \hat{w}_{ip}},\ \frac{C_{m_i}}{G_{\text{Leak},i} + \sum_p \hat{w}_{ip}}\right]$$

$$V_i(t) \in \left[\min\!\left(V_{\text{Leak},i}, E_{ij}^{\min}\right),\ \max\!\left(V_{\text{Leak},i}, E_{ij}^{\max}\right)\right]$$

Stability in the ODE regime is formalized using Lyapunov arguments; liquid gates with lower bound $\alpha_0 > 0$ and Lipschitz nonlinearities guarantee exponential convergence to unique bounded trajectories under bounded inputs (Zhu et al., 3 Apr 2025). LRCs further damp oscillatory and stiff modes, ensuring spectral-radius contraction for each discrete step (Farsang et al., 30 Jan 2024).
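
As a simplified illustration of why a strictly positive decay gate yields bounded dynamics (a scalar sketch under assumed bounds $\alpha \ge \alpha_0 > 0$ and $|\beta\,\phi(\cdot)| \le M$, not the full Lyapunov proof of the cited works), Grönwall's inequality applied to $|h(t)|$ gives

$$\frac{d}{dt} |h(t)| \le -\alpha_0 |h(t)| + M \quad \Longrightarrow \quad |h(t)| \le e^{-\alpha_0 t} |h(0)| + \frac{M}{\alpha_0}\bigl(1 - e^{-\alpha_0 t}\bigr),$$

so trajectories remain bounded and the influence of the initial condition decays exponentially, mirroring the exponential-convergence statement above.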

4. Comparison to Traditional RNNs, LSTMs, and Neural ODEs

LNNs differ fundamentally from conventional RNNs and their gated variants:

  • Continuous vs. discrete-time: LNNs model hidden states as ODEs, whereas RNNs/LSTMs/GRUs use discrete step updates (Zong et al., 8 Oct 2025).
  • Adaptive vs. static memory: LTC time constants fluctuate with data, enabling the network to “stretch” or “shrink” memory dynamically. Standard RNNs have fixed gates, limiting adaptability.
  • Solver complexity: LTCs require ODE solvers; CfCs and LRCUs use closed-form or single-step updates, reducing overhead. LNNs match or outperform LSTM/GRU in speed when deployed in a solver-free form (Zhu et al., 3 Apr 2025, Zong et al., 8 Oct 2025, Farsang et al., 30 Jan 2024).
  • Empirical metrics: In traffic forecasting, LTC achieves an MSE of 0.099 vs 0.169 for LSTM (Zong et al., 8 Oct 2025); in gesture recognition, LTC attains 69.55% accuracy vs 64.57% for LSTM; CfC trains up to 160× faster than ODE-RNN (Zong et al., 8 Oct 2025).
  • Energy and memory efficiency: Neuromorphic implementations on Loihi-2 yield 91.3% CIFAR-10 accuracy at 213 µJ/frame (Pawlak et al., 30 Jul 2024), surpassing earlier FPGA/ASIC SNN and CNN benchmarks.

5. Training Methods and Hardware Adaptation

LNNs are trained via gradient-based optimization, using backpropagation through time (BPTT) for unfolded ODE traces:

$$\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^N \ell\bigl(y_i, \hat{y}_i(\theta)\bigr)$$

$$\nabla_\theta \mathcal{L} = \frac{1}{N} \sum_{i=1}^N \frac{\partial \ell}{\partial \hat{y}_i}\, \frac{\partial \hat{y}_i}{\partial \theta}, \qquad \theta \leftarrow \theta - \eta\, \nabla_\theta \mathcal{L}$$

CfC and LRCU variants directly leverage autodiff on closed-form layers. Memory efficiency is realized via sparse circuits (NCPs, LRCUs), reversible integration, and quantization down to 8- or 4-bit weights for embedded hardware (Pawlak et al., 30 Jul 2024, Bidollahkhani et al., 2023). In neuromorphic settings, event-driven updates, local buffer memory, and tunable leak timers yield real-time low-latency inference (15.2 ms/frame) for vision tasks (Pawlak et al., 30 Jul 2024). The software stack for embedded deployment uses Keras subclasses such as LTCCell, CTRNNCell, NODECell, and CTGRUCell, with hyperparameter flexibility tailored to hardware constraints (Bidollahkhani et al., 2023).
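
A minimal BPTT sketch in this spirit is shown below, assuming a toy next-step prediction task and an explicit-Euler unrolling of a liquid cell; PyTorch autograd stands in for the training stacks of the cited works, and all names and hyperparameters are illustrative.

```python
import torch

class LiquidCell(torch.nn.Module):
    """Explicit-Euler liquid time-constant cell (illustrative, not the cited LTCCell)."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.inp = torch.nn.Linear(n_in + n_hidden, n_hidden)   # drives the nonlinearity
        self.gate = torch.nn.Linear(n_in + n_hidden, n_hidden)  # produces the decay gate alpha

    def forward(self, x, h, dt=0.05):
        z = torch.cat([x, h], dim=-1)
        alpha = torch.sigmoid(self.gate(z)) + 1e-2               # keep alpha bounded away from 0
        dh = -alpha * h + (1.0 - alpha) * torch.tanh(self.inp(z))
        return h + dt * dh

# Toy task (assumed): predict the next value of a noisy sine wave.
torch.manual_seed(0)
cell, head = LiquidCell(1, 16), torch.nn.Linear(16, 1)
opt = torch.optim.Adam(list(cell.parameters()) + list(head.parameters()), lr=1e-2)

t = torch.linspace(0, 8 * torch.pi, 200)
series = torch.sin(t) + 0.05 * torch.randn_like(t)

for epoch in range(100):
    h = torch.zeros(16)
    loss = 0.0
    for k in range(len(series) - 1):          # unroll the ODE: BPTT through every Euler step
        h = cell(series[k:k + 1], h)
        loss = loss + (head(h)[0] - series[k + 1]) ** 2
    loss = loss / (len(series) - 1)
    opt.zero_grad()
    loss.backward()                            # gradients flow through the whole trajectory
    opt.step()
```

The loss and update rule are exactly the generic forms given above; what distinguishes the liquid setting is that the gradients propagate through the state- and input-dependent gates at every unrolled step.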

6. Interpretability, Auditability, and Safety

LNNs attain improved interpretability compared to black-box deep nets via:

  • Disentangled ODE gates: Explicit decay (α\alpha) and input-gain (β\beta) roles enable formal audit trails (Zhu et al., 3 Apr 2025).
  • Layered sparse architectures (NCP): Streamlined information flow and isolation of decision subcircuits facilitate explainability in critical settings.
  • Safety guarantees: Lyapunov-style stability margins provide auditability under bounded external disturbances. Gate activations and hidden trajectories can be inspected to understand routing or decision processes in wireless and autonomous systems (Zhu et al., 3 Apr 2025).
  • Biophysical traceability: Many LNN instantiations retain direct correspondence to membrane, synapse, and circuit parameters (Hasani et al., 2018, Farsang et al., 30 Jan 2024).
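
As a small illustration of gate-level auditing, the sketch below summarizes decay-gate activations logged over a rollout (for example, the $\alpha$ values computed in the Section 1 sketch); the threshold and the notion of “slow-memory units” are illustrative conventions, not a prescribed audit protocol.

```python
import numpy as np

def audit_decay_gates(alphas, slow_threshold=0.2):
    """Summarize decay-gate activations logged over a rollout for post-hoc audit.

    `alphas` has shape (T, n): per-step alpha gate values of n units. Units with
    a low average decay rate retain information longer, so flagging them gives
    an inspectable, if crude, map of where memory lives in the network.
    """
    mean_rate = alphas.mean(axis=0)                        # average decay rate per unit
    effective_tau = 1.0 / np.clip(mean_rate, 1e-6, None)   # approximate time constant 1/alpha
    slow_units = np.flatnonzero(mean_rate < slow_threshold)
    return {"effective_tau": effective_tau, "slow_memory_units": slow_units}
```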

7. Applications, Benchmark Results, and Deployment Contexts

LNNs have demonstrated state-of-the-art performance across several domains:

  • Telecom and wireless systems: LTCs yield an MSE of 0.75×10⁻³ at $L = 10$ for CSI prediction, surpassing LSTM and autoregressive (AR) models (Zhu et al., 3 Apr 2025). In MIMO beamforming, GLNNs (gradient-based LNNs) maintain 10–15% higher spectral efficiency under high mobility (7.2 vs 6.3 bps/Hz at 30 m/s) (Zhu et al., 3 Apr 2025).
  • Embedded vision: Loihi-2 neuromorphic implementation achieves 91.3% CIFAR-10 accuracy, besting earlier SNN- and CNN-based hardware (Pawlak et al., 30 Jul 2024).
  • Speech recognition: LSMs with second-order synaptic kernels reach 94.6% single-trial accuracy on Arabic-digits (Smith et al., 2017).
  • Time-series modeling and sentiment analysis: LRCUs deliver 87.0% accuracy on IMDB sentiment and 91.7% on sequential MNIST with 20–40k parameters, matching or exceeding GRU/LSTM while requiring fewer training epochs (Farsang et al., 30 Jan 2024).
  • Energy and memory metrics: On embedded tasks, LTC-SE reduces computational depth and memory by ~5–15% over LTC, matching or exceeding CNN/LSTM, with hidden-unit range optimized for hardware (Bidollahkhani et al., 2023).

8. Open Challenges and Future Directions

Active research threads include solver design for stiff liquid ODEs, algorithm–hardware co-optimization for neuromorphic and embedded targets, and deeper theoretical analysis of stability and expressivity. A plausible implication is that continued progress along these lines will position LNNs as a preferred architecture for robust, interpretable, and scalable AI in demanding dynamic environments.
