RLCNet: Neural Dynamics & Sensor Calibration
- RLCNet is a dual research topic encompassing both biologically-inspired recurrent networks with liquid resistance and capacitance, and an autonomous vehicle sensor calibration system.
- Liquid resistance/capacitance networks feature adaptive time gating, stability via bounded non-linearities, and efficient parallel sequence computation.
- The sensor calibration branch uses ResNet-based feature extraction, message-passing networks, and cascaded refinements for real-time multimodal sensor alignment.
RLCNet refers to two distinct but influential research directions in contemporary machine learning literature. The first denotes the class of Liquid-Resistance Liquid-Capacitance neural networks (LRCs, or "RLCNet"), a family of biologically plausible recurrent neural architectures inspired by electrical equivalent circuit modeling and continuous-time dynamics (Farsang et al., 30 Jan 2024, Farsang et al., 27 May 2025). The second RLCNet, introduced in the context of autonomous vehicle sensor calibration, denotes an end-to-end deep learning framework for simultaneous online calibration of LiDAR, RADAR, and camera (Cholakkal et al., 9 Dec 2025). Both strands share the acronym but address fundamentally different technical domains. This entry provides a comprehensive survey of both lines, with explicit delineation of technical details, methodologies, theoretical properties, and empirical evaluations.
1. Liquid-Resistance Liquid-Capacitance Networks (LRCNet): Foundations and Mathematical Model
LRCNet (in the LRC context) generalizes classical electrical equivalent circuits (EECs) by introducing state- and input-dependent resistance and capacitance—termed liquid resistance and liquid capacitance, respectively—into the neuron model. The continuous-time membrane potential dynamics for neuron $i$ are given by

$$c_i(y)\,\dot{x}_i(t) = \sum_j \sigma\!\left(w_{ij}\,y + b_{ij}\right)\bigl(a_{ij} - x_i(t)\bigr),$$

where the liquid capacitance $c_i(y)$ is a bounded, $\tanh$-saturated function of the same synaptic drive, $y$ denotes the concatenated pre- and post-synaptic states and external input, $\sigma$ is the sigmoid, $\tanh$ the saturating nonlinearity, and $w_{ij}$, $b_{ij}$, $a_{ij}$ are learned synaptic parameters. The explicit inclusion of bounded, state-driven conductances avoids runaway dynamics and imparts a direct correspondence between network function and biophysical substrates (Farsang et al., 30 Jan 2024).
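A minimal sketch of one layer's continuous-time right-hand side under these definitions; the exact functional form and all parameter names (`W`, `b`, `Wc`, `bc`, `a`) are illustrative assumptions rather than the paper's precise formulation:

```python
import jax
import jax.numpy as jnp

def lrc_rhs(x, u, params):
    """Schematic LRC right-hand side for one layer (names hypothetical).

    x: (D,) membrane potentials; u: (U,) external input.
    """
    y = jnp.concatenate([x, u])                        # pre/post states + input
    g = jax.nn.sigmoid(params["W"] @ y + params["b"])  # liquid conductance, bounded in (0, 1)
    # Liquid capacitance: tanh-saturated, kept positive and bounded in [1, 2].
    c = 1.5 + 0.5 * jnp.tanh(params["Wc"] @ y + params["bc"])
    # Bounded conductance drives the state toward learned reversal potentials a.
    return g * (params["a"] - x) / c
```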
In the “LrcSSM” discretization, a diagonal state-transition matrix enables parallel sequence computation. Each neuron’s update is governed by

$$x_t = \lambda_t \odot x_{t-1} + b_t,$$

with the per-timestep transition $\lambda_t$ and drive $b_t$ computed from the input; the transition is diagonal by design. Sequence computation can be performed in parallel via a single prefix-scan with $\mathcal{O}(TD)$ cost and $\mathcal{O}(\log T)$ sequential depth, where $T$ is the sequence length and $D$ the state dimension.
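Because each step is an affine map $x \mapsto \lambda_t \odot x + b_t$, and affine maps compose associatively, the whole sequence can be solved with a parallel prefix-scan. A minimal JAX sketch, assuming $\lambda_t$ and $b_t$ have already been computed from the input (all names illustrative):

```python
import jax
import jax.numpy as jnp

def diagonal_recurrence_scan(lam, b):
    """Solve x_t = lam_t * x_{t-1} + b_t (with x_0 = 0) for all t in parallel.

    lam, b: (T, D) arrays. The combine operator below is associative, so
    jax.lax.associative_scan runs in O(log T) sequential depth, O(T*D) work.
    """
    def combine(left, right):
        lam_l, b_l = left
        lam_r, b_r = right
        # Compose the affine maps: x -> lam_r * (lam_l * x + b_l) + b_r.
        return lam_r * lam_l, lam_r * b_l + b_r

    _, x = jax.lax.associative_scan(combine, (lam, b))
    return x

# Toy usage: T=6 steps, D=3 state dimensions.
lam = jax.nn.sigmoid(jax.random.normal(jax.random.PRNGKey(0), (6, 3)))
b = 0.1 * jax.random.normal(jax.random.PRNGKey(1), (6, 3))
print(diagonal_recurrence_scan(lam, b).shape)  # (6, 3)
```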
2. Discretized Update: RLC Units and Time-Gating
Discretization of the LRC ODE via the explicit Euler scheme yields the RLC Unit (RLCU), a gated RNN-like module governed by

$$h_t = (1 - \Delta_t) \odot h_{t-1} + \Delta_t \odot \tilde{h}_t,$$

where $\Delta_t \in (0,1)$ is a learned, per-neuron, per-timestep time-gate and $\tilde{h}_t$ the candidate state. Two parameterizations of the time-gate—an asymmetric sigmoid and a symmetric difference-of-sigmoids—are found effective. The update decomposes into a forget-gate and an update-gate, analogous to gated RNNs, with the key distinction that the time-gate is data-adaptive and interpretable as a variable step size. All key parameters are learned directly from data (Farsang et al., 30 Jan 2024).
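A minimal sketch of one RLCU step under the asymmetric-sigmoid time-gate, assuming the gated-RNN decomposition described above (all parameter names hypothetical):

```python
import jax
import jax.numpy as jnp

def rlcu_step(h, u, params):
    """One explicit-Euler RLC-unit step (schematic; names hypothetical).

    The data-adaptive time-gate dt in (0, 1) acts as a variable step size:
    h_t = (1 - dt) * h_{t-1} + dt * h_candidate, i.e. forget/update gating.
    """
    y = jnp.concatenate([h, u])
    # Asymmetric-sigmoid parameterization of the time-gate.
    dt = jax.nn.sigmoid(params["Wt"] @ y + params["bt"])
    # The symmetric alternative is a difference of sigmoids, e.g.
    # dt = sigmoid(z + s) - sigmoid(z - s) for a learned shift s.
    h_cand = jnp.tanh(params["Wh"] @ y + params["bh"])  # candidate state
    return (1.0 - dt) * h + dt * h_cand
```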
3. Theoretical Properties
LRCNet demonstrates several theoretically grounded advantages:
- Oscillation Damping: The bounded saturation of the liquid capacitance ensures high-frequency input does not induce unbounded membrane fluctuations.
- Stability: The ODE’s structure (with bounded conductances and adaptive capacitance) yields global Lyapunov stability.
- Gradient Stability: In the LrcSSM diagonalized variant, forward and backward signal contraction is governed by the maximal contraction factor $\rho < 1$ of the diagonal transition. For a loss on the terminal state $x_T$, the gradient norm is suppressed by powers of $\rho$, preventing gradient explosion and vanishing (Farsang et al., 27 May 2025); a numerical illustration follows this list.
- Generalization: Saturated conductances provide soft clipping, reduce the Lipschitz constant of the discrete update, and improve robustness to input noise.
- Biophysical Plausibility: All network elements directly correspond to interpretable biophysical entities—conductances, membrane capacitance, time constants—enabling transparent mapping to EEC theory.
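To make the gradient-stability claim concrete, the following toy sketch differentiates a terminal loss through a worst-case diagonal recurrence with contraction factor $\rho$ and checks that the gradient is suppressed by powers of $\rho$ (an illustration under simplifying assumptions, not the paper's experiment):

```python
import jax
import jax.numpy as jnp

rho, T, D = 0.9, 50, 4
lam = rho * jnp.ones((T, D))    # worst-case diagonal transition, |lam| = rho < 1

def terminal_loss(x0):
    x = x0
    for t in range(T):
        x = lam[t] * x          # drive b_t omitted; it does not affect the Jacobian
    return jnp.sum(x ** 2)

g = jax.grad(terminal_loss)(jnp.ones(D))
# x_T = rho**T * x_0, so the gradient is 2 * rho**(2T) * x_0: the chain rule
# contributes one factor of rho**T and the quadratic loss another.
print(float(jnp.linalg.norm(g)), float(2 * jnp.sqrt(D) * rho ** (2 * T)))
```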
4. Scaling, Computational Properties, and Empirical Performance
By enforcing diagonal state transitions, LrcSSM admits:
- $\mathcal{O}(TD)$ runtime and memory per layer.
- $\mathcal{O}(\log T)$ sequential depth via associativity of the prefix-scan.
- Parameter efficiency: parameter count scales linearly with the number of layers, without any $T$-scaling.
- Empirically, LrcSSM matches or exceeds state-of-the-art linear and nonlinear state-space models (LRU, S5, Mamba) on long-range time-series classification (UEA-MTSCA) benchmarks, with an average accuracy of 66.3% ± 18.6%, close to the leading LinOSS-IM (67.8% ± 21.6%). LRCNet can outperform Transformer baselines at equal compute due to better scaling (Farsang et al., 27 May 2025).
On time-series and sequence benchmarks, RLCNet (with both time-gate variants) consistently matches or outperforms LSTM, GRU, and MGU models in accuracy, convergence speed, and stability. On tasks such as person-activity classification, IMDB sentiment, permuted sequential MNIST, and autonomous-driving lane-keeping, RLCNet yields higher or equal accuracy with faster convergence (often ~100 epochs vs. 150+ for LSTM/GRU on person-activity classification) and fewer oscillatory artifacts (Farsang et al., 30 Jan 2024).
5. RLCNet for Multi-Sensor Calibration in Autonomous Vehicles
A separate thread under the RLCNet acronym focuses on the extrinsic calibration of multimodal sensors in autonomous vehicles (Cholakkal et al., 9 Dec 2025). Here, RLCNet denotes a deep learning architecture for online estimation of the rigid-body transformations ($SE(3)$ extrinsics) between LiDAR, RADAR, and camera.
Key architectural elements include:
- Input Representation: RGB images (camera), projected inverse-depth maps (LiDAR, RADAR), and bird’s-eye-view (BEV) height maps.
- Feature Extraction and Correlation: ResNet-18 backbones for each modality, local correlation cost volumes for sensor pairs, projected and aggregated by MLPs.
- Feature Sharing: Features are fused by either direct concatenation or learnable soft-masks (soft-masks yield improved accuracy).
- Prediction Head: Parallel MLPs generate translation ($\mathbb{R}^3$ vector) and rotation (unit quaternion) estimates for each sensor pair.
- Message-Passing Network (MPN): Iterative update of estimated transformation matrices to enforce loop-closure consistency over the sensor triplet (see the sketch after this list).
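The loop-closure idea can be sketched as follows, assuming the extrinsics are represented as 4×4 homogeneous transforms (a representational assumption; the paper's MPN update itself is not reproduced here):

```python
import jax.numpy as jnp

def loop_closure_residual(T_cam_lidar, T_lidar_radar, T_radar_cam):
    """Residual of composing the three pairwise extrinsics around the loop.

    Convention (an assumption): T_b_a is the 4x4 homogeneous transform
    mapping points from frame a to frame b, so the cycle
    lidar -> cam -> radar -> lidar should compose to the identity.
    """
    loop = T_lidar_radar @ T_radar_cam @ T_cam_lidar
    return jnp.linalg.norm(loop - jnp.eye(4))
```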
The framework is trained with a composite loss: a pose loss (SmoothL1 on translation and quaternion angular distance on rotation), a point-cloud alignment loss, a loop-closure consistency loss, and a penalty term that preserves the accuracy of intermediate cascade stages. Five-stage cascaded refinement progressively handles smaller miscalibration ranges (rotational 10° → 1°, translational 50 cm → 5 cm across stages).
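The pose-loss terms can be made concrete as below; only the SmoothL1-on-translation and quaternion-angular-distance-on-rotation structure comes from the text, while the weighting `w_rot` and the exact reductions are assumptions:

```python
import jax.numpy as jnp

def smooth_l1(pred, target, beta=1.0):
    """SmoothL1 (Huber-style) penalty on the translation vector."""
    d = jnp.abs(pred - target)
    return jnp.sum(jnp.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta))

def quat_angular_distance(q_pred, q_gt):
    """Angle of the relative rotation between two unit quaternions.

    The abs() accounts for the q ~ -q double cover of SO(3).
    """
    dot = jnp.clip(jnp.abs(jnp.dot(q_pred, q_gt)), 0.0, 1.0)
    return 2.0 * jnp.arccos(dot)

def pose_loss(t_pred, t_gt, q_pred, q_gt, w_rot=1.0):
    # w_rot balances the two terms; its value here is an assumption.
    return smooth_l1(t_pred, t_gt) + w_rot * quat_angular_distance(q_pred, q_gt)
```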
6. Online Calibration, Outlier Rejection, and Real-Time Deployment
For robust online operation in dynamic environments, the multi-sensor RLCNet incorporates:
- Weighted Moving Average: Exponential smoothing of quaternion/translation outputs over a 12-frame window.
- Outlier Rejection: Only predictions with consecutive frame-wise change below calibrated angular (0.05°) and translational (1 cm) thresholds are admitted to the smoothing buffer.
- Calibration Update Policy: If the aggregate error exceeds a centimeter-level threshold for any sensor pair, the running calibration is updated and the smoothing window is reset. Loop-closure logic localizes the drifted sensor (see the sketch after this list).
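A schematic of the smoothing buffer and admission test: the window length and thresholds come from the text, while the buffering logic, the plain (rather than exponentially weighted) average, and the quaternion averaging are simplifying assumptions. `quat_angular_distance` is reused from the pose-loss sketch above.

```python
import jax.numpy as jnp
from collections import deque

class CalibrationSmoother:
    """Schematic online smoothing with outlier gating (names hypothetical)."""

    def __init__(self, window=12, max_drot_deg=0.05, max_dtrans_m=0.01):
        self.buf = deque(maxlen=window)
        self.last = None
        self.max_drot_deg = max_drot_deg
        self.max_dtrans_m = max_dtrans_m

    def update(self, q, t):
        # Admit a prediction only if its frame-to-frame change stays below
        # the calibrated angular and translational thresholds.
        if self.last is None:
            self.buf.append((q, t))
        else:
            q_prev, t_prev = self.last
            drot = jnp.degrees(quat_angular_distance(q, q_prev))
            dtrans = jnp.linalg.norm(t - t_prev)
            if drot < self.max_drot_deg and dtrans < self.max_dtrans_m:
                self.buf.append((q, t))
        self.last = (q, t)
        # Moving average over the admitted window; quaternions are averaged
        # and renormalized, a common small-spread approximation.
        qs = jnp.stack([q_ for q_, _ in self.buf])
        ts = jnp.stack([t_ for _, t_ in self.buf])
        q_avg = qs.mean(axis=0)
        return q_avg / jnp.linalg.norm(q_avg), ts.mean(axis=0)
```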
This approach enables detection of sensor drift (≥0.08° or ≥1.6 cm within two frames), supporting real-time deployment at 10–20 Hz (54 ms per five-stage cycle), validated on the View-of-Delft (VoD) dataset. It achieves rotation errors of 0.220°–0.240° and translational errors below 1.5 cm for all sensor pairs (Cholakkal et al., 9 Dec 2025).
7. Comparative Analysis and Empirical Results
RLCNet sequence models are competitive with, or superior to, established long-range sequence learners. The table below summarizes key per-layer computational properties from (Farsang et al., 27 May 2025):
| Architecture | FLOPs per Layer | Memory | Sequential Depth |
|---|---|---|---|
| Mamba | $\mathcal{O}(TD)$ | $\mathcal{O}(TD)$ | $\mathcal{O}(\log T)$ |
| Liquid-S4 | $\mathcal{O}(TD \log T)$ | $\mathcal{O}(TD)$ | $\mathcal{O}(\log T)$ |
| S4/Hyena | $\mathcal{O}(TD \log T)$ | $\mathcal{O}(TD)$ | $\mathcal{O}(\log T)$ |
| Transformer | $\mathcal{O}(T^2 D)$ | $\mathcal{O}(T^2)$ | $\mathcal{O}(1)$ |
| LrcSSM | $\mathcal{O}(TD)$ | $\mathcal{O}(TD)$ | $\mathcal{O}(\log T)$ |
In the calibration setting, RLCNet demonstrates substantial improvements over baseline methods. For example, in the rotational-only case on the nuScenes benchmark, RLCNet reduces mean angular error and achieves inference runtimes of 54 ms per update versus 2 s for a classic baseline (Cholakkal et al., 9 Dec 2025).
References
- Liquid-Resistance Liquid-Capacitance Networks and their empirical evaluation: (Farsang et al., 30 Jan 2024).
- Scaling Up LRLC Networks for Efficient Sequence Modeling: (Farsang et al., 27 May 2025).
- RLCNet for Multi-Sensor Online Calibration in Autonomous Driving: (Cholakkal et al., 9 Dec 2025).