
Recurrent Adapters: Efficient Neural Adaptation

Updated 7 February 2026
  • Recurrent adapters are parameter-efficient neural modules that integrate recurrence across network layers or time for efficient model adaptation.
  • They leverage shared, recurrent computations to reuse adapter parameters, significantly reducing storage and compute costs per task.
  • Empirical studies show that strategic placement of recurrent adapters can match or outperform full fine-tuning with minimal added parameters.

Recurrent adapters are parameter-efficient neural modules designed to adapt large pre-trained models to novel downstream tasks, particularly in contexts where full fine-tuning or standard adapter approaches incur high storage, computation, or per-task parameter costs. Unlike conventional adapters that operate independently at each layer or along strictly feed-forward computation paths, recurrent adapters explicitly introduce temporal or depth-wise recurrence into the adaptation process. This architectural property enables reuse of adapter parameters across a model's depth or over sequential inputs, yielding substantial efficiency gains in both single- and multi-task transfer settings.

1. Architectural Principles of Recurrent Adapters

Recurrent adapters extend the conventional adapter paradigm in two major ways: (i) by introducing stateful, recurrent computations (over either depth or sequence), and (ii) by leveraging flexible placement strategies, including cyclic (recurrent) connectivity within the network graph.

Depth-wise recurrence is exemplified in the Hierarchical Recurrent Adapter (HRA) architecture for large speech models (Munkhdalai et al., 2024). HRA consists of a single lightweight recurrent controller, implemented via an IndRNN, whose parameters are shared across all layers of a backbone (e.g., Conformer or Transformer). For each downstream task $n$, only a tiny task-specific adapter head is introduced, reused across every model layer. The controller processes input activations and its own previous hidden state, recursively propagating adaptation signals through the model depth. The transformation at layer $\ell$ is:

$$h_\ell = \mathrm{ReLU}(W x_\ell + u \odot h_{\ell-1} + b)$$

where $W$ is an input projection, $u$ a recurrent scaling vector, and $b$ a bias. The task-specific head then maps $h_\ell$ to an output $o_\ell$, added residually to $x_\ell$ to produce $x'_\ell$ for the next layer.
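A minimal NumPy sketch of this depth-wise recurrence follows. Shapes, initialization, and the zero-initialized head (so the adapter starts as an identity map) are illustrative assumptions, not the authors' exact configuration; the key property shown is that the IndRNN-style controller and the head are shared across all layers.

```python
import numpy as np

# Sketch of an HRA-style depth-wise recurrent adapter (illustrative sizes).
rng = np.random.default_rng(0)
d_model, d_hidden, n_layers = 16, 8, 4

# Shared controller parameters (IndRNN-style): ONE set for all layers.
W = rng.normal(scale=0.1, size=(d_hidden, d_model))  # input projection
u = rng.normal(scale=0.1, size=d_hidden)             # recurrent scaling vector
b = np.zeros(d_hidden)

# Tiny task-specific head, also reused at every layer.
W_head = np.zeros((d_model, d_hidden))  # zero-init => adapter starts as a no-op

def adapt(x_layers):
    """Propagate controller state through model depth.

    x_layers: list of per-layer activations, each of shape (d_model,).
    Returns the adapted activations x'_l = x_l + o_l.
    """
    h = np.zeros(d_hidden)
    out = []
    for x in x_layers:
        h = np.maximum(0.0, W @ x + u * h + b)  # h_l = ReLU(W x_l + u * h_{l-1} + b)
        o = W_head @ h                          # task-specific head output o_l
        out.append(x + o)                       # residual connection
    return out

xs = [rng.normal(size=d_model) for _ in range(n_layers)]
adapted = adapt(xs)
```

With the zero-initialized head the adapter leaves activations unchanged at the start of training, which mirrors the common practice of initializing adapters as identity maps.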

Sequential recurrence in Recurrent Adapters for video-language tasks (Nguyen et al., 2023) is realized via an "inserted" module in each Transformer block. Here, activations are first down-projected, then processed by a simple RNN (or variants such as GRU/LSTM), before being up-projected and added residually. This design enables explicit modeling of temporal dependencies in the low-dimensional adapter space. For token (or video frame) timestep $t$:

$$h_t = \tanh(W_x x_t + W_h h_{t-1} + b)$$

This recurrent module is parameterized independently in each block, in contrast to HRA’s cross-layer sharing.
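The down-project / RNN / up-project pipeline can be sketched as below. Dimensions and the zero initialization of the recurrent and up-projection weights are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

# Sketch of a READ-style recurrent adapter over the sequence dimension.
rng = np.random.default_rng(0)
d_model, r = 16, 4  # bottleneck rank r << d_model

W_down = rng.normal(scale=0.1, size=(r, d_model))
W_x = rng.normal(scale=0.1, size=(r, r))
W_h = np.zeros((r, r))         # zero-init recurrent weights (assumed default)
b = np.zeros(r)
W_up = np.zeros((d_model, r))  # zero-init => adapter starts as identity

def read_adapter(tokens):
    """tokens: array of shape (T, d_model); returns adapted (T, d_model)."""
    h = np.zeros(r)
    out = np.empty_like(tokens)
    for t, x in enumerate(tokens):
        z = W_down @ x                       # down-projection into rank-r space
        h = np.tanh(W_x @ z + W_h @ h + b)   # h_t = tanh(W_x x_t + W_h h_{t-1} + b)
        out[t] = x + W_up @ h                # up-projection + residual
    return out

seq = rng.normal(size=(6, d_model))
adapted = read_adapter(seq)
```

Because the recurrence runs inside the rank-$r$ bottleneck, the per-step cost is small even for long sequences.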

Graph-based recurrence generalizes the adapter search space beyond sequential insertions to include inter-block and cyclic edges (i.e., recurrent adapters), as formalized in (Nowak et al., 2024). A recurrent adapter is represented as an edge $(i, j)$ with $i > j$ in the model's computation graph, effectively feeding information from deeper to shallower layers.
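One way to resolve such a cyclic edge is a two-pass forward: record the deep activation on a first pass, then inject it (through the adapter) into the shallower layer on a second pass. The layer functions and shapes below are stand-ins, and this unrolling scheme is a sketch of one plausible realization rather than the paper's exact procedure.

```python
import numpy as np

# Sketch of a recurrent (cyclic) adapter edge (i, j) with i > j: the output
# of deeper layer i is fed, through an adapter, back into shallower layer j.
rng = np.random.default_rng(0)
d = 8
layers = [lambda x, A=rng.normal(scale=0.3, size=(d, d)): np.tanh(A @ x)
          for _ in range(4)]

def adapter(x, W=np.zeros((d, d))):  # zero-init => no-op until trained
    return W @ x

def forward(x, edge=(3, 1)):
    i, j = edge
    # Pass 1: plain forward, recording every layer's activation.
    h, acts = x, []
    for f in layers:
        h = f(h)
        acts.append(h)
    # Pass 2: inject adapter(acts[i]) at the input of shallower layer j.
    h = x
    for k, f in enumerate(layers):
        if k == j:
            h = h + adapter(acts[i])
        h = f(h)
    return h

x0 = rng.normal(size=d)
y = forward(x0)
```

The extra pass is only needed when a cyclic edge is present; feed-forward adapter edges with $i < j$ evaluate in a single pass.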

2. Mathematical Formulation and Placement

The unifying mathematical abstraction for adapters is a residual bottleneck transformation:

$$A_{\phi}(x) = \alpha\,\sigma(\mathrm{LN}(x) W_{\text{down}}) W_{\text{up}}$$

with $W_{\text{down}} \in \mathbb{R}^{d_{\text{model}} \times r}$, $W_{\text{up}} \in \mathbb{R}^{r \times d_{\text{model}}}$, and $r \ll d_{\text{model}}$. For recurrent adapters:

  • Depth-wise recurrence in HRA applies the controller and head across every layer, sharing weights and propagating state, minimizing per-task overhead while maintaining expressivity (Munkhdalai et al., 2024).
  • Sequential recurrence in READ situates adapters after feed-forward layers, with RNN logic over sequence dimension, updating only adapter parameters during fine-tuning (Nguyen et al., 2023).
  • Graph-based recurrence allows adapter placements at arbitrary pairs of layers; in the language of (Nowak et al., 2024), the encoder is a directed graph, each adapter an edge, with recurrent adapters corresponding to edges $(i, j)$ with $i > j$.
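The residual bottleneck transformation $A_{\phi}(x)$ that unifies these variants can be sketched as follows. LayerNorm is simplified (no learned affine parameters) and all sizes are illustrative.

```python
import numpy as np

# Sketch of the residual bottleneck adapter A_phi(x).
rng = np.random.default_rng(0)
d_model, r, alpha = 32, 8, 1.0  # r << d_model

W_down = rng.normal(scale=0.1, size=(d_model, r))
W_up = np.zeros((r, d_model))   # zero-init so A_phi(x) = 0 at the start

def layer_norm(x, eps=1e-5):
    # Simplified LN: normalize to zero mean, unit variance (no affine).
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def adapter(x):
    # A_phi(x) = alpha * sigma(LN(x) W_down) W_up, with sigma = ReLU here.
    z = np.maximum(0.0, layer_norm(x) @ W_down)
    return alpha * (z @ W_up)

x = rng.normal(size=d_model)
out = x + adapter(x)  # adapters are applied residually
```

The choice of $\sigma$ (ReLU here) and of $\alpha$ varies across the cited works; both are treated as assumptions in this sketch.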

Empirically, random or strategic placement of even a few recurrent adapters can match or outperform traditional dense adapter arrangements. Notably, the optimality of placement is highly task-dependent, and gradient-rank-based heuristics provide effective criteria for adapter selection (Nowak et al., 2024).

3. Parameter Efficiency and Empirical Performance

Recurrent adapters achieve superior parameter efficiency compared to flat (per-layer, parallel) adapter approaches:

| Adapter Type | Single-Task Params | WER / Accuracy | Notes |
| --- | --- | --- | --- |
| Full fine-tune | 1.8–2 B | ≈5.3% WER (USM) | All parameters trainable |
| LoRA | 7.9 M | 6.4% WER | $r = 8$ |
| Residual Adapter | 6.4 M | 6.2% WER | bottleneck = 64 |
| HRA (linear head) | 0.814 M | 6.2% WER | 8× fewer params than LoRA |
| HRA (FFN head) | 13.6 M | 5.2% WER | Outperforms full fine-tune |
| READ (PVLA) | ≤0.16 M | 76.1–83.4 mAP | Outperforms full fine-tune (video-LM) |

In multi-task settings, HRA yields sub-linear total parameter growth due to global controller sharing and layer-wise parameter reuse. For 128 adaptation tasks, total parameters with HRA (FFN head) remain only 1–4% of a full fine-tune, with word error rates within 0.6% of the baseline (Munkhdalai et al., 2024).
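The sub-linear growth from controller sharing can be illustrated with back-of-the-envelope arithmetic. The parameter counts below are hypothetical, chosen only to make the scaling visible; they are not the paper's exact sizes.

```python
# Illustrative arithmetic: multi-task parameter growth with a shared
# controller (HRA-style) vs. conventional per-layer, per-task adapters.
# All counts are assumptions for illustration.
n_tasks, n_layers = 128, 24
controller = 2_000_000        # shared once across all tasks and layers
head = 500_000                # one small head per task, reused at every layer
per_layer_adapter = 300_000   # a conventional adapter, per layer and per task

shared_total = controller + n_tasks * head                # grows with tasks only
per_layer_total = n_tasks * n_layers * per_layer_adapter  # grows with tasks x layers
```

Under these assumed sizes, sharing keeps the total at tens of millions of parameters while the flat arrangement approaches a billion, which is the qualitative effect reported for HRA.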

In video-language adaptation, READ outperforms both classic adapters and full fine-tuning—despite tuning <1.2% of weights—across multiple benchmarks (e.g., 83.39 mAP on TVSum, +4–9 points over LoRA/Adapter baselines) (Nguyen et al., 2023).

4. Design Considerations and Theoretical Rationale

The core operational insight is that recurrence across depth (layers) or time (tokens/frames) allows the same compact parameter set to adapt contextual features at multiple points, avoiding over-parameterization without sacrificing model capacity. In HRA, the hierarchy further decouples shared representation learning (the controller) from task-specific specialization (the heads); adding a new task requires only a small, layer-shared head, not a full per-layer insertion.

Graph-based studies reveal that an adapter's effectiveness depends not on sheer quantity but on strategic placement: empirically, in 6 of 7 transfer benchmarks the single best adapter is recurrent (a high-to-low layer connection), with accuracy near or exceeding that of 24 parallel adapters (Nowak et al., 2024). Gradient-rank metrics correlate strongly with effectiveness, suggesting that adapter placement should maximize the rank of the resulting gradient for the downstream task.
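A gradient-rank selection heuristic might be sketched as follows: score each candidate placement by the numerical rank of the gradient matrix accumulated at that location, and place the adapter where the rank is highest. The gradient matrices here are synthetic stand-ins for statistics gathered during backpropagation.

```python
import numpy as np

# Sketch of gradient-rank-based adapter placement selection.
rng = np.random.default_rng(0)

def grad_rank(G, tol=1e-6):
    """Numerical rank of a gradient matrix of shape (batch, features)."""
    return np.linalg.matrix_rank(G, tol=tol)

# Candidate placements (i, j), each with a synthetic gradient matrix.
candidates = {
    (11, 0): rng.normal(size=(64, 16)),                            # high rank
    (5, 5):  rng.normal(size=(64, 2)) @ rng.normal(size=(2, 16)),  # rank <= 2
}

# Pick the placement whose gradient has the highest numerical rank.
best = max(candidates, key=lambda edge: grad_rank(candidates[edge]))
```

In this toy setup the recurrent high-to-low edge wins because its gradient is (nearly) full rank, matching the intuition that placement should maximize gradient rank.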

5. Specialization to Task Structure and Sequence Modeling

Recurrent adapters are advantageous in adaptation settings where the underlying data has strong temporal or sequential dependencies (as in speech, low-resource video-language, or long-context NLP). For instance, the READ recurrent step explicitly models temporal relationships among frames/words—improving both statistical and human-perceived task performance (Nguyen et al., 2023).

For multi-task scenarios, the hierarchical structure in HRA allows practitioners to freeze global layers and tune minimal, task-specific heads, enabling scalable deployment across hundreds of downstream objectives with negligible memory or compute impact (Munkhdalai et al., 2024).

A notable technique in (Nguyen et al., 2023) is the Partial Video–Language Alignment (PVLA) objective, which regularizes adapters toward preserving cross-modal information relevant to task performance by optimizing a partial optimal transport loss at every block. This encourages the adapter to focus only on well-aligned (semantically similar) video–text pairs, avoiding "bottleneck pollution" from irrelevant features.

6. Practical Guidelines for Implementation

  • Adapter Rank Selection: For large-scale vision, $r \approx 128$; for few-shot VTAB tasks, $r \approx 8$ suffices (Nowak et al., 2024).
  • Placement: A single recurrent adapter placed according to gradient-rank heuristics (highest-to-lowest layer pairs) often outperforms uniformly distributed adapters (Nowak et al., 2024).
  • Parameter Sharing: Layer-wise sharing is critical for parameter minimization (as in HRA); per-block independence allows more flexibility but increases storage (as in READ).
  • Training Efficiency: One extra forward pass is needed during training for recurrent (cyclic) adapters but not at inference. Freeze backbone weights and fine-tune only adapters for maximal speed and stability. For streaming or privacy-sensitive domains, pre-train controllers out-of-domain and fine-tune heads only on-target data (Munkhdalai et al., 2024).
  • Adapter Initialization: Use Kaiming normal for projections; zero initialization for recurrent weights and biases is standard (Nguyen et al., 2023).
  • PVLA Use: For sequence or cross-modal tasks, partial optimal transport alignment (with $\lambda \approx 1$) stabilizes adaptation and enhances selectivity.
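The freeze-backbone recipe from the guidelines above reduces, in sketch form, to masking parameter updates by name: only adapter parameters receive gradient steps. Names, sizes, and the plain SGD rule are illustrative.

```python
import numpy as np

# Sketch: freeze the backbone, fine-tune only adapter parameters.
rng = np.random.default_rng(0)
params = {
    "backbone.W": rng.normal(size=(8, 8)),
    "adapter.W_down": rng.normal(scale=0.1, size=(8, 4)),
    "adapter.W_up": np.zeros((4, 8)),
}
trainable = {name for name in params if name.startswith("adapter.")}

def sgd_step(grads, lr=0.1):
    for name, g in grads.items():
        if name in trainable:   # backbone entries are skipped (frozen)
            params[name] -= lr * g

before = params["backbone.W"].copy()
grads = {k: np.ones_like(v) for k, v in params.items()}  # dummy gradients
sgd_step(grads)
```

In a framework like PyTorch the same effect is typically achieved by setting `requires_grad = False` on backbone parameters and passing only adapter parameters to the optimizer.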

7. Broader Implications and Future Directions

The integration of recurrence and strategic parameter sharing in adapters enables robust, scalable, and memory-efficient adaptation for large language, vision, and speech models. The principled use of recurrent adapters and the adoption of graph-based placement strategies mark a paradigm shift: performance can be maximized not by brute-force parameter increase but by architectural and placement intelligence, combined with theoretically motivated selection metrics such as gradient rank (Nowak et al., 2024). A plausible implication is that future parameter-efficient methods may converge on architectures that combine hierarchical sharing, recurrence (in time and depth), and information-theoretic placement to minimize adaptation cost even further.

The continued development of recurrent adapter techniques is expected to impact large-scale multi-task learning, on-device adaptation, privacy-sensitive model deployment, and low-resource task transfer scenarios (Munkhdalai et al., 2024, Nguyen et al., 2023, Nowak et al., 2024).
