
Bidirectional Mamba Backbone

Updated 22 November 2025
  • Bidirectional Mamba Backbone is a neural architecture that extends unidirectional Mamba blocks into a two-way selective scan framework for efficient modeling.
  • It employs dual directional state-space model pathways with directional convolutions and gating to capture global sequence and spatial dependencies.
  • Empirical evaluations show that the approach achieves competitive accuracy and efficiency across vision, language, and multimodal domains compared to transformer baselines.

A Bidirectional Mamba Backbone is a neural architecture that generalizes the unidirectional state-space Mamba block into a two-way, parallel selective scan framework. This design enables efficient linear-time modeling of both forward and backward dependencies in sequential or spatial data while maintaining or improving scalability, expressiveness, and hardware efficiency relative to transformer-based self-attention. Initially developed to address the representational limitations imposed by Mamba's original causal (unidirectional) recurrence, the bidirectional Mamba backbone has become a central component in state-of-the-art vision, sequence, and multimodal learning systems (Ibrahim et al., 11 Feb 2025).

1. Algorithmic Structure of the Bidirectional Mamba Block

A prototypical bidirectional Mamba block processes a 3D token tensor $T_{l-1} \in \mathbb{R}^{B \times M \times D}$, where $B$ is the batch size, $M$ the sequence (or patch) length, and $D$ the feature dimension. The main algorithmic steps are as follows:

  1. Layer Normalization: Normalize the input,

$$T'_{l-1} = \mathrm{Norm}(T_{l-1})$$

  2. Linear Projections: Project tokens into latent spaces for the SSM and gating,

$$x = \mathrm{Linear}^x(T'_{l-1}),\quad z = \mathrm{Linear}^z(T'_{l-1})$$

  3. Directional SSM Pathways:
    • For each direction (forward, backward):
      • Apply a direction-aware 1D conv + SiLU activation,

$$x_o' = \mathrm{SiLU}(\mathrm{Conv1d}_o(x))$$

      • Produce SSM parameters ($B_o, C_o$), the step size $\Delta_o = \log[1+\exp(\cdot)]$ (softplus), and the per-timestep SSM matrices:

$$\bar{A}_o = \Delta_o \otimes \mathrm{Param}^A_o,\quad \bar{B}_o = \Delta_o \otimes B_o$$

      • Run a unidirectional SSM filter per pathway,

$$y_o = \mathrm{SSM}(\bar{A}_o, \bar{B}_o, C_o)(x_o')$$

  4. Directional Output Gating: Both $y_{\mathrm{fwd}}$ and $y_{\mathrm{bwd}}$ are gated by $\mathrm{SiLU}(z)$,

$$y_{\mathrm{fwd}} \leftarrow y_{\mathrm{fwd}} \odot \mathrm{SiLU}(z),\quad y_{\mathrm{bwd}} \leftarrow y_{\mathrm{bwd}} \odot \mathrm{SiLU}(z)$$

  5. Fusion and Output Projection: The outputs are added, projected back to the feature dimension $D$, and combined with a residual connection,

$$T_{l} = \mathrm{Linear}^T(y_{\mathrm{fwd}} + y_{\mathrm{bwd}}) + T_{l-1}$$

This block is recurrently stacked to build the full backbone (Ibrahim et al., 11 Feb 2025).
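
The following is a minimal PyTorch sketch of this block. It is an illustration under stated assumptions, not a reference implementation: the scan is a naive Python loop rather than the fused hardware-aware selective-scan kernel, a standard ZOH-style exponential discretization stands in for the survey's $\Delta_o \otimes \mathrm{Param}^A_o$ shorthand, and all class and argument names (`BidirectionalMambaBlock`, `d_inner`, `d_state`, `d_conv`) are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BidirectionalMambaBlock(nn.Module):
    """Minimal sketch of the bidirectional block above (naive scan, not the fused kernel)."""

    def __init__(self, d_model: int, d_inner: int, d_state: int = 16, d_conv: int = 4):
        super().__init__()
        self.d_state = d_state
        self.norm = nn.LayerNorm(d_model)
        self.proj_x = nn.Linear(d_model, d_inner)        # Linear^x
        self.proj_z = nn.Linear(d_model, d_inner)        # Linear^z (gate path)
        self.conv = nn.ModuleDict()                      # direction-specific Conv1d_o
        self.bc_proj = nn.ModuleDict()                   # produces B_o, C_o
        self.dt_proj = nn.ModuleDict()                   # produces Delta_o (via softplus)
        self.A_log = nn.ParameterDict()                  # Param^A_o
        for o in ("fwd", "bwd"):
            self.conv[o] = nn.Conv1d(d_inner, d_inner, d_conv,
                                     padding=d_conv - 1, groups=d_inner)
            self.bc_proj[o] = nn.Linear(d_inner, 2 * d_state)
            self.dt_proj[o] = nn.Linear(d_inner, d_inner)
            self.A_log[o] = nn.Parameter(
                torch.log(torch.arange(1, d_state + 1, dtype=torch.float32))
                .repeat(d_inner, 1))                     # shape (E, N)
        self.out_proj = nn.Linear(d_inner, d_model)      # Linear^T

    def _directional_scan(self, x: torch.Tensor, o: str) -> torch.Tensor:
        B, M, E = x.shape
        # Direction-aware causal conv + SiLU: x'_o = SiLU(Conv1d_o(x))
        xc = self.conv[o](x.transpose(1, 2))[..., :M].transpose(1, 2)
        xc = F.silu(xc)
        B_o, C_o = torch.split(self.bc_proj[o](xc), self.d_state, dim=-1)  # (B, M, N) each
        dt = F.softplus(self.dt_proj[o](xc))                               # Delta_o: (B, M, E)
        A = -torch.exp(self.A_log[o])                                      # (E, N), negative for stability
        # ZOH-style discretization; the survey abbreviates this as Delta_o (x) Param^A_o, Delta_o (x) B_o
        A_bar = torch.exp(dt.unsqueeze(-1) * A)                            # (B, M, E, N)
        B_bar = dt.unsqueeze(-1) * B_o.unsqueeze(2)                        # (B, M, E, N)
        h = x.new_zeros(B, E, self.d_state)
        ys = [None] * M
        order = range(M) if o == "fwd" else range(M - 1, -1, -1)
        for k in order:                                                    # h_k = A_bar h_{k-1} + B_bar u_k
            h = A_bar[:, k] * h + B_bar[:, k] * xc[:, k].unsqueeze(-1)
            ys[k] = (h * C_o[:, k].unsqueeze(1)).sum(-1)                   # y_k = C_k h_k
        return torch.stack(ys, dim=1)                                      # (B, M, E)

    def forward(self, T: torch.Tensor) -> torch.Tensor:
        Tn = self.norm(T)                                                  # Norm(T_{l-1})
        x, z = self.proj_x(Tn), self.proj_z(Tn)
        gate = F.silu(z)
        y = self._directional_scan(x, "fwd") * gate + self._directional_scan(x, "bwd") * gate
        return self.out_proj(y) + T                                        # Linear^T(y_fwd + y_bwd) + T_{l-1}
```

Given an input of shape $B \times M \times D$, the forward pass returns a tensor of the same shape, matching the residual update in step 5.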

2. Bidirectional State Space Model Formalism

Each directional SSM models the input sequence via a selective, input-dependent state transition:

$$\dot{x}(t) = A\,x(t) + B\,u(t),\qquad y(t) = C\,x(t)$$

For discretized sequence positions:

$$h_k = \overline{A}_k\,h_{k-1} + \overline{B}_k\,u_k,\qquad y_k = C_k\,h_k$$

where $\overline{A}, \overline{B}, C$ are dynamically parameterized by small neural networks, enabling adaptive, hardware-friendly filtering. Bidirectional blocks independently apply this procedure in both time directions, forward ($k = 1, \ldots, M$) and backward ($k = M, \ldots, 1$), using direction-specific convolutions, SSM parameters, and gating (Ibrahim et al., 11 Feb 2025).
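
As a concrete illustration of the discretized recurrence, the toy NumPy sketch below runs the same fixed recurrence once left-to-right and once right-to-left over a short sequence; the matrices $\overline{A}, \overline{B}, C$ are constants here purely for readability, whereas in Mamba they are input-dependent at every step $k$.

```python
import numpy as np

# Toy discretized SSM recurrence, run in both scan directions over a length-M sequence.
M, N = 6, 2
u = np.array([1.0, 0.0, -1.0, 0.5, 2.0, 0.0])   # input sequence u_k
A_bar = np.diag([0.9, 0.5])                      # state transition (fixed here, selective in Mamba)
B_bar = np.array([1.0, 0.3])                     # input matrix
C = np.array([0.7, -0.2])                        # readout

def scan(order):
    h, ys = np.zeros(N), {}
    for k in order:
        h = A_bar @ h + B_bar * u[k]             # h_k = A_bar h_{k-1} + B_bar u_k
        ys[k] = C @ h                            # y_k = C h_k
    return np.array([ys[k] for k in range(M)])

y_fwd = scan(range(M))                           # k = 1, ..., M
y_bwd = scan(range(M - 1, -1, -1))               # k = M, ..., 1
print(y_fwd, y_bwd)
```

Comparing `y_fwd` and `y_bwd` at any position shows that the forward pass aggregates only left context while the backward pass aggregates only right context, which is exactly the asymmetry the bidirectional block removes.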

3. Fusion Mechanisms and Gating

A central aspect of bidirectional Mamba is the gating and fusion of the forward and backward SSM streams. Each output is modulated by a shared gate vector (often derived from the input or from an additional projection). In the canonical construction:

$$y_{\mathrm{mix}} = y_{\mathrm{fwd}} \odot \mathrm{SiLU}(z) + y_{\mathrm{bwd}} \odot \mathrm{SiLU}(z)$$

This joint stream is then post-processed, typically by a linear projection plus a residual shortcut. While the main survey (Ibrahim et al., 11 Feb 2025) does not detail cross-scan interactions beyond this dual gating, other variants in the literature may employ learnable gates, dense attention fusion, or more sophisticated selection mechanisms for context-dependent prioritization between directions.
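
A tensor-level sketch of this fusion is shown below; the second variant with a learnable scalar gate `alpha` is a hypothetical alternative included only to illustrate the design space, not part of the canonical construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

B, M, E = 2, 16, 64
y_fwd, y_bwd, z = (torch.randn(B, M, E) for _ in range(3))

# Canonical dual gating: both directional streams share the same SiLU gate derived from z.
y_mix = y_fwd * F.silu(z) + y_bwd * F.silu(z)

# Hypothetical variant: a learnable scalar gate weighting the two directions (untrained here).
alpha = torch.sigmoid(nn.Parameter(torch.zeros(1)))
y_mix_variant = alpha * y_fwd * F.silu(z) + (1 - alpha) * y_bwd * F.silu(z)
```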

4. Empirical Impact and Architectural Advantages

Bidirectional Mamba blocks deliver several critical benefits over unidirectional SSMs or quadratic-complexity self-attention:

  • Global context restoration: By fusing causal and anti-causal information, each position receives global sequence context, crucial for tasks such as dense prediction, localization, and semantic understanding.
  • Linear complexity: The total computational and memory cost per block scales as $O(B\,M\,(E\,N + \text{conv} + D))$, i.e., linearly in the sequence or image size $M$, enabling practical deployment at resolutions (or sequence lengths) where transformers become infeasible (see the rough cost comparison after this list).
  • Empirical superiority: Benchmarks across vision, language, and multimodal domains demonstrate that bidirectional Mamba backbones achieve accuracy, mAP, or mIoU competitive with or outperforming strong transformer baselines, while using a fraction of the computational resources (Ibrahim et al., 11 Feb 2025).
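
The back-of-the-envelope comparison below illustrates the two scaling regimes; the dimensions and per-step operation counts are assumptions chosen for illustration, not measurements from any benchmark.

```python
# Rough operation counts (illustrative, not measured):
#   bidirectional SSM block ~ 2 * B*M*(E*N + k*E + E*D)   -> linear in M
#   self-attention          ~ B*M*M*D                      -> quadratic in M
B, D, E, N, k = 1, 768, 1536, 16, 4   # assumed batch, model dim, inner dim, state dim, conv width

for M in (196, 1024, 4096, 16384):
    ssm_ops = 2 * B * M * (E * N + k * E + E * D)
    attn_ops = B * M * M * D
    print(f"M={M:6d}  ssm~{ssm_ops:.2e}  attention~{attn_ops:.2e}  ratio={attn_ops / ssm_ops:,.1f}")
```

At small $M$ the two regimes are comparable; the gap then grows linearly with $M$, which is the practical argument for SSM backbones at high resolutions and long sequences.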

5. Architectural Variants and Open Design Axes

Variants of the bidirectional Mamba backbone have appeared under numerous model labels (Vim, VMamba, VideoMamba, and others). All share the essential dual-scan selective SSM block with gating and residual integration, but may instantiate distinct choices for the following (illustrated schematically in the configuration sketch after this list):

  • Direction-aware preprocessing: Different convolutional kernels, position embeddings, or normalization per scan direction.
  • Depth and hierarchy: Number of bidirectional layers, interleaving with downsampling, or hybrid stacking with attention blocks.
  • Parameter sharing: Whether the forward and backward SSMs share parameters or maintain distinct independent kernels.
  • Selective scanning and cross-scan mechanisms: While the baseline implementation uses only dual-gated fusion, extended architectures may experiment with cross-attention between directional streams or additional selective gating, but these are not specified in (Ibrahim et al., 11 Feb 2025).
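
One way to make these axes concrete is a configuration object; the dataclass below is a hypothetical sketch whose field names are illustrative and not drawn from any specific released codebase.

```python
from dataclasses import dataclass

@dataclass
class BiMambaConfig:
    """Hypothetical configuration illustrating the open design axes listed above."""
    depth: int = 24                    # number of stacked bidirectional blocks
    d_model: int = 768                 # token feature dimension D
    d_state: int = 16                  # SSM state dimension N
    share_ssm_params: bool = False     # forward/backward SSMs share weights or stay independent
    per_direction_conv: bool = True    # separate Conv1d kernel per scan direction
    fusion: str = "add_gated"          # "add_gated" | "learnable_gate" | "cross_attn"
    hierarchical: bool = False         # interleave downsampling stages between block groups
    hybrid_attention_every: int = 0    # 0 = pure SSM; n > 0 = insert an attention block every n layers
```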

6. Scope and Limitations

The architectural description here, drawn from (Ibrahim et al., 11 Feb 2025), covers in precise detail the structure and algorithmic flow of the “Vim Block Process 2” bidirectional SSM block: specifically, two direction-aware SSM pathways operating in parallel, gated and fused via elementwise operations and projected back to the input feature dimension. Other potential enhancements such as explicit cross-scan modules, advanced positional encoding schemes, hierarchical multi-stage designs, and full complexity or memory analysis are not detailed in the survey’s excerpt.

7. Summary Table: Bidirectional Mamba Block Workflow

| Step | Operation / Equation | Output Shape |
|------|----------------------|--------------|
| Input | $T_{l-1}$ | $B \times M \times D$ |
| LayerNorm | $T'_{l-1} = \mathrm{Norm}(T_{l-1})$ | $B \times M \times D$ |
| Projections | $x = \mathrm{Linear}^x(T'_{l-1})$, $z = \mathrm{Linear}^z(T'_{l-1})$ | $B \times M \times E$ |
| Conv1d + SiLU | $x_o' = \mathrm{SiLU}(\mathrm{Conv1d}_o(x))$ (fwd, bwd) | $B \times M \times E$ |
| SSM params | $B_o, C_o, \Delta_o$ via linear / softplus | $B \times M \times N$ ($B_o, C_o$); $B \times M \times E$ ($\Delta_o$) |
| SSM filtering | $y_o = \mathrm{SSM}(\bar{A}_o, \bar{B}_o, C_o)(x_o')$ | $B \times M \times E$ |
| Output gating | $y_o \odot \mathrm{SiLU}(z)$ | $B \times M \times E$ |
| Fuse and project | $T_l = \mathrm{Linear}^T(y_{\mathrm{fwd}} + y_{\mathrm{bwd}}) + T_{l-1}$ | $B \times M \times D$ |

This table distills the layerwise computation for reproducibility and precise architectural understanding, allowing direct implementation of bidirectional SSM-based backbones for scalable, context-rich modeling in vision and related domains (Ibrahim et al., 11 Feb 2025).
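
For a quick sanity check of the shapes in this table, the stand-alone trace below (with assumed toy dimensions and the SSM filter outputs stubbed by random tensors) reproduces the per-step output shapes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

B, M, D, E = 2, 196, 192, 384                                      # assumed toy dimensions
T = torch.randn(B, M, D)                                           # input          (B, M, D)
Tn = nn.LayerNorm(D)(T)                                            # LayerNorm      (B, M, D)
x, z = nn.Linear(D, E)(Tn), nn.Linear(D, E)(Tn)                    # projections    (B, M, E)
xo = F.silu(nn.Conv1d(E, E, 4, padding=3)(x.transpose(1, 2))[..., :M].transpose(1, 2))  # (B, M, E)
y_fwd, y_bwd = torch.randn_like(xo), torch.randn_like(xo)          # SSM outputs stubbed (B, M, E)
out = nn.Linear(E, D)(y_fwd * F.silu(z) + y_bwd * F.silu(z)) + T   # fuse + project (B, M, D)
print(out.shape)                                                   # torch.Size([2, 196, 192])
```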
