CCMamba: Unified Higher-Order Graph Learning

Updated 4 February 2026
  • The paper introduces CCMamba, a unified framework that linearizes combinatorial complex incidence relations for O(N) efficiency.
  • It replaces costly attention mechanisms with selective state-space models to capture directional and long-range dependencies.
  • Empirical results demonstrate improved accuracy and depth robustness on graph, hypergraph, and simplicial complex benchmarks.

Combinatorial Complex Mamba (CCMamba) is a unified neural framework for higher-order graph learning over combinatorial complexes. Leveraging a Mamba-based state-space paradigm rather than standard attention-based message passing, CCMamba achieves linear-time, rank-aware, directional, and long-range information propagation across graphs, hypergraphs, and simplicial and cellular complexes. This enables adaptive, expressive, and scalable modeling of topological structures, with empirical robustness to depth and strong performance on standard benchmarks (Chen et al., 28 Jan 2026).

1. Motivation and Distinctive Features

Most topological deep learning methods for complex-structured data rely on local, low-dimensional message passing and fuse rank-coupled signals via self-attention, resulting in quadratic complexity and limited scalability on higher-order topologies. They typically lack mechanisms for explicitly directional or global propagation, especially for long-range dependencies spanning multiple ranks.

CCMamba addresses these bottlenecks by:

  • Unifying the learning process across graphs, hypergraphs, simplicial, and cellular complexes using the combinatorial complex formalism.
  • Linearizing incidence relations of various ranks into structured sequences suitable for state-space sequential models.
  • Replacing expensive attention with selective state-space modules (SSMs, specifically the Mamba architecture) to reach $O(N)$ computational complexity.
  • Enabling directional, rank-aware, and global propagation in a single neural layer, bypassing iterative local hops.

This framework thereby extends the expressive and computational potential of topological deep learning beyond the limitations of message passing and self-attention mechanisms.

2. Combinatorial Complexes and Incidence Sequences

A combinatorial complex is defined as a triple $(\mathcal{S}, \mathcal{C}, \mathrm{rk})$, where $\mathcal{S}$ is a finite vertex set, $\mathcal{C} \subset 2^{\mathcal{S}}$ is a collection of cells, and $\mathrm{rk}: \mathcal{C} \to \mathbb{Z}_{\ge 0}$ is an order-preserving rank function such that $\sigma \subset \tau \implies \mathrm{rk}(\sigma) \le \mathrm{rk}(\tau)$. This formalism unifies graphs, hypergraphs, simplicial complexes, and cellular complexes.

For a $d$-dimensional complex, incidence between adjacent ranks is encoded via sparse incidence matrices $B_k$:

$$[B_k]_{i,j} = \begin{cases} 1 & \text{if } (k-1)\text{-cell } i \subset k\text{-cell } j \\ 0 & \text{otherwise} \end{cases}$$

The rank-aware neighborhood of a $k$-cell $\sigma$, $\mathcal{N}_{k \to r}(\sigma)$, comprises all $(k-1)$-cells and $(k+1)$-cells incident to $\sigma$. For each rank $k$, an incidence-linearized sequence is constructed by stacking the features of the $k$-cells with the incidence-transformed features from adjacent ranks:

$$x_k^{(\ell)} = \big[\, h_k^{(\ell)},\; B_k^\top h_{k-1}^{(\ell)},\; B_{k+1} h_{k+1}^{(\ell)} \,\big] \in \mathbb{R}^{n_k \times D}$$

where $n_k = |\mathcal{C}_k|$ is the number of $k$-cells and $D$ is the projected sequence feature dimension.
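
To make the construction concrete, the following sketch builds the incidence matrices and the rank-1 linearized sequence for a filled triangle (three vertices, three edges, one 2-cell). The feature width is a made-up placeholder, and the projection to dimension $D$ is omitted:

import numpy as np

# Filled triangle as a rank-2 combinatorial complex:
#   0-cells (vertices): {0}, {1}, {2}
#   1-cells (edges):    {0,1}, {0,2}, {1,2}
#   2-cells (faces):    {0,1,2}

# B_1[i, j] = 1 iff vertex i lies in edge j   (shape n_0 x n_1)
B1 = np.array([[1, 1, 0],
               [1, 0, 1],
               [0, 1, 1]], dtype=float)

# B_2[i, j] = 1 iff edge i lies in face j     (shape n_1 x n_2)
B2 = np.array([[1],
               [1],
               [1]], dtype=float)

d = 4  # placeholder per-cell feature width
rng = np.random.default_rng(0)
h0, h1, h2 = rng.normal(size=(3, d)), rng.normal(size=(3, d)), rng.normal(size=(1, d))

# Incidence-linearized sequence for rank k = 1: stack edge features with
# incidence-transformed vertex (from below) and face (from above) features.
x1 = np.concatenate([h1, B1.T @ h0, B2 @ h2], axis=-1)
print(x1.shape)  # (3, 3*d) = (3, 12)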

3. Selective State-Space Modeling and Layer Design

CCMamba recasts message passing as a state-space sequence modeling problem. The core selective SSM module is parameterized at each layer $\ell$ and for each rank $k$:

$$\begin{aligned}
A_k &= -\exp\big(W_{A,k}\, h_k^{(\ell)} + b_{A,k}\big) \\
B_k &= W_{B,k}\, h_k^{(\ell)} \\
C_k &= W_{C,k}\, h_k^{(\ell)} \\
\Delta_k &= \mathrm{Softplus}\big(W_{\Delta,k}\, h_k^{(\ell)} + b_{\Delta,k}\big)
\end{aligned}$$

The negative exponential on $A_k$ ensures state-space stability; $\Delta_k$ is a softplus-learned discretization parameter. (The symbols $A_k$, $B_k$, $C_k$ follow standard SSM notation; this $B_k$ is distinct from the incidence matrix $B_k$ of Section 2.)

For each rank, the SSM operates over the length-$n_k$ sequence $x_k^{(\ell)}$ to produce outputs $y_k$:

$$y_k = \mathrm{SSM}_{\text{selective}}\big(A_k, B_k, C_k, \Delta_k, x_k^{(\ell)}\big)$$

A learned gate $z_k$ modulates the output:

$$z_k = \sigma\big(W_{z,k}\, h_k^{(\ell)}\big), \qquad h_{\text{intra},k}^{(\ell)} = W_{\text{out},k}\,\big(z_k \odot y_k\big)$$

Global context and directional signaling are enabled because the entire linearized incidence sequence is processed by the SSM, so each cell's output can depend on all positions in the sequence, capturing dependencies over arbitrary distances. The SSM's scan handles rank-wise propagation in one step, circumventing hop-by-hop neighborhood aggregation.

Inter-rank fusion is performed by aggregating intra-rank outputs from neighboring ranks via an injective MLP and a fusion operator:

$$h_k^{(\ell+1)} = \phi_1\Big(h_k^{(\ell)},\; \mathrm{MLP}\Big(\sum_{r \in \mathcal{R}(\sigma)} h_{\text{intra},r}^{(\ell)}\Big)\Big)$$

where $\mathcal{R}(\sigma)$ is the set of ranks neighboring cell $\sigma$.
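
A minimal PyTorch sketch of this fusion step follows. The two-layer MLPs for both the injective aggregator and $\phi_1$ are assumptions (the paper leaves their architectures open), and the neighboring-rank outputs are assumed to have already been mapped onto the $k$-cells (e.g., via the incidence matrices):

import torch
import torch.nn as nn

class InterRankFusion(nn.Module):
    # Fuses intra-rank SSM outputs from the ranks neighboring k into
    # the next-layer features of the k-cells.
    def __init__(self, dim: int):
        super().__init__()
        # Injective-style aggregator over the summed neighbor outputs
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # phi_1 combines current features with the aggregated message
        self.phi1 = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h_k, h_intra_neighbors):
        # h_k: (n_k, dim); h_intra_neighbors: list of (n_k, dim) tensors,
        # one per neighboring rank r in R(sigma).
        agg = self.mlp(torch.stack(h_intra_neighbors).sum(dim=0))
        return self.phi1(torch.cat([h_k, agg], dim=-1))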

A high-level single-layer pseudocode for rank k (here for a rank-2 complex, so k ∈ {0, 1, 2}) is:

# Incidence-linearized sequence for rank k
x_k = concat(h_k,
             B_k^T h_{k-1} if k > 0 else [],
             B_{k+1} h_{k+1} if k < 2 else [])
# Input-dependent (selective) SSM parameters
A_k = -exp(W_Ak h_k + b_Ak)        # negative exponential keeps the state stable
B_k = W_Bk h_k
C_k = W_Ck h_k
Δ_k = Softplus(W_Δk h_k + b_Δk)    # learned discretization step
# One linear-time selective scan over the length-n_k sequence
y_k = SSM_selective(A_k, B_k, C_k, Δ_k, x_k)
# Gated output projection
z_k = sigmoid(W_zk h_k)
h_intra_k = W_out_k(z_k * y_k)
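
For concreteness, here is a minimal NumPy reference for the selective scan, processing one feature channel with a diagonal state. This is a sequential sketch under assumed shapes, not the parallel kernel a production Mamba implementation would use:

import numpy as np

def ssm_selective(A, B, C, Delta, x):
    # A:     (L, d_state)  per-position negative state matrices (diagonal)
    # B:     (L, d_state)  input-dependent input projections
    # C:     (L, d_state)  input-dependent output projections
    # Delta: (L,)          positive discretization steps
    # x:     (L,)          one channel of the linearized incidence sequence
    # Returns y: (L,), computed in O(L) time by a single scan.
    L, d_state = A.shape
    s = np.zeros(d_state)
    y = np.empty(L)
    for t in range(L):
        A_bar = np.exp(Delta[t] * A[t])   # ZOH-style discretization; A < 0 keeps |A_bar| < 1
        B_bar = Delta[t] * B[t]           # first-order discretization of B
        s = A_bar * s + B_bar * x[t]      # diagonal state update
        y[t] = C[t] @ s                   # per-position readout
    return y

# Tiny usage example with random inputs
rng = np.random.default_rng(0)
L, d_state = 6, 4
y = ssm_selective(-np.exp(rng.normal(size=(L, d_state))),  # negative, as in A_k = -exp(...)
                  rng.normal(size=(L, d_state)),
                  rng.normal(size=(L, d_state)),
                  np.exp(rng.normal(size=L)) * 0.1,        # positive steps, like Softplus outputs
                  rng.normal(size=L))
print(y.shape)  # (6,)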

4. Expressive Power and Theoretical Guarantees

CCMamba’s expressive capacity is characterized by the 1-dimensional Combinatorial Complex Weisfeiler-Lehman (1-CCWL) test. Theorem 4.3 establishes that, under injectivity assumptions on the inter-rank fusion function $\phi_1$, the intra-rank MLPs, and the READOUT, CCMamba's distinguishing power exactly matches that of 1-CCWL. In particular, each layer implements a permutation-invariant mapping over multisets of incident features and, with suitable parameterization, can simulate each recoloring step of the test.
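
Schematically, the recoloring step that a CCMamba layer can simulate has the usual Weisfeiler-Lehman form (notation adapted here; the precise neighborhood decomposition used by 1-CCWL is an assumption):

$$c^{(t+1)}(\sigma) = \mathrm{HASH}\Big(c^{(t)}(\sigma),\; \big\{\!\!\big\{\, c^{(t)}(\tau) : \tau \in \mathcal{N}_{k \to r}(\sigma),\ r \in \mathcal{R}(\sigma) \,\big\}\!\!\big\}\Big)$$

with the sum-then-MLP aggregation of Section 3 acting as an injective hash over the multiset.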

This ensures that CCMamba is as expressive as the strongest message-passing GNNs with respect to distinguishing non-isomorphic combinatorial complexes at the 1-WL level (Chen et al., 28 Jan 2026).

5. Computational Complexity and Scalability

A primary benefit of selective state-space modeling is computational efficiency. For each rank $k$, the SSM operates in $O(n_k)$ time, where $n_k$ is the number of cells of rank $k$, plus $O(\|B_k\|_0)$ for the sparse incidence multiplications. The total runtime is $O(\sum_k n_k + \mathrm{nnz}(B)) = O(N)$, where $N$ is the total number of cells.
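
As a back-of-envelope illustration (hypothetical numbers, not from the paper): a complex with $n_0 = 10^5$ vertices, $n_1 = 3 \times 10^5$ edges, and $n_2 = 2 \times 10^5$ triangular faces has $N = 6 \times 10^5$ cells and $\mathrm{nnz}(B) = 2 n_1 + 3 n_2 = 1.2 \times 10^6$ incidences, so one CCMamba layer touches roughly

$$N + \mathrm{nnz}(B) = 1.8 \times 10^6 \ \text{entries}, \quad \text{versus} \quad N^2 = 3.6 \times 10^{11}$$

pairwise scores for full attention over the same cells.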

By comparison, attention-based fusion, as in MultiGAT, incurs a quadratic $O(N^2)$ cost per neighborhood fusion. Empirically, on the PROTEINS-simplex dataset, MultiGAT requires approximately 1266 s and 23.5 GB, while CCMamba completes in approximately 121 s using 3.3 GB.

Memory usage is $O(N + N_{\text{state}} \times d)$ for intermediate cell and state representations, where $d$ is the SSM state dimension.

6. Experimental Evaluation

CCMamba's performance has been evaluated across a range of classification benchmarks:

  • Datasets: Graph classification (MUTAG, PROTEINS, IMDB-B, IMDB-M, AMAZON-R, ROMAN-E, MINESWEEPER) and node classification (Cora, CiteSeer, PubMed), all lifted to higher-order complexes via TopoBench pipelines.
  • Results: On graph tasks, CCMamba (rank-cell) achieves an average accuracy of 76.41% versus 74.18% for the best baseline. On node tasks, it reaches 89.22% on Cora, 76.95% on CiteSeer, and 89.51% on PubMed. Graph-level results include MUTAG 85.11%, PROTEINS 78.14%, IMDB-B 74.81%, and IMDB-M 50.13%.
  • Ablations: Modeling the full complex (as opposed to only graphs or hypergraphs) yields consistent gains. CCMamba outperforms GAT by 3–6 points across node and graph tasks for all topological structures.
  • Depth Robustness: CCMamba maintains over 87% accuracy on Cora up to 32 layers, whereas traditional methods like GCN or HyperGCN experience collapse after 8 layers.
  • Sensitivity to Multi-Heads: Two to four heads suffice for optimal performance; further increases yield little improvement.

Simplified comparison:

Model                         Average Accuracy (%)   Training Time (s)   Memory (GB)
MultiGAT (attention-based)    74.18                  ~1266               23.5
CCMamba (rank-cell)           76.41                  ~121                3.3

7. Implementation Considerations, Limitations, and Future Directions

Several practical aspects are noteworthy:

  • Implementation: Sparse incidence matrices $B_k$ should be precomputed in CSR/COO format for efficient multiplications (a sketch follows this list). Diagonal or low-rank $A_k$ matrices are preferred for SSM state stability. Batching layers can amortize Mamba state-initialization costs.
  • Limitations: The framework targets moderate cell dimensions ($d \le 3$); scalability to higher dimensions may be constrained by memory and compute due to the growing number of SSMs. CCMamba assumes fixed complex topology during training; handling dynamic or time-varying complexes requires incremental incidence updates. The injectivity assumptions in the theory may necessitate wide MLPs for certain applications.
  • Future Work: Directions include integrating geometric priors (curvature, persistent cohomology) into the sequence encoding, enforcing E(n)-equivariance by parameter tying to geometric coordinates, handling continuous-time or dynamic complexes via continuous SSMs, and developing hardware-accelerated inference (e.g., GPU/CUDA kernels for SSM).
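
A minimal sketch of the precomputation step mentioned above, using SciPy and the triangle complex from Section 2; storing $B_k$ in CSR makes the $B_k^\top h_{k-1}$ and $B_{k+1} h_{k+1}$ products in the sequence construction cost $O(\mathrm{nnz}(B_k))$:

import numpy as np
from scipy.sparse import csr_matrix

# Incidence matrices for the filled triangle, stored sparsely (CSR).
# Rows: (k-1)-cells; columns: k-cells; nonzero entries mark incidence.
B1 = csr_matrix(np.array([[1, 1, 0],
                          [1, 0, 1],
                          [0, 1, 1]], dtype=np.float32))
B2 = csr_matrix(np.array([[1], [1], [1]], dtype=np.float32))

rng = np.random.default_rng(0)
h0 = rng.normal(size=(3, 8)).astype(np.float32)  # vertex features
h2 = rng.normal(size=(1, 8)).astype(np.float32)  # face features

# Sparse incidence products cost O(nnz) rather than O(n_{k-1} * n_k).
up = B1.T @ h0     # vertex features lifted to edges, shape (3, 8)
down = B2 @ h2     # face features pushed down to edges, shape (3, 8)
print(up.shape, down.shape)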

In summary, CCMamba recasts higher-order message passing on combinatorial complexes as selective state-space modeling, realizing linear-time, depth-robust, rank-aware, and long-range learning with expressive power matching the 1-CCWL test and empirically strong benchmark performance (Chen et al., 28 Jan 2026).
