
AnchorFormer: Efficient Neural Architectures

Updated 5 December 2025
  • AnchorFormer is a family of efficient neural architectures that utilize a small set of anchor tokens or nodes to aggregate and propagate global information.
  • It leverages domain-specific anchor selection methods, such as Louvain clustering and greedy top-k procedures, to optimize attention in graphs, vision, and multimodal tasks.
  • The use of bipartite and Markov attention mechanisms significantly cuts computational costs while enhancing robustness and performance across different applications.

AnchorFormer designates a family of efficient neural architectures that leverage anchor-based attention mechanisms across several domains: graph representation learning, vision transformers, multimodal LLMs, and feature matching. The fundamental principle is to introduce a small set of anchor tokens or nodes that act as information bottlenecks, enabling long-range dependency modeling with reduced computational complexity and, in many cases, enhanced robustness to noise or outliers. Implementations include AGFormer for graphs (Jiang et al., 2023), AMatFormer for feature matching (Jiang et al., 2023), AnchorFormer for vision transformers (Shan et al., 22 May 2025), and AcFormer for multimodal connectors (Liu et al., 28 May 2024).

1. Anchor Selection and Formalization

Anchor selection is central to all AnchorFormer variants and differs according to the domain. In graph transformers (AGFormer), anchor nodes are obtained via graph clustering; Louvain clustering partitions the graph, with cluster centers serving as anchors. The assignment matrix $S \in \{0,1\}^{M \times N}$ (where $M \ll N$) maps nodes to anchors, and anchor features are aggregated by pooling:

$$P = D^{-1} S H$$

where $D = \text{diag}(d_1, \dots, d_M)$, $d_c = \sum_{j=1}^N S_{c,j}$, and $H \in \mathbb{R}^{N \times d'}$ are the node features (Jiang et al., 2023).
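To make the pooling step concrete, the sketch below builds a hard assignment matrix from cluster labels and computes $P = D^{-1} S H$ in NumPy. The random labels stand in for an actual Louvain partition, and the sizes are illustrative assumptions rather than values from the paper.

```python
# Minimal NumPy sketch of the anchor pooling step P = D^{-1} S H.
# Random cluster labels stand in for a real Louvain partition; sizes are assumed.
import numpy as np

N, M, d = 1000, 32, 64                      # nodes, anchors (M << N), feature width
labels = np.random.randint(0, M, size=N)    # placeholder community assignment per node
H = np.random.randn(N, d)                   # node features

S = np.zeros((M, N))                        # hard assignment: S[c, j] = 1 iff node j is in cluster c
S[labels, np.arange(N)] = 1.0

d_c = np.maximum(S.sum(axis=1), 1.0)        # cluster sizes; guard against empty clusters in this toy setup
P = (S @ H) / d_c[:, None]                  # P = D^{-1} S H: mean of member-node features per anchor

print(P.shape)                              # (32, 64): one pooled feature vector per anchor
```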

For vision transformers, anchor vectors are parameterized as rows $w^S_j$ of a learnable weight matrix $W^S \in \mathbb{R}^{m \times d}$, acting as anchor tokens updated via backpropagation. In multimodal language connectors (AcFormer), anchors are defined as a subset of visual tokens, selected by greedy head-wise top-$k$ procedures based on the [CLS] attention map as high-attention, high-variance tokens (Liu et al., 28 May 2024). For feature matching (AMatFormer), top-$k$ anchor pairs are extracted via initial nearest-neighbor matching and ratio tests between intra-image descriptors (Jiang et al., 2023).
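The head-wise selection used for visual anchors can be sketched as follows. The attention-tensor shape, the anchor budget, and the greedy union policy are assumptions chosen for illustration and may differ from the exact AcFormer procedure.

```python
# Hedged sketch of greedy head-wise top-k selection of visual anchor tokens
# from a ViT [CLS] attention map. Shapes, budget, and union policy are
# illustrative assumptions, not the exact AcFormer procedure.
import torch

def select_anchor_tokens(cls_attn: torch.Tensor, num_anchors: int) -> torch.Tensor:
    """cls_attn: (heads, num_patches) attention from [CLS] to patch tokens."""
    heads, num_patches = cls_attn.shape
    per_head_order = cls_attn.argsort(dim=-1, descending=True)  # rank patches per head
    selected: list[int] = []
    # Sweep ranks, taking each head's next-strongest patch in turn and skipping
    # duplicates, until the anchor budget is filled.
    for rank in range(num_patches):
        for h in range(heads):
            idx = int(per_head_order[h, rank])
            if idx not in selected:
                selected.append(idx)
            if len(selected) == num_anchors:
                return torch.tensor(selected)
    return torch.tensor(selected)

# Toy usage: 12 heads, 576 patch tokens, keep 144 anchor tokens.
anchors = select_anchor_tokens(torch.rand(12, 576), num_anchors=144)
print(anchors.shape)  # torch.Size([144])
```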

2. Anchor-Based Attention Mechanisms

AnchorFormer architectures replace full self-attention, which has $O(N^2)$ complexity, with bipartite or bottlenecked attention via anchors.

  • Anchor-to-Anchor Self-Attention (AASA): Anchors undergo multi-head self-attention among themselves, capturing global information efficiently. For AGFormer:

$$\text{Attn}_{\rm anchor}(P) = \text{softmax}\left( Q^p (K^p)^T / \sqrt{d'} \right) V^p$$

with $Q^p$, $K^p$, $V^p$ as linear projections of $P$.

  • Anchor-to-Node/Feature Cross-Attention (ANCA/APAttn): The refined anchor representations communicate back to all original nodes/tokens:

$$\text{Attn}_{\rm cross}(H, \tilde{P}) = \text{softmax}\left( Q^h (K^{\tilde{p}})^T / \sqrt{d'} \right) V^{\tilde{p}}$$

Similar mechanisms exist in AMatFormer and AcFormer, updating primal features with anchor information (Jiang et al., 2023, Liu et al., 28 May 2024).

  • Anchor-based Markov Walk (Vision Transformers): AnchorFormer (Shan et al., 22 May 2025) introduces a differentiable bipartite attention:

$$A_{i,j} = \text{softmax}_j\left( w^S_j k_i^T / \sqrt{d} \right)$$

and recovers an approximate global self-attention via a two-step Markov transition:

$$S_t = A \Delta^{-1} A^T$$

where $A \in \mathbb{R}^{n \times m}$ connects tokens and anchors, and $\Delta$ is a diagonal matrix containing the column sums of $A$.
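A minimal single-head sketch of this bipartite attention is given below, treating $W^S$ as an ordinary learnable parameter and using assumed toy dimensions. The key point is that $S_t$ is never materialized: the output is computed as $A\,(\Delta^{-1}(A^T V))$ in $O(nmd)$.

```python
# Minimal single-head sketch of anchor-based bipartite (Markov) attention.
# w_s plays the role of the learnable anchor matrix W^S; all sizes are assumed.
import torch

def anchor_markov_attention(k: torch.Tensor, v: torch.Tensor, w_s: torch.Tensor) -> torch.Tensor:
    """k, v: (n, d) keys/values; w_s: (m, d) anchors. Returns an (n, d) output."""
    n, d = k.shape
    # A[i, j] = softmax_j(w_s[j] . k[i] / sqrt(d)): an (n, m) row-stochastic matrix.
    A = torch.softmax(k @ w_s.T / d ** 0.5, dim=-1)
    delta = A.sum(dim=0) + 1e-9              # column sums of A (diagonal of Delta)
    # S_t = A Delta^{-1} A^T is applied to V without ever forming the n x n matrix:
    return A @ ((A.T @ v) / delta[:, None])  # A (Delta^{-1} (A^T V)), cost O(n m d)

n, m, d = 196, 16, 64                        # tokens, anchors, head width (assumed)
k, v, w_s = torch.randn(n, d), torch.randn(n, d), torch.randn(m, d)
print(anchor_markov_attention(k, v, w_s).shape)  # torch.Size([196, 64])
```

Because the $n \times n$ attention matrix is only recovered implicitly, memory stays linear in the number of tokens, which is the source of the savings discussed in the next section.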

3. Computational Complexity and Scaling

AnchorFormer architectures reduce the cost of global attention mechanisms by constraining message passing to anchor sets.

  • Graphs: AGFormer’s time and memory cost becomes $O(M^2 d' + N M d')$ (with $M \ll N$), yielding empirical 2× speedups over full GraphTrans on $N = 1000$ graphs (Jiang et al., 2023).
  • Vision Transformers: Full $O(n^2 d)$ attention is replaced with $O(n m d)$ anchor attention. AnchorFormer achieves a 41–47% FLOPs reduction and up to 9% accuracy improvement over DeiT baselines at comparable depth/width (Shan et al., 22 May 2025).
  • Feature Matching: AMatFormer’s attention is $O(n k c)$, compared with $O(n^2 c)$ in SuperGlue. For $n = 1000$ and $k = 128$, this yields 4.75G FLOPs vs. SuperGlue’s 24.5G (Jiang et al., 2023).
  • Multimodal Connector (AcFormer): The visual token length is reduced from $N = 577$ to $T_N = 145$, yielding a measured 2.2× speedup in wall-clock throughput on LLaVA-style MLLMs (Liu et al., 28 May 2024).
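As a back-of-the-envelope check of these scaling expressions, the snippet below compares the attention-term costs $O(n^2 d)$ and $O(n m d)$ for the token and anchor counts quoted above. The anchor count for the graph setting and all feature widths are assumptions, and the ratios cover the attention term only, not the papers' full-model FLOP counts or wall-clock speedups.

```python
# Rough comparison of attention-term costs: dense O(n^2 d) vs. anchor O(n m d).
# Feature widths (and the graph anchor count) are assumed; counts exclude
# projections and MLPs, so they are not the papers' full-model FLOP figures.
def attn_cost(n: int, m: int, d: int) -> tuple[int, int]:
    full = n * n * d       # dense token-token attention term
    anchored = n * m * d   # token-anchor bipartite attention term
    return full, anchored

settings = [
    ("graph (N=1000, M=32 anchors assumed, d'=64 assumed)", 1000, 32, 64),
    ("feature matching (n=1000, k=128, c=256 assumed)", 1000, 128, 256),
    ("MLLM connector (N=577, T_N=145, d=1024 assumed)", 577, 145, 1024),
]
for name, n, m, d in settings:
    full, anchored = attn_cost(n, m, d)
    print(f"{name}: {full:.2e} vs {anchored:.2e} ops, {full / anchored:.1f}x fewer")
```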

4. Robustness, Consensus, and Information Flow

Anchor-based models exhibit increased robustness to noise and facilitate consensus across modalities:

  • Graph Noise Filtering: AGFormer’s anchor pooling corresponds to a low-rank approximation, filtering high-frequency edge noise. Empirically, AGFormer accuracy degrades only 5% under 20% edge-flip noise, while GraphTrans sees a 12% drop (Jiang et al., 2023).
  • Consensus Feature Matching: AMatFormer’s shared FFN ensures that features from both images are mapped into a shared domain, yielding stable metric learning (Jiang et al., 2023). Removing the shared FFN degrades performance; learned bilinear metrics outperform cosine similarity in ablation studies.
  • Visual Anchors as Aggregators: AcFormer exploits the concentration of information flow in select ViT tokens, improving VQA and multimodal accuracy at a substantially reduced token count (Liu et al., 28 May 2024).

5. Architectural Variants and Workflow Summaries

The core AnchorFormer workflow varies with application:

| Variant | Domain | Anchor Selection | Attention Mechanism |
| --- | --- | --- | --- |
| AGFormer | Graph representation | Louvain clustering (community) | AASA + ANCA (anchors ↔ nodes) |
| AMatFormer | Feature matching | Top-$k$ seed match (NN + ratio) | Anchor self-/cross-attention |
| AnchorFormer | Vision Transformer | Learned anchor neurons | Bipartite (Markov) attention |
| AcFormer | MLLM connector | High-attention ViT tokens | Selective transformer module |

This organization reflects progression from data-driven anchor selection (clustering/matching), through architectural bottleneck design (self-/cross-attention via anchors), to end-to-end learning via standard optimization.

6. Empirical Performance and Ablation Studies

Empirical results consistently demonstrate high efficiency and competitive or improved accuracy.

  • AGFormer: Top-3 classification accuracy on five graph benchmarks; Louvain anchor selection remains crucial, as random anchors reduce accuracy by 1–4 points; time efficiency is doubled on large synthetic graphs (Jiang et al., 2023).
  • AMatFormer: Outperforms SGMNet and matches SuperGlue on Scannet, FM-Bench, YFCC100M; the optimal anchor count is $k = 128$, with ablations confirming the benefits of anchors, the shared FFN, and cross-anchor attention (Jiang et al., 2023).
  • AnchorFormer (ViT): On ImageNet, achieves 3.8–9.0% higher top-1 accuracy versus DeiT and 40–46% FLOPs savings over BiFormer, CastlingViT; ablations show differentiable anchors outperform non-differentiable and vanilla variants (Shan et al., 22 May 2025).
  • AcFormer: Matches or exceeds LLaVA baselines on VQA, GQA, POPE, MMbench, with a 1.6–2.3× speedup; ablations reveal pooling/Perceiver-based resampling is consistently worse than anchor selection (Liu et al., 28 May 2024).

7. Limitations, Scalability, and Outlook

Current limitations include sensitivity to anchor selection heuristics and a need for further theoretical analysis:

  • Image Matching: AMatFormer performance may degrade under low image overlap or severe illumination changes, due to poor initial anchor matches (Jiang et al., 2023).
  • Visual Anchors: AcFormer’s extraction relies on [CLS] attention; the theoretical grounding for anchor emergence in ViTs is not fully resolved (Liu et al., 28 May 2024).
  • Scaling Behavior: The anchor number $m$ must be carefully tuned: too few anchors shrink the receptive field, while too many raise FLOPs and risk redundancy (Shan et al., 22 May 2025).
  • Future Directions: Proposals include introducing “positive-incentive noise” theory for anchor attention approximation quality and expanding anchor-based architectures to higher-resolution or multi-scale contexts (Shan et al., 22 May 2025).

A plausible implication is that anchor-based models offer a principled framework for efficient information aggregation in deep learning, especially as data and model sizes scale, given their empirical robustness and computational benefits across modalities.
