Attention-Induced Curvature
- Attention-induced curvature is a concept where Transformer attention heads learn and adjust latent space geometry by estimating curvature parameters.
- It employs stereographic projections, Möbius addition, and parallel transport to map embeddings into constant-curvature spaces, optimizing representations for hierarchical graph data.
- The FPS-T model utilizes a kernelized linear-time mechanism for non-Euclidean attention, achieving superior graph reconstruction and node classification with enhanced parameter efficiency.
Attention-induced curvature describes the phenomenon whereby the geometry of the latent representation space—specifically its curvature—is directly controlled and learned through attention mechanisms within neural architectures. This concept is central to the Fully Product-Stereographic Transformer (FPS-T), which operates all Transformer layers over a product of constant-curvature spaces. In this context, each attention head dynamically selects the appropriate geometry (spherical, Euclidean, or hyperbolic) by learning a curvature parameter κₕ during training. This enables the model to adapt its geometric inductive bias to better encode hierarchical or cyclical structures in graph data, leading to more compact and effective representations, especially as evidenced in tasks such as graph reconstruction and node classification.
1. Geometric Foundations: Product of Constant-Curvature Spaces
FPS-T generalizes the standard Transformer architecture by embedding query, key, and value representations not in the ordinary Euclidean vector space ℝᵈ, but in a product of constant-curvature spaces 𝔖ᵈ_{κₕ}. Each such space is defined by a curvature parameter κ and a stereographic chart (a code sketch of these primitives follows this list):
- The κ-stereographic model: 𝔖ᵈ_κ = {x ∈ ℝᵈ : −κ‖x‖² < 1}, which recovers the Poincaré ball for κ < 0, Euclidean space for κ = 0, and the stereographic projection of the sphere for κ > 0.
- The conformal metric tensor: g_x^κ = (λ_x^κ)² I, with conformal factor λ_x^κ = 2 / (1 + κ‖x‖²).
- Möbius addition, the gyrogroup operation (generalized vector addition) on these manifolds: x ⊕_κ y = [(1 − 2κ⟨x, y⟩ − κ‖y‖²) x + (1 + κ‖x‖²) y] / (1 − 2κ⟨x, y⟩ + κ² ‖x‖² ‖y‖²).
The full representation space is a product manifold 𝔖^{d₁}_{κ₁} × ⋯ × 𝔖^{d_H}_{κ_H}, providing an independent curvature parameter for each of the H attention heads. The geodesic distance in this product manifold is d(x, y) = √(Σₕ d_{κₕ}(xₕ, yₕ)²), where d_κ(x, y) = 2 tan_κ⁻¹(‖(−x) ⊕_κ y‖), generalizing the Euclidean distance (recovered, up to a constant factor, as κ → 0).
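To make these operations concrete, below is a minimal PyTorch sketch of the κ-stereographic primitives, following the standard κ-stereographic formulation; function names are illustrative and do not correspond to FPS-T's actual code or the Geoopt API.

```python
import torch

def mobius_add(x, y, kappa):
    # Möbius addition x ⊕_κ y on the κ-stereographic model (acts on the last dim).
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 - 2 * kappa * xy - kappa * y2) * x + (1 + kappa * x2) * y
    den = 1 - 2 * kappa * xy + kappa ** 2 * x2 * y2
    return num / den.clamp_min(1e-15)

def arctan_kappa(u, kappa):
    # Curvature-aware inverse tangent: artanh for κ < 0, identity for κ = 0, arctan for κ > 0.
    if kappa < 0:
        sk = abs(kappa) ** 0.5
        return torch.atanh((sk * u).clamp(-1 + 1e-7, 1 - 1e-7)) / sk
    if kappa > 0:
        sk = kappa ** 0.5
        return torch.atan(sk * u) / sk
    return u

def dist_kappa(x, y, kappa):
    # Geodesic distance d_κ(x, y) = 2 · tan_κ⁻¹(‖(−x) ⊕_κ y‖);
    # as κ → 0 this reduces to 2‖x − y‖, i.e. Euclidean distance up to scale.
    diff = mobius_add(-x, y, kappa)
    return 2 * arctan_kappa(diff.norm(dim=-1), kappa)

# Example: the same pair of points under hyperbolic, Euclidean, and spherical geometry.
x, y = torch.tensor([0.1, 0.2]), torch.tensor([0.3, -0.1])
for k in (-1.0, 0.0, 1.0):
    print(k, dist_kappa(x, y, k).item())
```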
2. Attention on Tangent Spaces and Non-Euclidean Aggregation
Conventional non-Euclidean graph neural networks have often parameterized attention through explicit pairwise geodesic distances (e.g., attention weights derived from d_κ(xᵢ, xⱼ)), but FPS-T instead generalizes scaled dot-product attention by explicitly mapping embeddings into tangent spaces (a simplified code sketch follows below):
- Queries qᵢ and keys kⱼ are computed by stereographic linear layers as elements of the tangent spaces at the corresponding value points, i.e. qᵢ ∈ T_{vᵢ}𝔖ᵈ_{κₕ}.
- Parallel transport carries all tangent vectors to a common basepoint, the origin o, so that they live in a single tangent space: qᵢ ↦ PT_{vᵢ→o}(qᵢ).
- At the origin the metric is conformally Euclidean (g_o^κ = (λ_o^κ)² I), so attention scores can be computed with standard scaled inner products of the transported queries and keys.
- The aggregation is realized via the Einstein midpoint on 𝔖ᵈ_{κₕ}, using Möbius operations and the conformal scaling factors λ^κ.
This framework preserves the Transformer’s global and flexible attention capabilities while enabling each head to interpret geometric structure in a manner best suited to the observed graph.
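The simplified sketch below illustrates the overall pattern of attending in a tangent space at a common basepoint. It is not FPS-T's implementation: the curvature is fixed rather than learned, parallel transport from each value point is omitted, and aggregation is a plain weighted mean in the origin's tangent space rather than the Einstein midpoint; all class and function names are hypothetical.

```python
import torch
import torch.nn as nn

def exp0(v, kappa):
    # Exponential map at the origin of the κ-stereographic model.
    norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    if kappa < 0:
        sk = abs(kappa) ** 0.5
        return torch.tanh(sk * norm) * v / (sk * norm)
    if kappa > 0:
        sk = kappa ** 0.5
        return torch.tan(sk * norm) * v / (sk * norm)
    return v

def log0(x, kappa):
    # Logarithmic map at the origin (inverse of exp0).
    norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    if kappa < 0:
        sk = abs(kappa) ** 0.5
        return torch.atanh((sk * norm).clamp_max(1 - 1e-7)) * x / (sk * norm)
    if kappa > 0:
        sk = kappa ** 0.5
        return torch.atan(sk * norm) * x / (sk * norm)
    return x

class TangentSpaceAttention(nn.Module):
    # Single-head sketch: project manifold tokens to the tangent space at the origin,
    # run standard scaled dot-product attention there, then map the result back.
    def __init__(self, dim, kappa=0.0):
        super().__init__()
        self.kappa = kappa            # fixed here; learned per head in FPS-T
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):             # x: (n_tokens, dim), points on the manifold
        t = log0(x, self.kappa)       # tangent vectors at the common basepoint
        q, k, v = self.q(t), self.k(t), self.v(t)
        scores = torch.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)
        out = scores @ v              # weighted aggregation in the tangent space
        return exp0(out, self.kappa)  # back onto the κ-stereographic manifold

tokens = exp0(torch.randn(5, 16) * 0.1, -1.0)   # 5 tokens on a hyperbolic (κ = -1) chart
attn = TangentSpaceAttention(16, kappa=-1.0)
print(attn(tokens).shape)                       # torch.Size([5, 16])
```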
3. Kernelized Linear-Time Non-Euclidean Attention
Standard attention mechanisms have quadratic time and memory complexity due to pairwise calculations. FPS-T circumvents this via a kernelized, linear-attention trick:
- The tangent-space dot-product similarity is approximated by a positive kernel, sim(q, k) ≈ φ(q)ᵀ φ(k) for a positive feature map φ, so that pairwise attention weights never need to be materialized (illustrated in the sketch below the table).
- Additional scaling incorporates the conformal factors λ^κ from the stereographic model.
- The aggregation operation then factorizes the double sums into matrix-vector products, reducing complexity from 𝒪((n + m)²) to 𝒪(n + m) per head, where n and m are the numbers of nodes and edges in the graph (the TokenGT tokens).
The table below summarizes computational complexity per attention head:
| Attention Type | Complexity (per head) | Notes |
|---|---|---|
| Exact | 𝒪((n + m)²) | Full pairwise score calculations |
| Kernelized (linear) | 𝒪(n + m) | Via the kernel feature-map trick |
This approach allows FPS-T to scale to much larger graphs without sacrificing the geometric adaptivity conferred by attention-induced curvature.
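The sketch below illustrates the factorization that underlies linear attention, using an elu + 1 feature map as in standard linear Transformers; FPS-T's actual kernel and its conformal scaling are not reproduced here, and the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def phi(x):
    # A simple positive feature map (elu + 1), as in linear Transformers.
    # FPS-T's exact kernel choice may differ; this only shows the factorization.
    return F.elu(x) + 1.0

def quadratic_attention(q, k, v):
    # Exact attention: O(N^2) pairwise scores for N tokens.
    w = torch.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)
    return w @ v

def linear_attention(q, k, v):
    # Kernelized attention: sim(q, k) ≈ phi(q)·phi(k), so the double sum factorizes:
    #   out_i = phi(q_i) (Σ_j phi(k_j) v_j^T) / (phi(q_i) Σ_j phi(k_j))
    # Cost is linear in the number of tokens (nodes + edges) per head.
    qf, kf = phi(q), phi(k)
    kv = kf.t() @ v                              # (d, d) summary of keys and values
    z = qf @ kf.sum(0, keepdim=True).t()         # (N, 1) normalizer
    return (qf @ kv) / z.clamp_min(1e-9)

q = torch.randn(1000, 32); k = torch.randn(1000, 32); v = torch.randn(1000, 32)
print(linear_attention(q, k, v).shape)           # torch.Size([1000, 32])
```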
4. End-to-End Learning of Curvature
Each head's curvature κₕ is treated as a learnable parameter:
- Initial value κₕ = 0 (Euclidean).
- All geometric operations (stereographic exponential and logarithmic maps, Möbius addition ⊕_κ, the conformal factor λ^κ, the distance d_κ, and parallel transport) are smooth in κ and support backpropagation.
- Gradients are propagated through the attention and feedforward computations and jointly optimized with Adam, typically with a different learning rate for the curvature parameters than for the other weights.
As training proceeds, κₕ learns to select hyperbolic (κₕ < 0), Euclidean (κₕ = 0), or spherical (κₕ > 0) geometry as required by the graph structure. For example, on the Web-Edu dataset, which exhibits negative sectional curvature, κₕ shifted from 0 toward negative values over training, supporting accurate embedding of hierarchical relationships.
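A minimal sketch of how per-head curvature can be exposed as a learnable parameter with its own optimizer group; the module structure and the learning-rate values are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CurvatureHead(nn.Module):
    # One head's parameters: a learnable curvature κ_h initialized at 0 (Euclidean),
    # alongside ordinary projection weights. κ_h < 0 selects hyperbolic geometry,
    # κ_h > 0 spherical.
    def __init__(self, dim):
        super().__init__()
        self.kappa = nn.Parameter(torch.zeros(()))
        self.proj = nn.Linear(dim, dim)

heads = nn.ModuleList([CurvatureHead(16) for _ in range(4)])   # e.g. 4 attention heads
curvatures = [h.kappa for h in heads]
others = [p for name, p in heads.named_parameters() if not name.endswith("kappa")]

# Separate Adam parameter groups so curvature can use its own learning rate;
# the concrete rates below are placeholders, not the paper's settings.
optimizer = torch.optim.Adam([
    {"params": curvatures, "lr": 1e-2},
    {"params": others, "lr": 1e-3},
])
```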
A plausible implication is that models without curvature learning may be suboptimal when representing complex graph geometries, particularly in cases where oversmoothing and oversquashing are exacerbated by traditional message-passing designs.
5. Empirical Results and Parameter Efficiency
FPS-T demonstrates empirical advantages in both expressive power and efficiency:
- On graph reconstruction and node classification benchmarks, FPS-T with learned curvature consistently outperforms fixed-Euclidean baselines, especially notable in settings with strong non-Euclidean graph structure.
- In low-dimensional settings (feature dimension 4 vs. 16), FPS-T matches or exceeds the expressiveness of full-dimensional Euclidean Transformers while using only a fraction of the parameters, confirming that certain data manifolds benefit from intrinsic curvature.
- Across eight node classification benchmarks spanning a wide range of homophily levels, FPS-T achieves leading performance on 6/8 datasets, with the greatest improvements on heterophilic graphs, whose structure deviates strongly from plain Euclidean assumptions.
- On the Web-Edu dataset, model performance (mAP) increased in tandem with curvature adaptation, whereas fixed-curvature models could not match this progression.
These findings support the conclusion that attention-induced curvature materially sharpens attention patterns and enhances predictive accuracy, particularly in challenging graph settings.
6. Implementation, Tokenization, and Practical Considerations
FPS-T is implemented using PyTorch, PyG (PyTorch Geometric), and Geoopt, leveraging their support for manifold-valued tensors and geometric optimization:
- The graph is tokenized à la TokenGT into n + m tokens (nodes plus edges); positional encoding uses Laplacian eigenvectors, and two token-type embeddings distinguish node tokens from edge tokens (see the sketch after this list). Edge tokens carry only type and positional information.
- Typical model depth is 1–3 layers, 1–4 attention heads, embedding dimension of 16, and standard regularization (dropout, weight decay) as tuned per dataset.
- Kernelized attention yields linear complexity with respect to the number of nodes and edges; exact attention is retained for smaller or more tractable graphs.
- No manual search over curvature initializations is required—a significant practical advantage compared to previous non-Euclidean networks.
- Present limitations include cubic time complexity in the ambient dimension for certain geometric operations (parallel transport, logarithmic/exponential maps), as well as numerical instability in extreme curvature regimes. Future work may pursue heterogeneous or input-dependent manifold structures for further adaptation.
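A rough sketch of TokenGT-style tokenization with Laplacian-eigenvector node identifiers and node/edge type embeddings; dimensions, helper names, and the dense eigendecomposition are illustrative simplifications rather than the reference implementation.

```python
import torch
import torch.nn as nn

def tokenize_graph(x, edge_index, pe_dim=4):
    # TokenGT-style tokenization sketch: one token per node and per edge.
    # Assumes undirected edges stored in both directions so the Laplacian is symmetric.
    n = x.size(0)
    A = torch.zeros(n, n)
    A[edge_index[0], edge_index[1]] = 1.0
    L = torch.diag(A.sum(-1)) - A
    _, eigvecs = torch.linalg.eigh(L)                # dense; fine for small graphs
    pe = eigvecs[:, :pe_dim]                         # (n, pe_dim) node identifiers
    # Node tokens: [features | P_v | P_v]; edge tokens: [zeros | P_u | P_v].
    node_tokens = torch.cat([x, pe, pe], dim=-1)
    src, dst = edge_index
    edge_feat = torch.zeros(edge_index.size(1), x.size(1))   # edges carry no input features
    edge_tokens = torch.cat([edge_feat, pe[src], pe[dst]], dim=-1)
    tokens = torch.cat([node_tokens, edge_tokens], dim=0)    # (n + m, x_dim + 2*pe_dim)
    # Token-type embedding distinguishing node vs. edge tokens
    # (in a real model this is a learned module created once, not per call).
    type_emb = nn.Embedding(2, tokens.size(-1))
    type_ids = torch.cat([torch.zeros(n, dtype=torch.long),
                          torch.ones(edge_index.size(1), dtype=torch.long)])
    return tokens + type_emb(type_ids)

# Tiny example: 4 nodes, 3 undirected edges stored as directed pairs.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
print(tokenize_graph(x, edge_index).shape)           # torch.Size([10, 16])
```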
By allowing the geometry of each attention head’s latent space to adapt to the observed data (attention-induced curvature), FPS-T achieves state-of-the-art performance and parameter efficiency in global-attention graph representation learning, without abandoning the strengths of Transformer architectures.