
2D-RoPE: Rotational Encoding for 2D Data

Updated 5 March 2026
  • 2D-RoPE is a positional encoding method that generalizes rotary encoding to two-dimensional and spherical data by applying block-diagonal rotation matrices to token embeddings.
  • It leverages axial decomposition and compositional rotations to precisely encode relative positions, ensuring the attention mechanism captures spatial relationships with minimal computational overhead.
  • Empirical studies show that 2D-RoPE improves performance in vision, geospatial modeling, and agent-based tasks, while variants like Spiral RoPE and GeoPE further enhance directional and geometric expressiveness.

2D Rotary Positional Embeddings (2D-RoPE) generalize the rotary positional encoding paradigm from one-dimensional token sequences to inputs with intrinsic two-dimensional or even spherical geometry. This extension enables Transformers to represent relative positions or angular separations directly within their key/query projections, thereby aligning the neural attention mechanism with the spatial or geospatial structure of input data such as images, spatial agent states, or geo-located observations.

1. Mathematical Foundations and Construction

The canonical 2D-RoPE formulates positional encoding as a series of axis-aligned and/or jointly parameterized planar rotations in embedding space. For vision or spatial tasks, the most common implementation assigns each input patch, agent, or geotoken a coordinate $(x, y)$ (Cartesian grid or, for geospatial data, latitude $\varphi$ and longitude $\lambda$). The token embedding $q \in \mathbb{R}^d$ is partitioned into two (or generally $B$) equal subspaces, each of which is operated on by a position-dependent block-diagonal rotation matrix.

  • Axial decomposition (Cartesian grids): For even $d$, the embedding is split into $x$- and $y$-halves. Each half is further subdivided into $d/4$ pairs, and for patch coordinates $(x, y)$, each $(q_{2i}, q_{2i+1})$ subvector is rotated by angle $x\theta_i$ or $y\theta_i$, with frequency schedule $\theta_i = 10000^{-2(i-1)/(d/2)}$:

$$R^{(x)}_{i}(x) = \begin{pmatrix} \cos(x\,\theta_i) & -\sin(x\,\theta_i) \\ \sin(x\,\theta_i) & \cos(x\,\theta_i) \end{pmatrix}$$

and similarly for $R^{(y)}_{i}(y)$ (Heo et al., 2024; Zivanovic et al., 26 May 2025; Ostmeier et al., 2024).

  • Composition: The full rotation is the product $R_{2D}(x, y) = R_x(x)\,R_y(y)$, leveraging the fact that these block-diagonal operators commute.
  • Spherical/geographic (Geotransformers): For spherical data, each geotoken position $(\varphi, \lambda)$ is mapped by composing a latitude $x$-axis tilt with a longitude $z$-axis sweep:

$$R(\varphi, \lambda) = R_x(\varphi)\,R_z(\lambda)$$

where $R_x$ and $R_z$ are $3\times 3$ rotation matrices per SO(3) conventions; $d$-dimensional embeddings are formed by block-diagonal stacking over $d/3$ blocks (Unlu, 2024).
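As a concrete illustration, the axial construction above can be sketched in NumPy. The function name and the interleaved $(q_{2i}, q_{2i+1})$ pairing layout are illustrative choices, not a reference implementation:

```python
import numpy as np

def axial_rope_2d(q, x, y):
    """Illustrative axial 2D-RoPE: rotate a d-dim vector q at grid position
    (x, y). The first d/2 dims are rotated by the x-coordinate, the last
    d/2 by the y-coordinate, in d/4 (even, odd) pairs per half."""
    d = q.shape[-1]
    assert d % 4 == 0, "embedding dim must be divisible by 4"
    half = d // 2
    # frequency schedule theta_i = 10000^{-2i/(d/2)}, written 0-indexed
    theta = 10000.0 ** (-2.0 * np.arange(half // 2) / half)
    out = np.empty_like(q, dtype=float)
    for pos, sl in ((x, slice(0, half)), (y, slice(half, d))):
        pairs = q[sl].reshape(-1, 2)            # (d/4, 2) subvectors
        c, s = np.cos(pos * theta), np.sin(pos * theta)
        out[sl] = np.stack([pairs[:, 0] * c - pairs[:, 1] * s,
                            pairs[:, 0] * s + pairs[:, 1] * c], axis=1).ravel()
    return out

q = np.random.default_rng(0).standard_normal(8)
rotated = axial_rope_2d(q, 3.0, 5.0)   # pure rotation, so the norm is preserved
```

Because each block is a pure planar rotation, the transform is norm-preserving and reduces to the identity at the origin $(0, 0)$.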

2. Implementation in Transformer Architectures

Integrating 2D-RoPE requires minimal architectural change. At each layer, token representations are projected to queries and keys. Rotary position encoding is applied by multiplying these vectors by the appropriate block-diagonal rotation as determined by the token's coordinates.

  • Pseudocode (Canonical Linear Grid):

    • Split each embedding into axis-wise subspaces.
    • For each token, precompute sine/cosine phases for each axis and frequency.
    • Apply blockwise rotations to each subvector pair as dictated by its positional scalar.
    • Use the rotated Q/K in the attention computation:

    $$Q'_i = R_{2D}(x_i, y_i)\,Q_i, \qquad K'_j = R_{2D}(x_j, y_j)\,K_j, \qquad \text{Attention}(i, j) = Q'_i \cdot K'_j$$

  • Spherical RoPE (Geotransformer): Each $d$-vector is split into $d/3$ blocks, each transformed by $R(\varphi, \lambda)$ (Unlu, 2024).
  • Continuous and Non-Integral Coordinates: 2D-RoPE extends directly to continuous-valued positions, essential for applications in irregular grids or agent-based modeling (Zivanovic et al., 26 May 2025, Zhao et al., 19 Mar 2025).
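The integration steps above can be sketched end-to-end. The helper below batches the axial rotation over tokens and runs a toy attention over a 2×2 patch grid; names and shapes are illustrative assumptions:

```python
import numpy as np

def rope2d_rotate(v, xs, ys):
    """Apply axial 2D-RoPE to a batch v of shape (n, d), given per-token
    coordinates xs, ys of shape (n,). Illustrative layout, not a reference."""
    n, d = v.shape
    half = d // 2
    theta = 10000.0 ** (-2.0 * np.arange(half // 2) / half)
    out = np.empty_like(v, dtype=float)
    for pos, sl in ((xs, slice(0, half)), (ys, slice(half, d))):
        pairs = v[:, sl].reshape(n, -1, 2)
        ang = pos[:, None] * theta[None, :]       # (n, d/4) rotation angles
        c, s = np.cos(ang), np.sin(ang)
        out[:, sl] = np.stack([pairs[..., 0] * c - pairs[..., 1] * s,
                               pairs[..., 0] * s + pairs[..., 1] * c],
                              axis=-1).reshape(n, -1)
    return out

# toy attention over a 2x2 patch grid: four tokens with (x, y) coordinates
rng = np.random.default_rng(0)
n, d = 4, 8
Q, K = rng.standard_normal((n, d)), rng.standard_normal((n, d))
xs = np.array([0.0, 0.0, 1.0, 1.0])
ys = np.array([0.0, 1.0, 0.0, 1.0])
scores = rope2d_rotate(Q, xs, ys) @ rope2d_rotate(K, xs, ys).T / np.sqrt(d)
attn = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row softmax
```

Note that only the query/key projections change; values and the rest of the attention computation are untouched, which is why the architectural footprint is so small.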

3. Theoretical Properties and Relative Position Encoding

2D-RoPE ensures that, due to the group-structure of rotation matrices, the attention dot-product depends only on relative positional differences:

$$Q_i'^\top K_j' = Q_i^\top R_{2D}(x_i, y_i)^\top R_{2D}(x_j, y_j)\, K_j$$

Given the commutative structure, $R_{2D}(x_i, y_i)^\top R_{2D}(x_j, y_j) = R_{2D}(x_j - x_i,\, y_j - y_i)$, guaranteeing strict relative positional encoding.
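This relative-position property can be checked numerically; the minimal rotation helper below is an illustrative sketch of the axial construction, and the final two scores agree because only the displacement $(\Delta x, \Delta y)$ matters:

```python
import numpy as np

def rotate(v, x, y):
    # minimal axial 2D-RoPE rotation for one d-dim vector (illustrative)
    d = v.shape[-1]
    half = d // 2
    theta = 10000.0 ** (-2.0 * np.arange(half // 2) / half)
    out = np.empty_like(v, dtype=float)
    for pos, sl in ((x, slice(0, half)), (y, slice(half, d))):
        p = v[sl].reshape(-1, 2)
        c, s = np.cos(pos * theta), np.sin(pos * theta)
        out[sl] = np.stack([p[:, 0] * c - p[:, 1] * s,
                            p[:, 0] * s + p[:, 1] * c], axis=1).ravel()
    return out

rng = np.random.default_rng(1)
q, k = rng.standard_normal(8), rng.standard_normal(8)
# absolute positions (2, 3) and (5, 7) vs the same displacement (3, 4) from the origin
s_abs = rotate(q, 2.0, 3.0) @ rotate(k, 5.0, 7.0)
s_rel = rotate(q, 0.0, 0.0) @ rotate(k, 3.0, 4.0)
```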

For spherical data, the inner product's dependence on geodesic separation $\Delta$ follows from the orthonormality of $R^{(d)}(\varphi, \lambda)$ (Unlu, 2024).
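A minimal sketch of the spherical construction, assuming the $R(\varphi, \lambda) = R_x(\varphi)\,R_z(\lambda)$ composition described in Section 1 (the function name and per-block layout are illustrative):

```python
import numpy as np

def so3_rope(v, lat, lon):
    """Illustrative spherical RoPE: rotate each consecutive 3-dim block of v
    by the SO(3) composition R(lat, lon) = R_x(lat) @ R_z(lon)."""
    c, s = np.cos(lat), np.sin(lat)
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,   c,  -s],
                   [0.0,   s,   c]])
    c, s = np.cos(lon), np.sin(lon)
    Rz = np.array([[  c,  -s, 0.0],
                   [  s,   c, 0.0],
                   [0.0, 0.0, 1.0]])
    R = Rx @ Rz
    assert v.shape[-1] % 3 == 0, "embedding dim must be divisible by 3"
    return (v.reshape(-1, 3) @ R.T).ravel()

v = np.random.default_rng(2).standard_normal(9)
w = so3_rope(v, 0.7, 1.3)   # orthogonal transform, hence norm-preserving
```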

  • No Parameter/Memory Blowup: The per-token overhead is $O(d)$; total memory grows linearly in sequence length, matching the vanilla Transformer.

4. Extensions and Variants

2D-RoPE admits several generalizations to enhance geometric fidelity or directional expressiveness:

| Variant | Embedding Block | Geometry Captured |
| --- | --- | --- |
| Axial RoPE | $2\times2$ (per axis) | Axis-aligned displacements |
| RoPE-Mixed | $2\times2$ (per-axis, learnable rates) | Oblique directions, via learnable frequency pairs (Heo et al., 2024) |
| Spiral RoPE | Multi-directional | Oblique directions, via $G$-way directional split and projection (Liu et al., 3 Feb 2026) |
| GeoPE | $3\times3$ quaternionic | Symmetric Euclidean 2D rotations (commutative, shape-aware) (Yao et al., 4 Dec 2025) |
| SO(3) RoPE | $3\times3$ (spherical) | Spherical geometry (e.g., Earth's surface) (Unlu, 2024) |
| DRoPE | $2\times2$ for direction | Periodic angular information for headings (Zhao et al., 19 Mar 2025) |
| LieRE | Higher $\mathfrak{so}(d)$ blocks | Arbitrary dimension/algebraic coupling (Ostmeier et al., 2024) |

Spiral RoPE partitions the embedding into $G$ groups, each encoding displacements along a direction uniformly sampled on the circle, thus covering all spatial Fourier directions and resolving the axis-alignment limitation of plain 2D-RoPE. GeoPE constructs a symmetric $3\times3$ rotation in quaternion space, using the Lie algebraic mean to ensure isotropy with respect to height/width, eliminating sequential proximity artifacts (Yao et al., 4 Dec 2025).
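A hedged sketch of the directional-split idea: each of $G$ groups projects $(x, y)$ onto a direction sampled uniformly on the circle and applies a standard 1D rotary rotation to that scalar. The projection and group layout here are assumptions inferred from the description above, not the exact formulation of Liu et al.:

```python
import numpy as np

def spiral_rope_sketch(v, x, y, G=4):
    """Hypothetical G-way directional split: group g encodes the scalar
    projection of (x, y) onto direction alpha_g with a 1D rotary rotation."""
    d = v.shape[-1]
    gd = d // G                               # dims per group (assumed even)
    alphas = 2.0 * np.pi * np.arange(G) / G   # directions uniform on the circle
    theta = 10000.0 ** (-2.0 * np.arange(gd // 2) / gd)
    out = np.empty_like(v, dtype=float)
    for g, a in enumerate(alphas):
        p = x * np.cos(a) + y * np.sin(a)     # projected 1D position
        pairs = v[g * gd:(g + 1) * gd].reshape(-1, 2)
        c, s = np.cos(p * theta), np.sin(p * theta)
        out[g * gd:(g + 1) * gd] = np.stack(
            [pairs[:, 0] * c - pairs[:, 1] * s,
             pairs[:, 0] * s + pairs[:, 1] * c], axis=1).ravel()
    return out

v = np.random.default_rng(3).standard_normal(16)
w = spiral_rope_sketch(v, 2.0, -1.0)
```

Since every group applies a pure rotation, the sketch remains norm-preserving and still yields strictly relative scores, while covering oblique displacement directions that the two axis-aligned halves miss.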

5. Empirical Evaluation and Application Domains

2D-RoPE is widely adopted in computer vision, geospatial modeling, time-series, and agent interaction tasks. Standard axial 2D-RoPE consistently outperforms 1D RoPE or absolute positional encoding in image classification (ImageNet-1k), object detection (COCO), and segmentation (ADE-20k), with observed gains of 1–2% top-1 accuracy and 1–2 mIoU or AP points (Heo et al., 2024, Liu et al., 3 Feb 2026, Yao et al., 4 Dec 2025). Spiral RoPE and GeoPE further enhance performance, particularly at high resolution and for tasks requiring geometric locality or orientation awareness, with additional improvements up to ~2–3% absolute (Liu et al., 3 Feb 2026, Yao et al., 4 Dec 2025).

In geospatial transformers, spherical RoPE supports predictive learning of real-world great-circle distances, with models converging 2–3× faster and to lower loss when true coordinates are encoded (Unlu, 2024). For agent-centric trajectory models, DRoPE offers competitive minADE and realism metrics without incurring quadratic memory growth (Zhao et al., 19 Mar 2025).

6. Practical Considerations, Computational Overhead, and Limitations

The computational overhead of 2D-RoPE is modest—approximately double that of 1D-RoPE, as independent or joint rotations must be computed per coordinate axis or direction. In practice, this cost is negligible in standard Transformer workloads. No $O(N^2)$ intermediate pairwise tensors are required, unlike classical relative position encoding (RPE) approaches.

Accurate geometric coupling is nontrivial: naive axis decomposition cannot distinguish between spatially distant tokens on adjacent rows (a "false neighbor" effect). Geometric-coupled embeddings (GeoPE, Spiral RoPE) are superior in preserving the 2D manifold and resolving such artifacts (Yao et al., 4 Dec 2025, Liu et al., 3 Feb 2026).

Limitations include reduced expressivity for non-Euclidean domains unless the appropriate generalization (e.g., quaternionic or manifold-based rotations) is used. In spherical applications, metric scaling (radian vs. distance) remains an open area for fine-tuning (Unlu, 2024).

7. Summary and Future Directions

2D-RoPE constitutes a principled and empirically validated approach to injecting spatial or geometric inductive bias into Transformer models. Explicitly using block-diagonal rotation matrices parameterized by spatial, spherical, or directional variables allows for exact relative positional encoding, superior extrapolation, and enhanced geometric faithfulness compared to both absolute and 1D positional encodings.

Recent trends include leveraging Lie-theoretic constructions for full high-dimensional coupling (LieRE), extending to continuous and irregular domains, and increasing directionality/flexibility (Spiral RoPE, GeoPE). Empirical results underline the value of geometric positional encoding in both standard vision and emerging spatial/temporal applications. Ongoing developments focus on better integration with manifold data, learnable frequency parameterizations, and further reducing edge effects and artificial locality biases (Yao et al., 4 Dec 2025, Ostmeier et al., 2024, Liu et al., 3 Feb 2026, Unlu, 2024, Heo et al., 2024, Zivanovic et al., 26 May 2025, Zhao et al., 19 Mar 2025).
