
FuXi-$γ$: Efficient Sequential Recommendation with Exponential-Power Temporal Encoder and Diagonal-Sparse Positional Mechanism (2512.12740v1)

Published 14 Dec 2025 in cs.IR

Abstract: Sequential recommendation aims to model users' evolving preferences based on their historical interactions. Recent advances leverage Transformer-based architectures to capture global dependencies, but existing methods often suffer from high computational overhead, primarily due to discontinuous memory access in temporal encoding and dense attention over long sequences. To address these limitations, we propose FuXi-$γ$, a novel sequential recommendation framework that improves both effectiveness and efficiency through principled architectural design. FuXi-$γ$ adopts a decoder-only Transformer structure and introduces two key innovations: (1) An exponential-power temporal encoder that encodes relative temporal intervals using a tunable exponential decay function inspired by the Ebbinghaus forgetting curve. This encoder enables flexible modeling of both short-term and long-term preferences while maintaining high efficiency through continuous memory access and pure matrix operations. (2) A diagonal-sparse positional mechanism that prunes low-contribution attention blocks using a diagonal-sliding strategy guided by the persymmetry of Toeplitz matrix. Extensive experiments on four real-world datasets demonstrate that FuXi-$γ$ achieves state-of-the-art performance in recommendation quality, while accelerating training by up to 4.74$\times$ and inference by up to 6.18$\times$, making it a practical and scalable solution for long-sequence recommendation. Our code is available at https://github.com/Yeedzhi/FuXi-gamma.

Summary

  • The paper introduces FuXi-γ, a novel framework that uses exponential-power temporal encoding to model user preference decay.
  • It employs a diagonal-sparse positional mechanism to prune redundant attention, reducing computational overhead by up to 74.56%.
  • Empirical results show significant improvements in recommendation accuracy and speed, with up to 6.18x faster inference compared to baselines.

FuXi-γ: An Efficient Framework for Sequential Recommendation with Exponential-Power Temporal Encoding and Diagonal-Sparse Positional Mechanism

Introduction

Sequential recommendation aims to model user interests evolving over time by leveraging sequential interactions, with accuracy and computational efficiency paramount for real-world deployment. State-of-the-art Transformer-based sequential recommenders increasingly leverage generative, autoregressive architectures but often suffer from inefficiency due to irregular memory access in temporal encoding and the quadratic complexity of dense attention, especially on long sequences. The FuXi-γ framework proposes two principal innovations to address these challenges: (1) an exponential-power temporal encoder operationalized through a tunable exponential decay inspired by the Ebbinghaus forgetting curve, and (2) a diagonal-sparse positional mechanism that prunes superfluous attention via semi-structured block sparsity. Together, these innovations yield improved recommendation quality and enable substantial speedups during both training and inference.

Figure 1: Overall architecture of FuXi-γ highlighting the dual-channel structure and its integration of exponential-power temporal encoding and diagonal-sparse positional pruning.

Methodology

Exponential-Power Temporal Encoder

Temporal dynamics are essential for reflecting user preference decay. Traditional bucket-based temporal encoders (e.g., T5-style log-binning) suffer from discontinuous, non-contiguous memory access and lead to significant computation bottlenecks. FuXi-γ replaces these with a fully matrix-based approach:

Let $T^{i,j} = |t_i - t_j|$ be the absolute time difference between items $i$ and $j$; the temporal attention bias is then computed as:

$$A_{ts}^{i,j} = \alpha \cdot \gamma^{|t_i - t_j|^{\beta}}$$

where $\gamma \in (0,1)$ is a decay hyperparameter controlling the rate of interest attenuation, and $\alpha$, $\beta$ are learnable parameters governing intensity and nonlinearity, respectively. This parameterization smoothly interpolates between aggressively short-range and broad long-term memory, matching behavioral patterns across domains.

A significant hardware-aligned optimization is made via explicit float32 pre-conversion of the temporal distance matrix, thereby avoiding implicit type casts and improving synchronization with hardware accelerators. This results in an additional 12–15% runtime speedup and a reduced memory footprint.


Figure 2: Efficiency comparison of temporal encoders under varying sequence lengths and batch sizes; exponential-power encoding achieves the lowest latency and the best hardware compatibility.
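To make the matrix-based formulation concrete, below is a minimal sketch of the exponential-power temporal bias, assuming PyTorch and a single attention head; the function name and tensor shapes are illustrative, not the authors' released implementation. It follows the formula above and applies the explicit float32 pre-conversion discussed earlier.

```python
import torch

def exponential_power_temporal_bias(timestamps: torch.Tensor,
                                     alpha: torch.Tensor,
                                     beta: torch.Tensor,
                                     gamma: float) -> torch.Tensor:
    """Temporal attention bias A_ts[i, j] = alpha * gamma ** (|t_i - t_j| ** beta).

    timestamps : (batch, L) interaction times (e.g. Unix seconds).
    alpha, beta: learnable scalar parameters (intensity and nonlinearity).
    gamma      : decay hyperparameter in (0, 1).
    """
    # Explicit float32 pre-conversion of the time matrix avoids implicit casts
    # inside the exponentiation (the hardware-aligned optimization noted above).
    t = timestamps.to(torch.float32)
    # Pairwise absolute time differences via pure matrix operations:
    # contiguous memory access, no bucket lookups.
    delta = (t.unsqueeze(-1) - t.unsqueeze(-2)).abs()        # (batch, L, L)
    return alpha * gamma ** delta.pow(beta)                   # (batch, L, L)

# Illustrative use: add the bias to the pre-softmax attention logits.
# logits = q @ k.transpose(-2, -1) / d_k ** 0.5
# logits = logits + exponential_power_temporal_bias(ts, alpha, beta, gamma=0.98)
```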

Diagonal-Sparse Positional Mechanism

While temporal and absolute positional encodings are present, relative positional encodings still provide complementary order information. Naive incorporation, however, introduces dense attention matrices of O(n²) complexity, with high redundancy in typical long-sequence recommendation. FuXi-γ addresses this by:

  1. Block-level division: Partition the positional attention map into s × s blocks.
  2. Importance scoring: Exploiting the persymmetry of the Toeplitz positional map, only the blocks in the leftmost block column are scored (each is representative of its entire block diagonal), with absolute weight sums serving as proxies for block utility.
  3. Diagonal-sliding selection: The least important block diagonals are pruned, and this mask is propagated across the whole attention map.

    Figure 3: Illustration of the diagonal-sparse positional mechanism for sequence length n = 8, stride s = 2, and pruning ratio τ = 50%.

This diagonal-sparse pattern is both hardware-friendly and effective, pruning up to 74.56% of the positional computation with minimal impact on accuracy.
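The following is a minimal sketch of the block-scoring and diagonal-sliding steps, assuming PyTorch and a single (L, L) positional bias with Toeplitz structure; the name `diagonal_sparse_mask`, the block size `s`, and the pruning ratio `tau` follow the description above, but the code is an illustrative reconstruction rather than the released implementation.

```python
import torch

def diagonal_sparse_mask(pos_bias: torch.Tensor, s: int, tau: float) -> torch.Tensor:
    """Block-diagonal pruning mask for a Toeplitz positional-bias matrix.

    pos_bias: (L, L) relative positional attention map (Toeplitz structure assumed).
    s       : block size for the s x s partition.
    tau     : fraction of block diagonals to prune (e.g. 0.5 prunes half).
    Returns an (L, L) boolean mask, True where attention is kept.
    """
    L = pos_bias.shape[0]
    assert L % s == 0, "sequence length must be divisible by the block size"
    nb = L // s                                            # blocks per side

    # 1. Score only the leftmost block column: because the map is Toeplitz,
    #    each of these blocks represents its whole block diagonal.
    blocks = pos_bias[:, :s].reshape(nb, s, s)             # (nb, s, s)
    scores = blocks.abs().sum(dim=(-1, -2))                # weight-sum proxy per block

    # 2. Keep the most important block diagonals (offsets >= 0, i.e. the causal side).
    n_keep = max(1, int(round(nb * (1.0 - tau))))
    keep_diags = torch.topk(scores, n_keep).indices        # retained diagonal offsets

    # 3. Slide the retained diagonals across the whole block grid.
    block_row = torch.arange(nb).unsqueeze(1)              # (nb, 1)
    block_col = torch.arange(nb).unsqueeze(0)              # (1, nb)
    offset = block_row - block_col                         # block-diagonal offset
    keep_block = torch.isin(offset, keep_diags)            # (nb, nb) block-level mask

    # Expand block-level decisions to the element level.
    return keep_block.repeat_interleave(s, 0).repeat_interleave(s, 1)
```

The resulting mask can be applied to the relative positional bias, or the pruned attention blocks can be skipped outright, which is where the reported FLOP reduction comes from.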

Empirical Results

Recommendation Accuracy

Comprehensive evaluation on four datasets (MovieLens-1M, MovieLens-20M, KuaiRand, and a large-scale industrial music dataset) validates FuXi-γ's superiority:

  • On the industrial dataset, FuXi-γ achieves 25.06% HR@10 and 42.86% NDCG@10 improvements over the strongest autoregressive baseline.
  • Under 8-layer models, FuXi-γ surpasses competitive methods by a margin of 3.79% in HR@10 and 4.46% in NDCG@10, with further gains under deeper scaling.

Computational Efficiency

  • FuXi-γ exhibits 4.74× (training) and 6.18× (inference) speedups over comparable autoregressive models on long sequences.
  • Efficiency advantages scale with sequence length due to streamlined architectures and elimination of bucket-based memory fragmentation.


Figure 4: Overall efficiency performance comparison, showing FuXi-γ's consistent improvement as sequence length increases.

Temporal Encoder Analysis

The exponential-power temporal encoder yields:

  • 11× speedup vs. bucket-based encoders at sequence length 1000.
  • Markedly improved learning of both short-term and long-term dependencies, as shown by smoother, cognitively-consonant temporal decay (supported quantitatively in ablation).


Figure 5: Visualization comparison of temporal encoders, revealing continuous and flexible decay in FuXi-γ over previous bucket and inverse-proportion approaches.

Ablation demonstrates the critical contribution of the temporal encoder, with the model's performance significantly degrading when this component is removed.

Robustness & Generalization

  • FuXi-γ demonstrates robustness to cold-start users, fresh items, and long-tail distributions.
  • The diagonal-sparse mechanism allows up to 60% block pruning while retaining >98.9% of accuracy; FLOPs are reduced proportionally.


Figure 6: Impact of pruning ratio τ on ML-20M, showing minor accuracy loss and major FLOP reduction as sparsity increases.

Implications and Future Directions

FuXi-γ presents several substantial implications for the sequential recommendation landscape:

  • Hardware-Aligned AI: By constraining computation to pure matrix operations and structured sparsity, the framework sets a new standard for hardware utilization and energy/performance efficiency.
  • Modeling Flexibility: The tunable exponential-power kernel adapts to varying temporal patterns across domains, broadening the applicability of autoregressive recommenders.
  • Scalable, Practical Deployment: The ability to retain state-of-the-art accuracy under aggressive pruning regimes enables cost-effective deployment in latency-critical settings.

Potential future research avenues include expanding the model to encompass multi-behavior and cross-domain histories, investigating further structured pruning in additional channels, and aligning the diagonal-sparse paradigm with emerging efficient attention mechanisms.

Conclusion

FuXi-γ substantially advances sequential recommendation by coupling cognitive-theory-inspired temporal encoding with a rigorously hardware-optimized, diagonal-sparse positional mechanism. Both theoretically and empirically, the framework achieves strong gains in recommendation effectiveness and efficiency. These architectural innovations lay a promising foundation for future work in deployable, scalable, and cognitively-plausible sequential recommender systems.

Reference: "FuXi-γ: Efficient Sequential Recommendation with Exponential-Power Temporal Encoder and Diagonal-Sparse Positional Mechanism" (2512.12740).
