
Sampling Foundational Transformer: A Theoretical Perspective (2408.05822v2)

Published 11 Aug 2024 in cs.LG and cs.CV

Abstract: The versatility of the self-attention mechanism has earned transformers great success across almost all data modalities, though with limitations in quadratic complexity and difficulty of training. To apply transformers to different data modalities, practitioners must devise specific, data-modality-dependent constructions. In this paper, we propose the Sampling Foundational Transformer (SFT), which can operate on multiple data modalities (e.g., point cloud, graph, and sequence) and under constraints (e.g., rotational invariance). The existence of such a model is important, as contemporary foundational modeling requires operability on multiple data sources. For efficiency on large numbers of tokens, our model relies on a context-aware sampling-without-replacement mechanism that yields both linear asymptotic computational complexity and real inference-time gains. For training efficiency, we rely on our newly discovered pseudoconvex formulation of the transformer layer to increase the model's convergence rate. As a model working on multiple data modalities, SFT achieves competitive results on many benchmarks while being faster at inference than other, far more specialized models.
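The abstract only sketches the mechanism at a high level: score tokens from context, sample a subset without replacement, and attend over that subset so cost grows linearly in sequence length for a fixed sample size. Below is a minimal, illustrative sketch of that general idea, not the authors' SFT implementation; the learned linear scoring layer, Gumbel top-k sampling, and the `SampledAttention` module name are assumptions for illustration.

```python
# Hypothetical sketch (not the paper's implementation): context-aware
# sampling-without-replacement followed by attention over the sampled tokens.
import torch
import torch.nn as nn

class SampledAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4, k: int = 64):
        super().__init__()
        self.k = k
        self.score = nn.Linear(dim, 1)  # assumed form of a context-aware importance score
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_tokens, dim)
        b, n, d = x.shape
        k = min(self.k, n)
        logits = self.score(x).squeeze(-1)  # (batch, n_tokens) per-token scores
        # Gumbel top-k draws k tokens without replacement, weighted by the scores
        gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-9) + 1e-9)
        idx = (logits + gumbel).topk(k, dim=-1).indices  # (batch, k)
        sampled = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, d))
        # Every token attends only to the sampled subset: O(n * k) instead of O(n^2)
        out, _ = self.attn(x, sampled, sampled)
        return out

# Usage example:
# y = SampledAttention(dim=128)(torch.randn(2, 1024, 128))  # y: (2, 1024, 128)
```

With k fixed, the attention map is n-by-k rather than n-by-n, which is the source of the linear asymptotic complexity claimed in the abstract; the paper's actual sampling scheme and pseudoconvex layer formulation are not reproduced here.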

Authors (6)
  1. Viet Anh Nguyen (60 papers)
  2. Minh Lenhat (2 papers)
  3. Khoa Nguyen (34 papers)
  4. Duong Duc Hieu (2 papers)
  5. Dao Huu Hung (2 papers)
  6. Truong Son Hy (28 papers)