Sequence Length Scaling in Vision Transformers for Scientific Images on Frontier (2405.15780v1)

Published 17 Apr 2024 in cs.CV and cs.LG

Abstract: Vision Transformers (ViTs) are pivotal for foundational models in scientific imagery, including Earth science applications, due to their capability to process large sequence lengths. While transformers for text have inspired scaling sequence lengths in ViTs, adapting these techniques to ViTs introduces unique challenges. We develop distributed sequence parallelism for ViTs, enabling them to handle up to 1M tokens. Our approach, leveraging DeepSpeed-Ulysses and Long-Sequence-Segmentation with model sharding, is the first to apply sequence parallelism in ViT training, achieving a 94% batch scaling efficiency on 2,048 AMD MI250X GPUs. Evaluating sequence parallelism in ViTs, particularly in models up to 10B parameters, highlighted substantial bottlenecks. We countered these with hybrid sequence, pipeline, and tensor parallelism combined with flash attention, scaling beyond single-GPU memory limits. Our method significantly enhances climate modeling accuracy, improving temperature predictions by 20%, and marks the first training of a transformer model with a full attention matrix over a 188K sequence length.
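
The central technique named in the abstract, Ulysses-style sequence parallelism, shards the patch-token sequence across GPUs and uses all-to-all collectives so that each rank computes exact attention over the full sequence for a subset of heads. The sketch below illustrates that re-partitioning pattern in plain PyTorch. It is a conceptual example only, not the authors' Frontier implementation: it assumes an already-initialized process group, equal sequence shards per rank, and a head count divisible by the sequence-parallel size, and the names `ulysses_attention` and `_repartition` are illustrative.

```python
# Conceptual sketch of Ulysses-style sequence-parallel attention for a ViT layer.
# Assumes torch.distributed is already initialized (e.g. NCCL backend) and that
# every rank holds an equally sized shard of the patch-token sequence.
import torch
import torch.distributed as dist
import torch.nn.functional as F


def _repartition(x, split_dim, cat_dim, world, group):
    # Split x into `world` chunks along split_dim, exchange chunk i with rank i
    # via all-to-all, then concatenate the received chunks along cat_dim.
    send = [c.contiguous() for c in x.chunk(world, dim=split_dim)]
    recv = [torch.empty_like(c) for c in send]
    dist.all_to_all(recv, send, group=group)
    return torch.cat(recv, dim=cat_dim)


def ulysses_attention(q, k, v, group=None):
    """q, k, v: [local_seq, heads, head_dim] shards of the full patch-token sequence."""
    world = dist.get_world_size(group)
    assert q.shape[1] % world == 0, "head count must be divisible by the sequence-parallel size"

    # All-to-all #1: sequence-sharded -> head-sharded, i.e. [full_seq, heads/world, head_dim].
    qg = _repartition(q, split_dim=1, cat_dim=0, world=world, group=group)
    kg = _repartition(k, split_dim=1, cat_dim=0, world=world, group=group)
    vg = _repartition(v, split_dim=1, cat_dim=0, world=world, group=group)

    # Exact attention over the full sequence for this rank's subset of heads;
    # scaled_dot_product_attention dispatches to a flash/memory-efficient kernel when available.
    out = F.scaled_dot_product_attention(
        qg.transpose(0, 1), kg.transpose(0, 1), vg.transpose(0, 1)
    ).transpose(0, 1)

    # All-to-all #2: head-sharded -> back to sequence-sharded [local_seq, heads, head_dim].
    return _repartition(out, split_dim=0, cat_dim=1, world=world, group=group)
```

The pair of all-to-alls is what keeps attention exact over the full sequence while every other activation stays sequence-sharded, which is how sequence lengths in the 188K-to-1M-token range described in the abstract can be spread across many GPUs instead of residing on one device.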

Authors (12)
  1. Aristeidis Tsaris (16 papers)
  2. Chengming Zhang (19 papers)
  3. Xiao Wang (507 papers)
  4. Junqi Yin (30 papers)
  5. Siyan Liu (13 papers)
  6. Moetasim Ashfaq (3 papers)
  7. Ming Fan (32 papers)
  8. Jong Youl Choi (12 papers)
  9. Mohamed Wahib (38 papers)
  10. Dan Lu (30 papers)
  11. Prasanna Balaprakash (91 papers)
  12. Feiyi Wang (13 papers)