StableMask: Refining Causal Masking in Decoder-only Transformer (2402.04779v1)

Published 7 Feb 2024 in cs.CL and cs.AI

Abstract: The decoder-only Transformer architecture with causal masking and relative position encoding (RPE) has become the de facto choice in language modeling. Despite its exceptional performance across various tasks, we have identified two limitations: First, it requires all attention scores to be non-zero and to sum to 1, even if the current embedding has sufficient self-contained information. This compels the model to assign disproportionate attention to specific tokens. Second, RPE-based Transformers are not universal approximators because of their limited capacity for encoding absolute positional information, which limits their application in position-critical tasks. In this work, we propose StableMask: a parameter-free method that addresses both limitations by refining the causal mask. It introduces pseudo-attention values to balance attention distributions and encodes absolute positional information via a progressively decreasing mask ratio. StableMask's effectiveness is validated both theoretically and empirically, showing significant improvements in language models with parameter sizes ranging from 71M to 1.4B across diverse datasets and encoding methods. We further show that it naturally supports (1) efficient extrapolation without special tricks such as StreamingLLM and (2) easy integration with existing attention optimization techniques.

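The mask refinement described in the abstract can be pictured directly on the attention matrix. The sketch below is a minimal, illustrative rendering of that idea, not the paper's exact formulation: future positions in the causal mask receive finite pseudo logits instead of negative infinity, so softmax mass can flow into a pseudo region rather than being forced onto real tokens, and the pseudo logits shrink with the row index so each position retains a different amount of leftover mass (a progressively decreasing mask ratio that encodes absolute position). The function name `stablemask_attention`, the linear decay schedule, and the `pseudo_base`/`decay` parameters are assumptions made for illustration.

```python
import numpy as np

def stablemask_attention(scores, pseudo_base=1.0, decay=0.1):
    """Illustrative StableMask-style attention weights (not the paper's exact recipe).

    scores: (n, n) raw query-key attention logits.
    Future positions (j > i) get finite pseudo logits instead of -inf, so the
    softmax over the full row no longer forces the real (j <= i) weights to
    sum to 1. The pseudo logits decrease with the row index, so later rows
    keep less pseudo mass, which injects absolute positional information.
    """
    n = scores.shape[0]
    rows = np.arange(n)[:, None]
    cols = np.arange(n)[None, :]
    future = cols > rows  # strictly-upper-triangular (masked) region

    # Illustrative pseudo logits: a simple linear decay with the row index.
    pseudo = pseudo_base - decay * rows           # shape (n, 1), broadcast over columns
    logits = np.where(future, pseudo, scores)

    # Softmax over the full row, i.e. real plus pseudo positions.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)

    # Only the causal (j <= i) weights would mix value vectors; the pseudo
    # columns just absorb attention mass the model does not need to spend.
    return np.where(future, 0.0, weights)

# Example: 4 tokens with random logits. Row sums over real positions are
# below 1 except for the last row, which has no pseudo columns left.
rng = np.random.default_rng(0)
w = stablemask_attention(rng.normal(size=(4, 4)))
print(w.sum(axis=-1))
```
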
Authors (7)
  1. Qingyu Yin (44 papers)
  2. Xuzheng He (6 papers)
  3. Xiang Zhuang (10 papers)
  4. Yu Zhao (207 papers)
  5. Jianhua Yao (50 papers)
  6. Xiaoyu Shen (73 papers)
  7. Qiang Zhang (466 papers)
Citations (5)