Skip-Layer Attention: Bridging Abstract and Detailed Dependencies in Transformers (2406.11274v1)
Abstract: The Transformer architecture has significantly advanced deep learning, particularly in natural language processing, by effectively managing long-range dependencies. However, as the demand for understanding complex relationships grows, refining the Transformer's architecture becomes critical. This paper introduces Skip-Layer Attention (SLA) to enhance Transformer models by enabling direct attention between non-adjacent layers. This method improves the model's ability to capture dependencies between high-level abstract features and low-level details. By facilitating direct attention between these diverse feature levels, our approach overcomes the limitations of current Transformers, which often rely on suboptimal intra-layer attention. Our implementation extends the Transformer's functionality by enabling queries in a given layer to interact with keys and values from both the current layer and one preceding layer, thus enhancing the diversity of multi-head attention without additional computational burden. Extensive experiments demonstrate that our enhanced Transformer model achieves superior performance in language modeling tasks, highlighting the effectiveness of our skip-layer attention mechanism.
- Qian Chen
- Wen Wang
- Qinglin Zhang
- Siqi Zheng
- Shiliang Zhang
- Chong Deng
- Hai Yu
- Jiaqing Liu
- Yukun Ma
- Chong Zhang
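
As a rough illustration of the mechanism described in the abstract, the following minimal PyTorch sketch lets some attention heads take their keys and values from the current layer while the remaining heads read them from one preceding layer, keeping the total attention cost comparable to standard multi-head attention. The class name `SkipLayerAttention`, the head split, the shared key/value projections across the two sources, and the omission of causal masking are assumptions made for brevity, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SkipLayerAttention(nn.Module):
    """Multi-head attention where part of the heads attend to keys/values
    from the current layer and the rest to one preceding layer (sketch)."""

    def __init__(self, d_model: int, n_heads: int, n_skip_heads: int):
        super().__init__()
        assert d_model % n_heads == 0 and n_skip_heads <= n_heads
        self.n_heads = n_heads
        self.n_skip_heads = n_skip_heads  # heads that attend to the earlier layer
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def _split(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, seq, d_model) -> (batch, heads, seq, d_head)
        b, s, _ = x.shape
        return x.view(b, s, self.n_heads, self.d_head).transpose(1, 2)

    def forward(self, x: torch.Tensor, x_skip: torch.Tensor) -> torch.Tensor:
        # x:      hidden states of the current layer, (batch, seq, d_model)
        # x_skip: hidden states of one preceding layer, same shape
        q = self._split(self.q_proj(x))
        k_cur, v_cur = self._split(self.k_proj(x)), self._split(self.v_proj(x))
        k_skip, v_skip = self._split(self.k_proj(x_skip)), self._split(self.v_proj(x_skip))
        # Route the first (n_heads - n_skip_heads) heads to current-layer K/V
        # and the remaining heads to the preceding layer's K/V, so attention
        # cost matches standard multi-head attention.
        split = self.n_heads - self.n_skip_heads
        k = torch.cat([k_cur[:, :split], k_skip[:, split:]], dim=1)
        v = torch.cat([v_cur[:, :split], v_skip[:, split:]], dim=1)
        # Causal masking (needed for language modeling) is omitted for brevity.
        out = F.scaled_dot_product_attention(q, k, v)  # (b, h, s, d_head)
        b, h, s, d = out.shape
        return self.out_proj(out.transpose(1, 2).reshape(b, s, h * d))


if __name__ == "__main__":
    sla = SkipLayerAttention(d_model=64, n_heads=8, n_skip_heads=4)
    cur = torch.randn(2, 16, 64)   # current-layer hidden states
    prev = torch.randn(2, 16, 64)  # hidden states from one layer below
    print(sla(cur, prev).shape)    # torch.Size([2, 16, 64])
```

In a full model, `x_skip` would be the cached output of an earlier Transformer block; how many heads skip, which preceding layer they read from, and whether projections are shared are design choices the abstract does not specify.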