Convolution-enhanced Evolving Attention Networks (2212.08330v2)

Published 16 Dec 2022 in cs.LG, cs.CL, cs.CV, and cs.NE

Abstract: Attention-based neural networks, such as Transformers, have become ubiquitous in numerous applications, including computer vision, natural language processing, and time-series analysis. In all kinds of attention networks, the attention maps are crucial as they encode semantic dependencies between input tokens. However, most existing attention networks perform modeling or reasoning based on representations, wherein the attention maps of different layers are learned separately without explicit interactions. In this paper, we propose a novel and generic evolving attention mechanism, which directly models the evolution of inter-token relationships through a chain of residual convolutional modules. The major motivations are twofold. On the one hand, the attention maps in different layers share transferable knowledge, thus adding a residual connection can facilitate the information flow of inter-token relationships across layers. On the other hand, there is naturally an evolutionary trend among attention maps at different abstraction levels, so it is beneficial to exploit a dedicated convolution-based module to capture this process. Equipped with the proposed mechanism, the convolution-enhanced evolving attention networks achieve superior performance in various applications, including time-series representation, natural language understanding, machine translation, and image classification. Especially on time-series representation tasks, the Evolving Attention-enhanced Dilated Convolutional (EA-DC-) Transformer outperforms state-of-the-art models significantly, achieving an average of 17% improvement compared to the best SOTA. To the best of our knowledge, this is the first work that explicitly models the layer-wise evolution of attention maps. Our implementation is available at https://github.com/pkuyym/EvolvingAttention.
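The abstract describes attention maps that are passed across layers through a residual link and refined by a convolutional module. The sketch below is a minimal illustration of that idea, not the authors' released implementation: the class name, the mixing weights alpha and beta, and the single 3x3 convolution over head channels are all assumptions made for clarity.

```python
# Illustrative sketch of an "evolving attention" layer: the previous layer's
# attention logits are mixed in via a residual connection, then refined by a
# 2D convolution that treats heads as channels. Hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvolvingAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, alpha: float = 0.5, beta: float = 0.5):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Convolution over the (tokens x tokens) attention maps; heads act as
        # channels, so local token-pair patterns and cross-head cues are mixed.
        self.attn_conv = nn.Conv2d(n_heads, n_heads, kernel_size=3, padding=1)
        self.alpha, self.beta = alpha, beta  # residual mixing weights (assumed values)

    def forward(self, x, prev_logits=None):
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(B, T, self.n_heads, self.d_head).transpose(1, 2)

        # Standard scaled dot-product logits for the current layer: (B, heads, T, T).
        logits = q @ k.transpose(-2, -1) / self.d_head ** 0.5

        # Evolving attention: residual link to the previous layer's attention
        # logits, followed by a convolutional refinement of the combined map.
        if prev_logits is not None:
            logits = self.alpha * logits + self.beta * prev_logits
        logits = logits + self.attn_conv(logits)

        attn = F.softmax(logits, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, -1)
        # Return the logits so the next layer can continue the evolution chain.
        return self.out(out), logits
```

In use, each layer would receive the logits returned by the layer below it, so the chain of convolutional modules operates on attention maps rather than only on token representations; the exact mixing scheme in the paper may differ from this sketch.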

Authors (10)
  1. Yujing Wang (53 papers)
  2. Yaming Yang (39 papers)
  3. Zhuo Li (164 papers)
  4. Jiangang Bai (4 papers)
  5. Mingliang Zhang (17 papers)
  6. Xiangtai Li (128 papers)
  7. Jing Yu (99 papers)
  8. Ce Zhang (215 papers)
  9. Gao Huang (178 papers)
  10. Yunhai Tong (69 papers)
Citations (4)