Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models (2106.05505v1)

Published 10 Jun 2021 in cs.CL

Abstract: In this paper, we detail the relationship between convolutions and self-attention in natural language tasks. We show that relative position embeddings in self-attention layers are equivalent to recently-proposed dynamic lightweight convolutions, and we consider multiple new ways of integrating convolutions into Transformer self-attention. Specifically, we propose composite attention, which unites previous relative position embedding methods under a convolutional framework. We conduct experiments by training BERT with composite attention, finding that convolutions consistently improve performance on multiple downstream tasks, replacing absolute position embeddings. To inform future work, we present results comparing lightweight convolutions, dynamic convolutions, and depthwise-separable convolutions in language model pre-training, considering multiple injection points for convolutions in self-attention layers.
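
To make the abstract's central claim concrete (a relative position term in self-attention depends only on the offset i - j, so it behaves like a convolution kernel shared across positions), here is a minimal PyTorch sketch of attention whose scores combine a content term with a learned per-offset term. The class name RelativeConvAttention, the scalar-per-offset bias, and the clipping range max_rel are illustrative assumptions, not the authors' composite attention implementation.

```python
# Hypothetical sketch (not the paper's code): single-head self-attention whose
# score for the pair (i, j) is a content term q_i . k_j plus a learned term
# indexed by the clipped relative offset i - j. Because the position term
# depends only on the offset, it acts like a (lightweight) convolution kernel
# over positions, which is the equivalence the abstract refers to.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeConvAttention(nn.Module):
    def __init__(self, d_model: int, max_rel: int = 8):
        super().__init__()
        self.d_model = d_model
        self.max_rel = max_rel
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # One learned scalar per clipped relative offset in [-max_rel, max_rel].
        self.rel_bias = nn.Parameter(torch.zeros(2 * max_rel + 1))

    def forward(self, x):  # x: (batch, seq, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        content = q @ k.transpose(-2, -1) / math.sqrt(self.d_model)  # (batch, seq, seq)

        seq = x.size(1)
        pos = torch.arange(seq, device=x.device)
        rel = (pos[:, None] - pos[None, :]).clamp(-self.max_rel, self.max_rel)  # rel[i, j] = i - j
        position = self.rel_bias[rel + self.max_rel]  # (seq, seq), offset-indexed "kernel"

        attn = F.softmax(content + position, dim=-1)  # composite score: content + position
        return attn @ v
```

In this sketch the rel_bias table plays the role of the convolution kernel: it is indexed only by the relative offset, so the same learned values are reused at every query position, much as a depthwise convolution reuses its kernel across the sequence.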

Authors (4)
  1. Tyler A. Chang (17 papers)
  2. Yifan Xu (92 papers)
  3. Weijian Xu (12 papers)
  4. Zhuowen Tu (80 papers)
Citations (14)
