CAPE: Encoding Relative Positions with Continuous Augmented Positional Embeddings (2106.03143v3)

Published 6 Jun 2021 in cs.LG, cs.CL, and cs.CV

Abstract: Without positional information, attention-based Transformer neural networks are permutation-invariant. Absolute or relative positional embeddings are the most popular ways to feed Transformer models with positional information. Absolute positional embeddings are simple to implement, but suffer from generalization issues when evaluating on sequences longer than seen at training time. Relative positions are more robust to input length change, but are more complex to implement and yield inferior model throughput due to extra computational and memory costs. In this paper, we propose an augmentation-based approach (CAPE) for absolute positional embeddings, which keeps the advantages of both absolute (simplicity and speed) and relative positional embeddings (better generalization). In addition, our empirical evaluation on state-of-the-art models in machine translation, image and speech recognition demonstrates that CAPE leads to better generalization performance as well as increased stability with respect to training hyper-parameters.
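
The abstract leaves the augmentation itself implicit; as a rough illustration, below is a minimal PyTorch sketch of the general idea of perturbing continuous positions (a random global shift, small per-position local shifts, and a global rescaling) before a standard sinusoidal encoding. The function name and the default shift/scale magnitudes are illustrative assumptions, not values taken from the paper, and this is not the authors' reference implementation.

```python
import math
import torch

def cape_sinusoidal_embeddings(seq_len, d_model,
                               max_global_shift=5.0,   # assumed magnitudes, chosen for illustration
                               max_local_shift=0.5,
                               max_global_scale=1.03,
                               training=True):
    """Sinusoidal embeddings over continuously augmented positions (d_model assumed even).

    During training, positions are perturbed with a random global shift,
    small per-position (local) shifts, and a global scaling; at evaluation
    time the raw positions are used unchanged.
    """
    pos = torch.arange(seq_len, dtype=torch.float32)  # continuous positions 0..seq_len-1
    if training:
        pos = pos + (torch.rand(1) * 2 - 1) * max_global_shift       # global shift
        pos = pos + (torch.rand(seq_len) * 2 - 1) * max_local_shift  # local per-position shift
        log_scale = math.log(max_global_scale)
        pos = pos * torch.exp((torch.rand(1) * 2 - 1) * log_scale)   # global rescaling
    # standard sinusoidal encoding of the (augmented) continuous positions
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                    * (-math.log(10000.0) / d_model))
    emb = torch.zeros(seq_len, d_model)
    emb[:, 0::2] = torch.sin(pos[:, None] * div)
    emb[:, 1::2] = torch.cos(pos[:, None] * div)
    return emb
```

At evaluation time the positions are left untouched, so the inference path is the same as plain absolute sinusoidal embeddings, which is consistent with the abstract's claim of keeping the simplicity and speed of absolute positional embeddings.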

Authors (5)
  1. Tatiana Likhomanenko (41 papers)
  2. Qiantong Xu (26 papers)
  3. Gabriel Synnaeve (97 papers)
  4. Ronan Collobert (55 papers)
  5. Alex Rogozhnikov (6 papers)
Citations (50)
