The Curious Case of Absolute Position Embeddings (2210.12574v1)

Published 23 Oct 2022 in cs.CL and cs.LG

Abstract: Transformer LLMs encode the notion of word order using positional information. Most commonly, this positional information is represented by absolute position embeddings (APEs), which are learned from the pretraining data. However, in natural language it is not absolute position that matters, but relative position, and the extent to which APEs can capture this type of information has not been investigated. In this work, we observe that models trained with APEs over-rely on positional information, to the point that they break down when subjected to sentences with shifted position information. Specifically, when models are given sentences starting from a non-zero position (excluding the effect of priming), they exhibit noticeably degraded performance on zero- to full-shot tasks, across a range of model families and model sizes. Our findings raise questions about the efficacy of APEs in modeling the relativity of position information, and invite further introspection into the sentence- and word-order processing strategies employed by these models.
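
To make the manipulation concrete, below is a minimal sketch (not the authors' code) of the shifted-position probe the abstract describes: feed an APE-trained model the same sentence with its position IDs offset by k, and compare per-token negative log-likelihood. GPT-2 is an assumed stand-in for an APE-trained model, and the offsets and mean-NLL metric are illustrative choices.

```python
# Hedged sketch of the "non-zero starting position" probe, assuming
# GPT-2 (which uses learned absolute position embeddings) via the
# Hugging Face transformers library.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "The quick brown fox jumps over the lazy dog."
input_ids = tokenizer(text, return_tensors="pt").input_ids
seq_len = input_ids.size(1)

def mean_nll(offset: int) -> float:
    # Positions normally run 0..seq_len-1; shifting them by `offset`
    # simulates a sentence that starts at a non-zero position.
    position_ids = torch.arange(offset, offset + seq_len).unsqueeze(0)
    with torch.no_grad():
        out = model(input_ids, position_ids=position_ids, labels=input_ids)
    return out.loss.item()

for k in [0, 10, 100, 500]:  # offsets must stay within GPT-2's 1024 positions
    print(f"offset={k:4d}  mean NLL={mean_nll(k):.3f}")
```

Under the paper's findings, one would expect the mean NLL to rise as the offset grows, even though the text itself is unchanged.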

Authors (6)
  1. Koustuv Sinha (31 papers)
  2. Amirhossein Kazemnejad (6 papers)
  3. Siva Reddy (82 papers)
  4. Joelle Pineau (123 papers)
  5. Dieuwke Hupkes (49 papers)
  6. Adina Williams (72 papers)
Citations (13)
