Word Order Does Matter (And Shuffled Language Models Know It) (2203.10995v1)

Published 21 Mar 2022 in cs.CL

Abstract: Recent studies have shown that LLMs pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the importance of word order information. Somewhat counter-intuitively, some of these studies also report that position embeddings appear to be crucial for models' good performance with shuffled text. We probe these LLMs for word order information and investigate what position embeddings learned from shuffled text encode, showing that these models retain information pertaining to the original, naturalistic word order. We show this is in part due to a subtlety in how shuffling is implemented in previous work -- before rather than after subword segmentation. Surprisingly, we find even LLMs trained on text shuffled after subword segmentation retain some semblance of information about word order because of the statistical dependencies between sentence length and unigram probabilities. Finally, we show that beyond GLUE, a variety of language understanding tasks do require word order information, often to an extent that cannot be learned through fine-tuning.
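The before/after-segmentation subtlety mentioned in the abstract can be illustrated with a small sketch. This is a toy illustration, not the authors' code: `toy_segment` is a hypothetical stand-in for a real subword tokenizer (e.g. BPE or WordPiece), and the two functions only contrast the two shuffling orders described above.

```python
import random

def shuffle_before_segmentation(words, segment):
    # Shuffle whole words first, then segment each word into subwords.
    # Subwords belonging to the same word stay adjacent, so some local
    # word-order information survives the shuffle.
    shuffled = words[:]
    random.shuffle(shuffled)
    return [piece for w in shuffled for piece in segment(w)]

def shuffle_after_segmentation(words, segment):
    # Segment first, then shuffle the subword tokens themselves,
    # destroying within-word adjacency as well.
    pieces = [piece for w in words for piece in segment(w)]
    random.shuffle(pieces)
    return pieces

# Hypothetical stand-in for a subword tokenizer; actual experiments
# would use the pretrained model's own vocabulary.
def toy_segment(word):
    return [word[:3], "##" + word[3:]] if len(word) > 3 else [word]

sentence = "shuffled language models retain order".split()
print(shuffle_before_segmentation(sentence, toy_segment))
print(shuffle_after_segmentation(sentence, toy_segment))
```

Under this reading, shuffling before segmentation keeps each word's subword pieces contiguous, which is one reason models trained on such data can still recover word-order signal.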

Authors (4)
  1. Vinit Ravishankar (11 papers)
  2. Mostafa Abdou (18 papers)
  3. Artur Kulmizev (11 papers)
  4. Anders Søgaard (122 papers)
Citations (40)
