
Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans (2110.07240v1)

Published 14 Oct 2021 in cs.CL

Abstract: Recursive processing is considered a hallmark of human linguistic abilities. A recent study evaluated recursive processing in recurrent neural network language models (RNN-LMs) and showed that such models perform below chance level on embedded dependencies within nested constructions -- a prototypical example of recursion in natural language. Here, we study whether state-of-the-art Transformer LMs do any better. We test four different Transformer LMs on two different types of nested constructions, which differ in whether the embedded (inner) dependency is short or long range. We find that Transformers achieve near-perfect performance on short-range embedded dependencies, significantly better than previous results reported for RNN-LMs and humans. However, on long-range embedded dependencies, Transformers' performance sharply drops below chance level. Remarkably, the addition of only three words to the embedded dependency caused Transformers to fall from near-perfect to below-chance performance. Taken together, our results reveal Transformers' shortcomings in recursive, structure-based processing.
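The evaluation paradigm the abstract describes, checking whether a Transformer LM prefers the grammatical over the ungrammatical verb form at the embedded (inner) position of a nested construction, can be sketched as follows. This is a minimal illustration, not the authors' released code: the choice of GPT-2, the sentence template, and the verb pair are assumptions; the three-word prepositional phrase ("near the cabinet") stands in for the short-range vs. long-range manipulation mentioned in the abstract.

```python
# Hedged sketch: score number agreement at the embedded verb of a nested
# construction by comparing the LM's log-probability of the correct vs.
# incorrect verb form. Model, template, and verbs are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def verb_logprob(prefix: str, verb: str) -> float:
    """Summed log-probability of `verb` as the continuation of `prefix`."""
    prefix_ids = tokenizer.encode(prefix)
    verb_ids = tokenizer.encode(" " + verb)  # leading space follows GPT-2's BPE convention
    input_ids = torch.tensor([prefix_ids + verb_ids])
    with torch.no_grad():
        logits = model(input_ids).logits       # shape: (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    for i, tok in enumerate(verb_ids):
        # logits at position p predict the token at position p + 1
        total += log_probs[0, len(prefix_ids) + i - 1, tok].item()
    return total

# Short-range inner dependency: the plural inner subject "boys" and its verb are adjacent.
short_prefix = "The keys that the boys"
# Long-range inner dependency: three extra words ("near the cabinet") intervene.
long_prefix = "The keys that the boys near the cabinet"

for prefix in (short_prefix, long_prefix):
    correct, wrong = verb_logprob(prefix, "hold"), verb_logprob(prefix, "holds")
    print(f"{prefix!r}: prefers grammatical 'hold' -> {correct > wrong}")
```

In this setup the two conditions differ only by the intervening three-word phrase, mirroring the abstract's observation that adding three words to the embedded dependency is enough to push Transformer performance from near-perfect to below chance.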

Authors (4)
  1. Yair Lakretz (17 papers)
  2. Théo Desbordes (5 papers)
  3. Dieuwke Hupkes (49 papers)
  4. Stanislas Dehaene (9 papers)
Citations (11)
