When can transformers compositionally generalize in-context? (2407.12275v1)

Published 17 Jul 2024 in cs.LG and cs.NE

Abstract: Many tasks can be composed from a few independent components. This gives rise to a combinatorial explosion of possible tasks, only some of which might be encountered during training. Under what circumstances can transformers compositionally generalize from a subset of tasks to all possible combinations of tasks that share similar components? Here we study a modular multitask setting that allows us to precisely control compositional structure in the data generation process. We present evidence that transformers learning in-context struggle to generalize compositionally on this task despite being in principle expressive enough to do so. Compositional generalization becomes possible only when introducing a bottleneck that enforces an explicit separation between task inference and task execution.
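To make the setting concrete, below is a minimal illustrative sketch (in Python/NumPy) of a modular multitask data generator of the kind the abstract describes: each task composes a few independent components, the full task space is the Cartesian product of small component pools, and only a subset of combinations appears in training while the rest are held out to probe compositional generalization. The module pools, dimensionality, episode format, and train/test split below are assumptions for illustration, not the paper's exact construction.

```python
import itertools
import random

import numpy as np

# Illustrative sketch (not the paper's exact setup): tasks are compositions of
# one module from pool A and one from pool B, so the task space is the
# Cartesian product of the two pools. Training sees only some combinations;
# held-out combinations test compositional generalization in-context.

rng = np.random.default_rng(0)
DIM = 8  # input/output dimensionality (assumed)

# Two pools of linear "modules"; composing one from each pool defines a task.
POOL_A = [rng.standard_normal((DIM, DIM)) for _ in range(4)]
POOL_B = [rng.standard_normal((DIM, DIM)) for _ in range(4)]


def make_task(i, j):
    """Task (i, j) applies module B_j after module A_i."""
    A, B = POOL_A[i], POOL_B[j]

    def task(x):
        return B @ (A @ x)

    return task


def sample_episode(task, n_context=16):
    """Sample an in-context episode: n_context (x, y) demonstrations plus a query."""
    xs = rng.standard_normal((n_context + 1, DIM))
    ys = np.stack([task(x) for x in xs])
    context = (xs[:-1], ys[:-1])
    query_x, query_y = xs[-1], ys[-1]
    return context, query_x, query_y


# Enumerate all 16 component combinations, then hold some out for evaluation.
all_tasks = list(itertools.product(range(len(POOL_A)), range(len(POOL_B))))
random.Random(0).shuffle(all_tasks)
train_tasks, test_tasks = all_tasks[:12], all_tasks[12:]

# Example usage: draw one training episode for an in-context learner.
context, query_x, query_y = sample_episode(make_task(*train_tasks[0]))
```

In a sketch like this, a transformer trained in-context on episodes from `train_tasks` would be evaluated on episodes from `test_tasks`, whose component combinations never appear during training even though each individual component does.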

Authors (8)
  1. Seijin Kobayashi (16 papers)
  2. Simon Schug (8 papers)
  3. Yassir Akram (7 papers)
  4. Florian Redhardt (1 paper)
  5. Johannes von Oswald (21 papers)
  6. Razvan Pascanu (138 papers)
  7. Guillaume Lajoie (58 papers)
  8. João Sacramento (27 papers)
