A Benchmark for Systematic Generalization in Grounded Language Understanding (2003.05161v2)

Published 11 Mar 2020 in cs.CL, cs.AI, and cs.LG

Abstract: Humans easily interpret expressions that describe unfamiliar situations composed from familiar parts ("greet the pink brontosaurus by the ferris wheel"). Modern neural networks, by contrast, struggle to interpret novel compositions. In this paper, we introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding. Going beyond a related benchmark that focused on syntactic aspects of generalization, gSCAN defines a language grounded in the states of a grid world, facilitating novel evaluations of acquiring linguistically motivated rules. For example, agents must understand how adjectives such as 'small' are interpreted relative to the current world state, or how adverbs such as 'cautiously' combine with new verbs. We test a strong multi-modal baseline model and a state-of-the-art compositional method, finding that, in most cases, they fail dramatically when generalization requires systematic compositional rules.
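The abstract's point that adjectives like 'small' are relative to the current world state can be illustrated with a minimal sketch. This is not the official gSCAN code; the object representation, the size scale, and the function names here are hypothetical, chosen only to show how a referent is resolved against world state:

```python
# Minimal sketch (hypothetical, not the gSCAN implementation): resolving a
# size adjective such as "small" relative to the objects currently present,
# rather than against a fixed absolute threshold.

from dataclasses import dataclass

@dataclass
class WorldObject:
    shape: str
    color: str
    size: int  # e.g. 1 (smallest) .. 4 (largest); scale is an assumption

def resolve_referent(adjective: str, shape: str, world: list) -> WorldObject:
    """Pick the object a phrase like 'the small circle' refers to.

    'small' and 'big' are relative: among the objects of the named shape
    currently in the world, the smallest (or largest) one is the referent.
    """
    candidates = [o for o in world if o.shape == shape]
    if adjective == "small":
        return min(candidates, key=lambda o: o.size)
    if adjective == "big":
        return max(candidates, key=lambda o: o.size)
    raise ValueError(f"unknown adjective: {adjective}")

world = [
    WorldObject("circle", "red", 2),
    WorldObject("circle", "blue", 4),
    WorldObject("square", "green", 1),
]

# "the small circle" picks the size-2 red circle here, even though size 2
# is not the smallest size present in the world overall (the square is 1).
target = resolve_referent("small", "circle", world)
```

A model that memorizes "small = size 1" would fail here, which is exactly the kind of context-dependent rule the benchmark tests.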

Authors (5)
  1. Laura Ruis (10 papers)
  2. Jacob Andreas (116 papers)
  3. Marco Baroni (58 papers)
  4. Diane Bouchacourt (32 papers)
  5. Brenden M. Lake (41 papers)
Citations (134)
