
On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets (2205.09249v1)

Published 18 May 2022 in cs.CL, cs.AI, cs.CV, and cs.RO

Abstract: Natural language guided embodied task completion is a challenging problem since it requires understanding natural language instructions, aligning them with egocentric visual observations, and choosing appropriate actions to execute in the environment to produce desired changes. We experiment with augmenting a transformer model for this task with modules that effectively utilize a wider field of view and learn to choose whether the next step requires a navigation or manipulation action. We observed that the proposed modules resulted in improved, and in fact state-of-the-art, performance on the unseen validation set of a popular benchmark dataset, ALFRED. However, our best model selected using the unseen validation set underperforms on the unseen test split of ALFRED, indicating that performance on the unseen validation set may not in itself be a sufficient indicator of whether model improvements generalize to unseen test sets. We highlight this result as we believe it may be a wider phenomenon in machine learning tasks, though it is primarily noticeable only in benchmarks that limit evaluations on test splits, and it highlights the need to modify benchmark design to better account for variance in model performance.
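
The abstract describes a module that decides whether the agent's next step should be a navigation or a manipulation action before routing to the corresponding action predictor. The sketch below is a minimal, hypothetical illustration of that kind of gating on top of pooled transformer features; it is not the authors' implementation, and all names and dimensions (ActionTypeSelector, hidden_dim, the action counts) are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class ActionTypeSelector(nn.Module):
    """Hypothetical sketch: predict navigation vs. manipulation for the next
    step from a fused language-vision feature, and expose separate action
    heads for each action type."""

    def __init__(self, hidden_dim: int, num_nav_actions: int, num_manip_actions: int):
        super().__init__()
        # Binary gate: index 0 = navigation step, index 1 = manipulation step.
        self.type_classifier = nn.Linear(hidden_dim, 2)
        self.nav_head = nn.Linear(hidden_dim, num_nav_actions)
        self.manip_head = nn.Linear(hidden_dim, num_manip_actions)

    def forward(self, fused_feature: torch.Tensor):
        # fused_feature: (batch, hidden_dim) pooled output of a transformer backbone.
        type_logits = self.type_classifier(fused_feature)
        nav_logits = self.nav_head(fused_feature)
        manip_logits = self.manip_head(fused_feature)
        return type_logits, nav_logits, manip_logits


# Usage example with made-up sizes: batch of 4 features of dimension 512.
model = ActionTypeSelector(hidden_dim=512, num_nav_actions=8, num_manip_actions=7)
type_logits, nav_logits, manip_logits = model(torch.randn(4, 512))
# True where the gate predicts a manipulation step, False for navigation.
is_manipulation = type_logits.argmax(dim=-1).bool()
```

In a sketch like this, the gate and the two action heads would typically be trained jointly with separate cross-entropy losses, with the gate's prediction deciding which head's output is executed at inference time.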

Authors (5)
  1. Hyounghun Kim (15 papers)
  2. Aishwarya Padmakumar (17 papers)
  3. Di Jin (104 papers)
  4. Mohit Bansal (304 papers)
  5. Dilek Hakkani-Tur (94 papers)
