Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion (2108.04927v2)

Published 10 Aug 2021 in cs.CV, cs.AI, cs.CL, and cs.LG

Abstract: Language-guided robots performing home and office tasks must navigate in and interact with the world. Grounding language instructions against visual observations and actions to take in an environment is an open challenge. We present Embodied BERT (EmBERT), a transformer-based model which can attend to high-dimensional, multi-modal inputs across long temporal horizons for language-conditioned task completion. Additionally, we bridge the gap between successful object-centric navigation models used for non-interactive agents and the language-guided visual task completion benchmark, ALFRED, by introducing object navigation targets for EmBERT training. We achieve competitive performance on the ALFRED benchmark, and EmBERT marks the first transformer-based model to successfully handle the long-horizon, dense, multi-modal histories of ALFRED, and the first ALFRED model to utilize object-centric navigation targets.
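The abstract describes attending over dense multi-modal histories: language instructions plus per-timestep visual object observations. As a toy sketch (not the paper's actual implementation, and with purely illustrative names), the inputs to such a transformer might be flattened into a single sequence where each element carries a segment id distinguishing language from vision, and a timestep index preserving temporal order:

```python
def build_input_sequence(language_tokens, object_features_per_step):
    """Flatten language tokens and per-timestep object features into one
    sequence. Each element is tagged with a segment id (0 = language,
    1 = vision) and a timestep index (-1 for language, since instructions
    are not tied to a single observation step)."""
    sequence = []
    for tok in language_tokens:
        sequence.append({"value": tok, "segment": 0, "timestep": -1})
    for t, objects in enumerate(object_features_per_step):
        for obj in objects:
            sequence.append({"value": obj, "segment": 1, "timestep": t})
    return sequence

# Illustrative usage: a short instruction plus two observation steps,
# each containing detected-object features (placeholder strings here).
seq = build_input_sequence(
    ["put", "the", "mug", "in", "the", "sink"],
    [["mug_feat", "table_feat"], ["sink_feat"]],
)
```

In a real model, each element would be embedded and the segment/timestep tags added as learned embeddings before self-attention; the sketch only shows how a long-horizon, multi-modal history can be serialized into one attention sequence.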

Authors (5)
  1. Alessandro Suglia (25 papers)
  2. Qiaozi Gao (20 papers)
  3. Jesse Thomason (65 papers)
  4. Govind Thattai (25 papers)
  5. Gaurav Sukhatme (30 papers)
Citations (66)