Multitask Multimodal Prompted Training for Interactive Embodied Task Completion (2311.04067v1)

Published 7 Nov 2023 in cs.LG, cs.AI, and cs.CV

Abstract: Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models: 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified encoder-decoder model that reasons over images and trajectories, and casts action prediction as multimodal text generation. By unifying all tasks as text generation, EMMA learns a language of actions which facilitates transfer across tasks. Unlike previous modular approaches with independently trained components, we use a single multitask model where each task contributes to goal completion. EMMA performs on par with similar models on several VL benchmarks and sets a new state-of-the-art performance (36.81% success rate) on Dialog-guided Task Completion (DTC), a benchmark to evaluate dialog-guided agents in the Alexa Arena.
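The "language of actions" described in the abstract amounts to serializing structured agent actions into text so a single encoder-decoder can predict them as ordinary token generation. A minimal sketch of that idea, assuming a hypothetical action format (the names `Action`, `to_text`, and `from_text`, and the `<act>`/`<obj>` markers, are illustrative assumptions, not EMMA's actual API):

```python
# Sketch: structured actions serialized to a shared text vocabulary,
# so action prediction becomes multimodal text generation.
# All names and token markers here are hypothetical.

from dataclasses import dataclass


@dataclass
class Action:
    verb: str    # e.g. "goto", "pickup"
    target: str  # object or location referenced in the dialog


def to_text(action: Action) -> str:
    """Serialize a structured action into a text string for the decoder."""
    return f"<act> {action.verb} <obj> {action.target}"


def from_text(text: str) -> Action:
    """Parse a generated string back into a structured action."""
    _, verb, _, target = text.split(maxsplit=3)
    return Action(verb=verb, target=target)


# Round trip: the model would generate strings like this token by token,
# and the environment interface parses them back into executable actions.
pred = to_text(Action("pickup", "mug"))
assert from_text(pred) == Action("pickup", "mug")
```

Casting every task into this shared text space is what lets one multitask model transfer across tasks, since all outputs share a single vocabulary and decoding procedure.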

Authors (9)
  1. Georgios Pantazopoulos (7 papers)
  2. Malvina Nikandrou (8 papers)
  3. Amit Parekh (5 papers)
  4. Bhathiya Hemanthage (2 papers)
  5. Arash Eshghi (23 papers)
  6. Ioannis Konstas (40 papers)
  7. Verena Rieser (58 papers)
  8. Oliver Lemon (39 papers)
  9. Alessandro Suglia (25 papers)
Citations (7)