Multitask Multimodal Prompted Training for Interactive Embodied Task Completion (2311.04067v1)
Abstract: Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models: 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified encoder-decoder model that reasons over images and trajectories, and casts action prediction as multimodal text generation. By unifying all tasks as text generation, EMMA learns a language of actions which facilitates transfer across tasks. Unlike previous modular approaches with independently trained components, we use a single multitask model in which each task contributes to goal completion. EMMA performs on par with similar models on several VL benchmarks and sets a new state of the art (36.81% success rate) on Dialog-guided Task Completion (DTC), a benchmark for evaluating dialog-guided agents in the Alexa Arena.
- Georgios Pantazopoulos
- Malvina Nikandrou
- Amit Parekh
- Bhathiya Hemanthage
- Arash Eshghi
- Ioannis Konstas
- Verena Rieser
- Oliver Lemon
- Alessandro Suglia
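
The abstract frames action prediction as multimodal text generation inside a single encoder-decoder. Below is a minimal, hypothetical PyTorch sketch of that framing, not EMMA's actual architecture: the class name `ActionAsTextModel`, the feature dimensions, and the specific choice of concatenating projected frame features with prompt tokens into one encoder sequence are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ActionAsTextModel(nn.Module):
    """Hypothetical sketch: action prediction cast as multimodal text generation.

    Visual features from observed frames and the tokenized task prompt/dialogue
    are concatenated into one encoder sequence; the decoder autoregressively
    emits the action as text tokens (e.g. "goto sink <frame_2>").
    """

    def __init__(self, vocab_size=32000, d_model=512, visual_dim=2048,
                 nhead=8, num_layers=4):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Project frame features from a (frozen) vision backbone to model width.
        self.visual_proj = nn.Linear(visual_dim, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, visual_feats, prompt_ids, target_ids):
        # visual_feats: (B, num_visual_tokens, visual_dim) per-frame region/patch features
        # prompt_ids:   (B, prompt_len) task prompt + dialogue history tokens
        # target_ids:   (B, target_len) action sequence expressed as text tokens
        vis = self.visual_proj(visual_feats)
        txt = self.token_emb(prompt_ids)
        encoder_inputs = torch.cat([vis, txt], dim=1)   # single multimodal sequence
        decoder_inputs = self.token_emb(target_ids)
        causal_mask = self.transformer.generate_square_subsequent_mask(target_ids.size(1))
        hidden = self.transformer(encoder_inputs, decoder_inputs, tgt_mask=causal_mask)
        return self.lm_head(hidden)                     # next-token logits over the text vocab


if __name__ == "__main__":
    model = ActionAsTextModel()
    logits = model(
        visual_feats=torch.randn(2, 36, 2048),          # e.g. 36 region features per observation
        prompt_ids=torch.randint(0, 32000, (2, 20)),
        target_ids=torch.randint(0, 32000, (2, 8)),
    )
    print(logits.shape)                                 # torch.Size([2, 8, 32000])
```

Because every task shares this text-generation interface, the same weights can in principle be trained on all tasks jointly, which is the multitask transfer the abstract highlights.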