DialMAT: Dialogue-Enabled Transformer with Moment-Based Adversarial Training (2311.06855v1)

Published 12 Nov 2023 in cs.CV, cs.CL, and cs.RO

Abstract: This paper focuses on the DialFRED task, which is the task of embodied instruction following in a setting where an agent can actively ask questions about the task. To address this task, we propose DialMAT. DialMAT introduces Moment-based Adversarial Training, which incorporates adversarial perturbations into the latent space of language, image, and action. Additionally, it introduces a crossmodal parallel feature extraction mechanism that applies foundation models to both language and image. We evaluated our model using a dataset constructed from the DialFRED dataset and demonstrated superior performance compared to the baseline method in terms of success rate and path weighted success rate. The model secured the top position in the DialFRED Challenge, which took place at the CVPR 2023 Embodied AI workshop.
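The core idea of injecting adversarial perturbations into latent features can be illustrated with a minimal FGSM-style sketch. This is not the paper's actual implementation (DialMAT's moment-based formulation and its language/image/action encoders are not reproduced here); it is a toy example, assuming a simple logistic head over a latent vector `z`, showing how a perturbation along the loss gradient's sign degrades the model's prediction:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_loss(z, w, y):
    # Binary cross-entropy of a logistic head applied to latent z.
    p = sigmoid(w @ z)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def adversarial_perturb(z, w, y, eps=0.1):
    # FGSM-style step: for a logistic head, the gradient of the
    # BCE loss with respect to the latent z is (p - y) * w.
    p = sigmoid(w @ z)
    grad = (p - y) * w
    return z + eps * np.sign(grad)

rng = np.random.default_rng(0)
z = rng.normal(size=4)   # toy latent feature (stand-in for a language/image/action embedding)
w = rng.normal(size=4)   # toy classifier weights
y = 1.0

z_adv = adversarial_perturb(z, w, y, eps=0.1)
# The perturbed latent yields a higher loss than the clean one.
print(bce_loss(z, w, y) < bce_loss(z_adv, w, y))
```

Training on such perturbed latents (rather than clean ones) is the general adversarial-training recipe the abstract refers to; the paper's contribution is a moment-based variant applied jointly across the language, image, and action latent spaces.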

Authors (10)
  1. Kanta Kaneda (5 papers)
  2. Ryosuke Korekata (6 papers)
  3. Yuiga Wada (7 papers)
  4. Shunya Nagashima (3 papers)
  5. Motonari Kambara (11 papers)
  6. Yui Iioka (2 papers)
  7. Haruka Matsuo (3 papers)
  8. Yuto Imai (10 papers)
  9. Takayuki Nishimura (2 papers)
  10. Komei Sugiura (40 papers)
