
AGIL: Learning Attention from Human for Visuomotor Tasks (1806.03960v1)

Published 1 Jun 2018 in cs.CV, cs.AI, and cs.LG

Abstract: When intelligent agents learn visuomotor behaviors from human demonstrations, they may benefit from knowing where the human is allocating visual attention, which can be inferred from gaze. Human gaze allocation conveys rich information about intelligent decision making, so exploiting it has the potential to improve an agent's performance. With this motivation, we propose the AGIL (Attention Guided Imitation Learning) framework. We collect high-quality human action and gaze data while humans play Atari games in a carefully controlled experimental setting. Using these data, we first train a deep neural network that predicts human gaze positions and visual attention with high accuracy (the gaze network), and then train another network to predict human actions (the policy network). Incorporating the learned attention model from the gaze network into the policy network significantly improves action prediction accuracy and task performance.
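
The abstract describes a two-stage pipeline: a gaze network predicts a human attention map, and a policy network uses that map when predicting actions. The sketch below is a minimal PyTorch illustration of that idea, assuming 84x84 four-frame Atari stacks and a saliency-mask fusion; the layer sizes, the fusion rule, and the class names (GazeNetwork, AttentionGuidedPolicy) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GazeNetwork(nn.Module):
    """Stage 1: predicts a spatial distribution of human gaze over the frame
    (hypothetical conv/deconv layer sizes)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=8, stride=4),
        )

    def forward(self, frames):
        # frames: (B, 4, 84, 84) stack of grayscale Atari frames
        logits = self.decoder(self.encoder(frames))  # (B, 1, 84, 84)
        b = logits.size(0)
        # Softmax over all pixels yields a valid saliency/attention map.
        return torch.softmax(logits.view(b, -1), dim=1).view_as(logits)

class AttentionGuidedPolicy(nn.Module):
    """Stage 2: policy network that fuses the raw frames with
    gaze-masked frames before predicting the human's action."""
    def __init__(self, n_actions, gaze_net):
        super().__init__()
        self.gaze_net = gaze_net

        def stream():
            return nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
            )
        self.raw_stream = stream()
        self.masked_stream = stream()
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, frames):
        # The gaze network is assumed pre-trained on human gaze data and frozen here.
        with torch.no_grad():
            saliency = self.gaze_net(frames)         # (B, 1, 84, 84)
        masked = frames * saliency                   # attention-weighted frames
        feats = 0.5 * (self.raw_stream(frames) + self.masked_stream(masked))
        return self.head(feats)                      # action logits for behavior cloning

# Usage sketch: 18 actions is the full Atari action set.
policy = AttentionGuidedPolicy(n_actions=18, gaze_net=GazeNetwork())
logits = policy(torch.rand(2, 4, 84, 84))            # (2, 18)
```

In this reading of the abstract, the gaze network would be trained first on recorded human gaze (for example, by matching predicted and human saliency maps), then frozen while the policy network is trained on human actions with a standard cross-entropy imitation loss.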

Authors (7)
  1. Ruohan Zhang (34 papers)
  2. Zhuode Liu (2 papers)
  3. Luxin Zhang (12 papers)
  4. Jake A. Whritner (2 papers)
  5. Karl S. Muller (2 papers)
  6. Mary M. Hayhoe (2 papers)
  7. Dana H. Ballard (4 papers)
Citations (69)