
Action-Based Representation Learning for Autonomous Driving (2008.09417v2)

Published 21 Aug 2020 in cs.CV, cs.LG, and cs.RO

Abstract: Human drivers produce a vast amount of data which could, in principle, be used to improve autonomous driving systems. Unfortunately, seemingly straightforward approaches for creating end-to-end driving models that map sensor data directly into driving actions are problematic in terms of interpretability, and typically have significant difficulty dealing with spurious correlations. Alternatively, we propose to use this kind of action-based driving data for learning representations. Our experiments show that an affordance-based driving model pre-trained with this approach can leverage a relatively small amount of weakly annotated imagery and outperform pure end-to-end driving models, while being more interpretable. Further, we demonstrate how this strategy outperforms previous methods based on learning inverse dynamics models as well as other methods based on heavy human supervision (ImageNet).
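The two-stage idea in the abstract — pre-train a representation from plentiful action-labeled driving data, then fit an affordance predictor on a small weakly annotated set — can be illustrated with a deliberately minimal linear sketch on synthetic data. Everything below (dimensions, variable names, the linear setup, the random-encoder baseline) is a hypothetical toy construction for intuition, not the paper's actual architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy dimensions (not from the paper)
D_img, D_act, D_aff, D_feat = 32, 2, 3, 2

# Synthetic ground truth: actions and affordances share a latent subspace,
# mimicking the premise that action-relevant features are affordance-relevant.
L = rng.normal(size=(D_act, D_img))        # action-relevant directions of the "image"
G = rng.normal(size=(D_aff, D_act))        # affordances as functions of those directions

# Stage 1: "pre-train" a linear encoder on plentiful action-labeled frames
# (a closed-form stand-in for behavior-cloning-style pre-training).
X_big = rng.normal(size=(5000, D_img))     # abundant driving frames
Y_act = X_big @ L.T                        # expert actions recorded with them
W_enc, *_ = np.linalg.lstsq(X_big, Y_act, rcond=None)   # encoder: (D_img, D_feat)

# Stage 2: freeze the encoder, fit an affordance head on a small labeled set.
X_small = rng.normal(size=(100, D_img))    # small weakly annotated set
Y_aff = X_small @ L.T @ G.T
Z = X_small @ W_enc                        # frozen pre-trained features
H, *_ = np.linalg.lstsq(Z, Y_aff, rcond=None)
mse_pretrained = np.mean((Z @ H - Y_aff) ** 2)

# Baseline: a random frozen encoder of the same size, fit the same way.
W_rand = rng.normal(size=(D_img, D_feat))
Z_r = X_small @ W_rand
H_r, *_ = np.linalg.lstsq(Z_r, Y_aff, rcond=None)
mse_random = np.mean((Z_r @ H_r - Y_aff) ** 2)

print(f"affordance MSE, action-pretrained encoder: {mse_pretrained:.2e}")
print(f"affordance MSE, random encoder:            {mse_random:.2e}")
```

Because the action-pretrained encoder recovers exactly the latent directions the affordances depend on, the small labeled set suffices to fit the affordance head, while the random encoder of the same capacity does not; this is the intuition behind pre-training on cheap action data before fine-tuning with limited weak annotation.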

Authors (4)
  1. Yi Xiao (49 papers)
  2. Felipe Codevilla (10 papers)
  3. Christopher Pal (97 papers)
  4. Antonio M. Lopez (19 papers)
Citations (9)