Multi-Agent Imitation Learning for Driving Simulation (1803.01044v1)

Published 2 Mar 2018 in cs.AI

Abstract: Simulation is an appealing option for validating the safety of autonomous vehicles. Generative Adversarial Imitation Learning (GAIL) has recently been shown to learn representative human driver models. These human driver models were learned through training in single-agent environments, but they have difficulty in generalizing to multi-agent driving scenarios. We argue these difficulties arise because observations at training and test time are sampled from different distributions. This difference makes such models unsuitable for the simulation of driving scenes, where multiple agents must interact realistically over long time horizons. We extend GAIL to address these shortcomings through a parameter-sharing approach grounded in curriculum learning. Compared with single-agent GAIL policies, policies generated by our PS-GAIL method prove superior at interacting stably in a multi-agent setting and capturing the emergent behavior of human drivers.
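
The abstract describes PS-GAIL only at a high level, but its two ingredients (a single policy whose parameters are shared by every simulated driver, trained against a GAIL discriminator, and a curriculum that gradually increases how many agents that policy controls) can be illustrated with a toy training loop. The sketch below is an assumption-laden illustration, not the authors' implementation: the dynamics, the linear discriminator, and the hill-climbing update standing in for GAIL's policy-gradient step are all placeholders.

```python
# Minimal sketch of the PS-GAIL training structure described in the abstract:
# one shared policy drives every simulated agent, a discriminator supplies a
# surrogate reward, and a curriculum grows the number of controlled agents.
# Dynamics, network forms, and the update rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 4, 2


def shared_policy(params, obs):
    """Single parameter set shared by every controlled agent (parameter sharing)."""
    return np.tanh(obs @ params) + 0.05 * rng.standard_normal(ACT_DIM)


def discriminator_reward(disc_params, obs, act):
    """Surrogate reward: discriminator's confidence that (obs, act) looks human."""
    score = np.concatenate([obs, act]) @ disc_params
    return 1.0 / (1.0 + np.exp(-score))


def average_return(params, disc_params, num_agents, horizon=50):
    """Roll out `num_agents` agents under the shared policy in toy dynamics."""
    obs = rng.standard_normal((num_agents, OBS_DIM))
    total = 0.0
    for _ in range(horizon):
        acts = np.stack([shared_policy(params, o) for o in obs])
        total += sum(discriminator_reward(disc_params, o, a)
                     for o, a in zip(obs, acts))
        obs = 0.9 * obs + 0.1 * rng.standard_normal((num_agents, OBS_DIM))
    return total / (num_agents * horizon)


policy_params = rng.standard_normal((OBS_DIM, ACT_DIM))
disc_params = rng.standard_normal(OBS_DIM + ACT_DIM)

# Curriculum: progressively hand more agents to the shared policy, so it is
# trained on increasingly multi-agent observation distributions.
for num_agents in (1, 2, 5, 10):
    for _ in range(50):
        # Hill-climbing stand-in for the policy-gradient update GAIL uses;
        # a full implementation would also update the discriminator against
        # expert (human driver) trajectories at this point.
        candidate = policy_params + 0.01 * rng.standard_normal(policy_params.shape)
        if (average_return(candidate, disc_params, num_agents)
                > average_return(policy_params, disc_params, num_agents)):
            policy_params = candidate
```

In the actual method both the policy and the discriminator would be neural networks and the policy update a proper policy-gradient step; the sketch is meant only to show the shared-parameter rollout and the agent-count curriculum that the abstract refers to.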

Authors (6)
  1. Raunak P. Bhattacharyya (2 papers)
  2. Derek J. Phillips (3 papers)
  3. Blake Wulfe (14 papers)
  4. Jeremy Morton (9 papers)
  5. Alex Kuefler (8 papers)
  6. Mykel J. Kochenderfer (215 papers)
Citations (111)
