Driving Style Encoder: Situational Reward Adaptation for General-Purpose Planning in Automated Driving (1912.03509v2)

Published 7 Dec 2019 in cs.RO, cs.AI, and cs.LG

Abstract: General-purpose planning algorithms for automated driving combine mission, behavior, and local motion planning. Such planning algorithms map features of the environment and driving kinematics into complex reward functions. To achieve this, planning experts often rely on linear reward functions. The specification and tuning of these reward functions is a tedious process and requires significant experience. Moreover, a manually designed linear reward function does not generalize across different driving situations. In this work, we propose a deep learning approach based on inverse reinforcement learning that generates situation-dependent reward functions. Our neural network provides a mapping between features and actions of sampled driving policies of a model-predictive control-based planner and predicts reward functions for upcoming planning cycles. In our evaluation, we compare the driving style of reward functions predicted by our deep network against clustered and linear reward functions. Our proposed deep learning approach outperforms clustered linear reward functions and performs on par with linear reward functions that have a priori knowledge about the situation.
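The abstract describes a linear reward model, R(s) = w·φ(s), whose weight vector is re-predicted each planning cycle by a network conditioned on the features and actions of policies sampled by an MPC-based planner. The sketch below illustrates that idea in PyTorch; the layer sizes, pooling choice, feature names, and all dimensions are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of situation-dependent reward prediction.
# Architecture, dimensions, and names are illustrative assumptions,
# not the paper's exact design.
import torch
import torch.nn as nn

class DrivingStyleEncoder(nn.Module):
    """Maps features/actions of sampled planner policies to linear reward weights."""
    def __init__(self, feature_dim: int, action_dim: int, n_reward_features: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feature_dim + action_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
        )
        self.head = nn.Linear(64, n_reward_features)

    def forward(self, policy_features: torch.Tensor,
                policy_actions: torch.Tensor) -> torch.Tensor:
        # Concatenate per-policy features and actions, then mean-pool over
        # the sampled policies so the prediction does not depend on the
        # order in which the planner produced the samples.
        x = torch.cat([policy_features, policy_actions], dim=-1)  # (n_samples, f+a)
        pooled = self.encoder(x).mean(dim=0)                      # (64,)
        return self.head(pooled)                                  # reward weights w

def linear_reward(weights: torch.Tensor, state_features: torch.Tensor) -> torch.Tensor:
    """Situation-dependent linear reward: R(s) = w . phi(s)."""
    return state_features @ weights

# Example: 50 sampled policies, 12 environment features, 2 action dims,
# 8 reward features (all numbers made up for illustration).
net = DrivingStyleEncoder(feature_dim=12, action_dim=2, n_reward_features=8)
w = net(torch.randn(50, 12), torch.randn(50, 2))
r = linear_reward(w, torch.randn(8))  # scalar reward for one state
```

Predicting the weights of a linear reward, rather than the reward values directly, keeps the planner's existing linear reward structure intact while letting the driving style adapt from one planning cycle to the next.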

Authors (6)
  1. Sascha Rosbach (4 papers)
  2. Vinit James (2 papers)
  3. Simon Großjohann (4 papers)
  4. Silviu Homoceanu (6 papers)
  5. Xing Li (82 papers)
  6. Stefan Roth (97 papers)
Citations (9)
