
Optimizing Autonomous Driving for Safety: A Human-Centric Approach with LLM-Enhanced RLHF (2406.04481v1)

Published 6 Jun 2024 in cs.AI

Abstract: Reinforcement Learning from Human Feedback (RLHF) is widely used for LLMs, where traditional Reinforcement Learning (RL) often falls short. Current autonomous driving methods typically use either human feedback in machine learning (including RL) or LLMs, and most feedback guides the car agent's learning process (e.g., controlling the car). RLHF is usually applied in the fine-tuning step and requires direct human "preferences," which are not commonly used in optimizing autonomous driving models. In this research, we innovatively combine RLHF and LLMs to enhance autonomous driving safety. Training a model with human guidance from scratch is inefficient. Our framework starts with a pre-trained autonomous car agent model and implements multiple human-controlled agents, such as cars and pedestrians, to simulate real-life road environments. The autonomous car model is not directly controlled by humans. We integrate both physical and physiological feedback to fine-tune the model, optimizing this process using LLMs. This multi-agent interactive environment ensures safe, realistic interactions before real-world application. Finally, we will validate our model using data gathered from real-life testbeds located in New Jersey and New York City.
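The abstract describes blending two kinds of human feedback (physical and physiological) into the fine-tuning signal for a pre-trained driving policy. A minimal sketch of that idea is below; the function names, the [-1, 1] feedback normalization, and the blend weights are assumptions for illustration, not the authors' implementation:

```python
# Hypothetical sketch: fold "physical" feedback (e.g., a collision or
# near-miss penalty) and "physiological" feedback (e.g., a normalized
# passenger stress signal) into one scalar RLHF-style reward, then apply
# a toy fine-tuning update to a scalar policy score.

def combined_reward(physical_feedback: float,
                    physiological_feedback: float,
                    w_physical: float = 0.7,
                    w_physiological: float = 0.3) -> float:
    """Blend two feedback signals, each assumed normalized to [-1, 1]."""
    return w_physical * physical_feedback + w_physiological * physiological_feedback

def fine_tune_step(policy_score: float, reward: float,
                   learning_rate: float = 0.01) -> float:
    """Toy update: nudge the policy score in proportion to the blended reward.
    A real framework would backpropagate through the policy network instead."""
    return policy_score + learning_rate * reward
```

For example, a clean maneuver with a calm passenger (both signals +1.0) yields a positive blended reward, while a harsh brake that spikes the stress signal pulls the reward down even if no collision occurred.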

Authors (4)
  1. Yuan Sun (117 papers)
  2. Navid Salami Pargoo (3 papers)
  3. Peter J. Jin (5 papers)
  4. Jorge Ortiz (17 papers)
Citations (11)