Emergent Mind

Driving Everywhere with Large Language Model Policy Adaptation

Published Feb 8, 2024 in cs.RO, cs.AI, and cs.CL


Adapting driving behavior to new environments, customs, and laws is a long-standing problem in autonomous driving, precluding the widespread deployment of autonomous vehicles (AVs). In this paper, we present LLaDA, a simple yet powerful tool that enables human drivers and autonomous vehicles alike to drive everywhere by adapting their tasks and motion plans to traffic rules in new locations. LLaDA achieves this by leveraging the impressive zero-shot generalizability of large language models (LLMs) in interpreting the traffic rules in the local driver handbook. Through an extensive user study, we show that LLaDA's instructions are useful in disambiguating in-the-wild unexpected situations. We also demonstrate LLaDA's ability to adapt AV motion planning policies in real-world datasets; LLaDA outperforms baseline planning approaches on all our metrics. Please check our website for more details: https://boyiliee.github.io/llada.


  • Introduces Large Language Driving Assistant (LLaDA) that uses LLMs to adapt driving policies to local traffic regulations.

  • LLaDA works in a training-free mechanism leveraging zero-shot generalizability of LLMs for immediate policy adaptation in novel environments.

  • Empirical validation shows LLaDA exceeds baseline methods in adjusting autonomous vehicle (AV) policies, enhancing both decision-making and performance.

  • Identifies challenges for real-time application and the need for more precise AV-specific models, but underscores the framework's potential in advancing AI in autonomous driving.

Overview of LLaDA: Transforming Autonomous Driving with LLMs

The demand for autonomous vehicles (AVs) to operate seamlessly across various geographic locations presents a significant challenge due to divergent traffic laws and norms. Addressing this concern, the paper "Driving Everywhere with Large Language Model Policy Adaptation" introduces a pioneering framework named Large Language Driving Assistant (LLaDA). This system leverages the power of LLMs to interpret traffic rules and adapt driving policies for both human drivers and AVs, fostering a deeper integration of AI in autonomous driving.

Utilizing LLMs for Driving Policy Adaptation

One of the core innovations of LLaDA is its ability to adapt driving behavior by understanding local traffic regulations through LLMs, such as GPT-4V. The approach proceeds in three steps. First, a nominal executable policy is generated. When an unexpected situation arises, the Traffic Rule Extractor (TRE) component uses an LLM to extract the pertinent traffic rules from the local driver's handbook. Finally, the original plan is revised to comply with these rules, improving decision-making in novel driving scenarios.
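The three-step loop can be sketched in code. This is a minimal illustration, not the paper's implementation: the prompt wording, the `traffic_rule_extractor` and `adapt_plan` helpers, and the `llm` callable (standing in for any text-completion model such as GPT-4) are all assumptions made here for clarity, and the handbook excerpt is hypothetical.

```python
def traffic_rule_extractor(llm, scene_description, handbook):
    """Step 2: ask the LLM which handbook rules apply to the scene."""
    prompt = (
        "Given this driving scene: " + scene_description + "\n"
        "Select the relevant rules from the local driver handbook:\n"
        + "\n".join(handbook)
    )
    return llm(prompt)


def adapt_plan(llm, default_plan, relevant_rules):
    """Step 3: revise the nominal plan so it complies with local rules."""
    prompt = (
        "Original plan: " + default_plan + "\n"
        "Local rules: " + relevant_rules + "\n"
        "Rewrite the plan to comply with these rules."
    )
    return llm(prompt)


def llada(llm, scene_description, default_plan, handbook):
    """Full loop: nominal policy -> rule extraction -> plan adaptation."""
    rules = traffic_rule_extractor(llm, scene_description, handbook)
    return adapt_plan(llm, default_plan, rules)


# Usage with a stub model, just to show the data flow:
def stub_llm(prompt):
    return "Yield to traffic already in the roundabout, then proceed."


handbook = ["Rule 12: yield to vehicles inside a roundabout."]
plan = "Enter the roundabout at constant speed."
print(llada(stub_llm, "approaching a roundabout in the UK", plan, handbook))
```

The key design point this sketch captures is that the planner itself is untouched: the LLM sits around the existing policy, filtering the handbook down to the relevant rules and rewriting only the plan, which is what makes the approach training-free.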

Empirical Validation and Practical Applications

The efficacy of LLaDA was validated through user studies and real-world dataset analyses. LLaDA outperformed baseline methods on all reported metrics when adjusting AV motion-planning policies. Because it integrates into existing autonomous driving systems without additional training, it is practical to deploy and widely applicable.

Pioneering Contributions

  • Innovative Mechanism: LLaDA is a training-free mechanism that exploits the zero-shot generalizability of LLMs, enabling immediate adaptation of driving policies to unfamiliar environments.

  • Performance Enhancement: In the paper's evaluation, LLaDA surpasses previous state-of-the-art approaches at adapting driving policies for AVs and assisting human drivers.

  • LLMs for Robotic Reasoning and Autonomous Driving: The paper surveys related work on LLMs for robotic reasoning and for autonomous driving, tracing the progression toward applying these models to traffic-rule adaptation. This both contextualizes LLaDA's significance and advances the broader discussion of combining LLMs with autonomous vehicle technology.

Future Directions

Despite these results, LLaDA faces open challenges: running an LLM in the control loop is computationally expensive, which hinders real-time use in AV systems, and performance depends on the accuracy of scene descriptions, motivating AV-specific models that can generate more precise ones.


LLaDA marks a significant stride towards the realization of truly autonomous vehicles that can navigate diversely regulated geographical landscapes. By harnessing the interpretative and adaptive capabilities of LLMs, this framework sets a new precedent for the integration of advanced AI technologies in the domain of autonomous driving. As we progress, the continued evolution of LLaDA and similar systems will undoubtedly play a pivotal role in overcoming the current limitations and unlocking the full potential of autonomous vehicles worldwide.
