- The paper introduces CARLA as a flexible, open-source platform that democratizes autonomous driving research with rich urban assets.
- It details evaluations of modular, imitation, and reinforcement learning approaches under varied environmental conditions.
- Experiments reveal that modular pipelines and imitation learning generalize better than reinforcement learning in complex urban scenarios.
CARLA: An Open Urban Driving Simulator
Overview
The paper "CARLA: An Open Urban Driving Simulator" introduces CARLA, an open-source simulation platform developed for autonomous driving research. CARLA is intended to facilitate the development, training, and validation of autonomous urban driving systems by providing both open-source code and diverse digital assets such as urban layouts, buildings, and vehicles. The simulation platform also supports flexible specification of sensor suites and environmental conditions, making it a comprehensive tool for autonomous driving research.
Key Contributions
The main contributions of CARLA to the autonomous driving research community include:
- Open-source infrastructure: Provides an accessible platform that democratizes research by reducing the need for costly physical infrastructure.
- Rich digital assets: Offers an array of urban layouts, vehicle models, and dynamic environmental conditions to support realistic scenario testing.
- Flexible sensor configuration: Allows researchers to specify a variety of sensors, including RGB cameras and pseudo-sensors that provide ground-truth depth and semantic segmentation (a configuration sketch follows this list).
- Controlled evaluation: Facilitates controlled scenario-based evaluation of various autonomous driving strategies.
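To make the sensor-configuration point concrete, here is a hedged sketch of a rig with the three sensor types named above, again using the 0.9.x Python API. The mounting transform and image resolution are illustrative choices, not values from the paper.

```python
# Sketch of a multi-sensor rig mirroring the paper's sensor suite: an RGB
# camera plus pseudo-sensors for ground-truth depth and semantic
# segmentation. Blueprint names are from the 0.9.x API.
import carla

def attach_cameras(world, vehicle):
    bp_lib = world.get_blueprint_library()
    mount = carla.Transform(carla.Location(x=1.5, z=2.4))  # roughly roof height
    cameras = {}
    for name in ('sensor.camera.rgb',
                 'sensor.camera.depth',
                 'sensor.camera.semantic_segmentation'):
        bp = bp_lib.find(name)
        bp.set_attribute('image_size_x', '800')
        bp.set_attribute('image_size_y', '600')
        cameras[name] = world.spawn_actor(bp, mount, attach_to=vehicle)
    # Each sensor streams asynchronously; register a callback per sensor.
    cameras['sensor.camera.rgb'].listen(
        lambda image: image.save_to_disk('out/rgb_%06d.png' % image.frame))
    return cameras
```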
Methodology
The paper details the use of CARLA to evaluate three distinct approaches to autonomous driving:
- Classic Modular Pipeline (MP): Comprises distinct subsystems for perception, planning, and control. The perception module relies on semantic segmentation, local planning is carried out by a rule-based state machine, and control is handled by a PID controller (a generic PID sketch follows this list).
- Imitation Learning (IL): Trains a deep neural network end-to-end on a dataset of driving traces recorded by human drivers, conditioned on high-level commands analogous to turn signals (a branched-network sketch follows this list).
- Reinforcement Learning (RL): Trains a deep network with the Asynchronous Advantage Actor-Critic (A3C) algorithm to maximize a reward signal that incorporates factors such as speed, collisions, and lane adherence (a toy reward function follows this list).
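For the MP bullet above: the paper identifies the control stage as a PID controller but does not publish its gains, so the following is a generic longitudinal PID sketch with placeholder parameters.

```python
# Generic PID speed controller of the kind used by the modular pipeline's
# control stage. Gains and the target-speed interface are illustrative.
class PIDController:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: track 20 km/h; the output would be mapped to throttle/brake.
controller = PIDController(kp=0.5, ki=0.1, kd=0.05, dt=0.1)
throttle = controller.step(target=20.0, measured=15.0)
```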
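For the IL bullet: one common way to condition a driving network on high-level commands is a branched architecture with one output head per command. The sketch below (PyTorch, with placeholder layer sizes) illustrates that idea; it is not the paper's exact network.

```python
# Command-conditioned (branched) imitation policy: a shared image encoder
# feeds one output head per high-level command. Sizes are placeholders.
import torch
import torch.nn as nn

class BranchedILPolicy(nn.Module):
    NUM_COMMANDS = 4  # e.g. follow lane, go straight, turn left, turn right

    def __init__(self):
        super().__init__()
        # Toy image encoder; the paper's network is substantially larger.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # One head per command, each predicting (steer, throttle, brake).
        self.heads = nn.ModuleList(
            nn.Linear(64, 3) for _ in range(self.NUM_COMMANDS))

    def forward(self, image, command):
        # image: (B, 3, H, W); command: iterable of ints in [0, NUM_COMMANDS)
        features = self.encoder(image)
        return torch.stack([self.heads[int(c)](f)
                            for f, c in zip(features, command)])

# Usage: two frames, one "follow lane" command and one "turn left" command.
policy = BranchedILPolicy()
controls = policy(torch.randn(2, 3, 88, 200), [0, 2])
```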
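For the RL bullet: the reward combines positive terms for speed and progress with penalties for collisions and for leaving the lane or road. The function below is a toy version with illustrative weights, not the paper's exact formulation.

```python
# Toy shaped reward of the kind the A3C agent maximizes: positive for
# speed and forward progress, negative for collisions and lane/road
# departures. All weights here are illustrative.
def driving_reward(speed_kmh, distance_gained_m,
                   collision, offroad_fraction, opposite_lane_fraction):
    reward = 0.05 * speed_kmh + 1.0 * distance_gained_m
    if collision:
        reward -= 100.0                       # large penalty for any collision
    reward -= 2.0 * offroad_fraction          # fraction of the car off the road
    reward -= 2.0 * opposite_lane_fraction    # fraction in the opposite lane
    return reward
```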
Experiments and Results
The experiments are set in CARLA's two towns, each with a distinct urban layout, and span six weather conditions. Four tasks of increasing difficulty are evaluated (a sketch of the evaluation grid follows this list):
- Straight: drive straight ahead to the goal.
- One turn: reach a goal that requires a single turn.
- Navigation: reach a goal with no restriction on the route.
- Navigation with dynamic obstacles: the same as Navigation, but with pedestrians and other vehicles present.
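A sketch of how such a benchmark grid can be tallied is below. The weather names, episode count, and `run_episode` hook are hypothetical stand-ins, not the paper's protocol.

```python
# Tally per-condition success rates over the tasks x towns x weathers grid.
from itertools import product

TASKS = ['straight', 'one_turn', 'navigation', 'navigation_dynamic']
TOWNS = ['Town01', 'Town02']     # training town vs. held-out town
WEATHERS = ['ClearNoon', 'WetNoon', 'HardRainNoon',
            'ClearSunset', 'CloudyNoon', 'SoftRainSunset']  # illustrative

def run_episode(agent, task, town, weather):
    """Hypothetical hook: drive one episode, return True on success."""
    raise NotImplementedError  # wire up to the simulator and agent under test

def success_rate(agent, task, town, weather, episodes=25):
    wins = sum(run_episode(agent, task, town, weather)
               for _ in range(episodes))
    return wins / episodes

def evaluate(agent):
    # One success rate per cell of the benchmark grid.
    return {(task, town, weather): success_rate(agent, task, town, weather)
            for task, town, weather in product(TASKS, TOWNS, WEATHERS)}
```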
Results indicate that:
- Both MP and IL generalize better to new weather conditions than to a new town.
- IL often outperforms MP in new environments, though MP can be more robust in familiar settings.
- RL underperforms relative to both MP and IL, highlighting the brittleness and high data requirements of RL in complex urban driving.
Analysis of Infractions
An in-depth analysis of infractions such as driving in the opposite lane, veering onto sidewalks, and collisions (measured as average distance driven between infractions; see the sketch after this list) reveals:
- End-to-end approaches (IL in particular) are more prone to rare-event failures such as pedestrian collisions.
- MP shows lower infraction rates concerning static objects, suggesting better rule-based scene interpretation.
- RL's comparatively low rate of collisions with pedestrians likely stems from the large negative reward assigned to such incidents during training.
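The headline infraction metric is the average distance driven between infractions of a given type, where larger is better. A minimal computation:

```python
# Average distance between infractions of one type (e.g. kilometres per
# pedestrian collision); higher values indicate safer behavior.
def km_between_infractions(total_km_driven, infraction_count):
    if infraction_count == 0:
        return float('inf')  # no infractions observed over the whole run
    return total_km_driven / infraction_count

# Example: 86 km driven with 2 sidewalk incursions -> 43 km per infraction.
print(km_between_infractions(86.0, 2))
```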
Implications and Future Work
The research underscores the challenges of generalizing autonomous driving systems to new environments and conditions, emphasizing the need for extensive training data and robust learning algorithms. CARLA’s open-access nature allows for continuous improvement and expansion by the research community. Future developments could include more advanced non-player vehicle behavior, integration of additional sensor types, and incorporation of more varied urban environments.
Conclusion
CARLA presents a significant tool for the autonomous driving research community, offering a realistic and flexible simulation environment for developing and testing driving strategies. By providing a common platform with rich assets and capabilities, CARLA has the potential to expedite advancements in autonomous driving technologies. Researchers are encouraged to leverage CARLA for both the development of new methodologies and the rigorous benchmarking of existing approaches.