Autoware Autonomous Driving Stack

Updated 18 November 2025
  • Autoware Autonomous Driving Stack is a modular, open-source software platform that integrates perception, mapping, localization, planning, and control using a ROS 2 middleware architecture.
  • It features a layered system design with sensor drivers, efficient message flows, and advanced perception modules that support real-time operations in both simulation and real-world environments.
  • The platform emphasizes interoperability and extensibility through standardized interfaces and integration with simulation ecosystems, making it a reference for both academic and industrial research.

The Autoware Autonomous Driving Stack is a modular open-source software platform for autonomous vehicles, designed to deliver end-to-end perception, mapping, localization, planning, and control within a scalable distributed ROS 2 architecture. Autoware targets research, prototyping, and deployment of autonomous driving systems in real and simulated environments, emphasizing interoperability, reproducibility, and extensibility via standardized message interfaces and integration with simulation ecosystems (Jung et al., 31 Jan 2025). The stack, developed and maintained under the Autoware Foundation, is frequently adopted as a reference platform for both academic and industrial autonomous vehicle research.

1. System Architecture and ROS 2 Integration

Autoware is structured as a four-layer stack: sensors and infrastructure, ROS 2 middleware (using DDS), the core functional pipeline (localization, perception, planning, control), and the vehicle interface. Sensor drivers (LiDAR, GNSS/INS, radar, cameras) interface with the middleware via standardized topics such as sensor_msgs/PointCloud2, sensor_msgs/Image, and sensor_msgs/NavSatFix. Communication follows a publisher–subscriber pattern with messages defined in .msg files, supporting both functional (30 Hz) and high-frequency (100 Hz) operation with low CPU load; serialization latency scales linearly with payload size (Jung et al., 31 Jan 2025).
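
As a minimal illustration of this pattern, the sketch below subscribes to one of the standardized topics with rclpy; the node name and QoS depth are illustrative choices, not part of the reference stack.

```python
# Minimal rclpy sketch of a node subscribing to a standardized Autoware
# sensor topic. The topic name "/lidar_front_left/points" follows the
# convention cited in this article; node name and QoS depth are illustrative.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2


class PointCloudListener(Node):
    def __init__(self):
        super().__init__('pointcloud_listener')
        # sensor_msgs/PointCloud2 is the canonical LiDAR message type.
        self.subscription = self.create_subscription(
            PointCloud2,
            '/lidar_front_left/points',
            self.on_points,
            10)  # QoS history depth (illustrative)

    def on_points(self, msg: PointCloud2):
        # Payload size is what drives serialization latency, roughly linearly.
        self.get_logger().info(
            f'cloud: {msg.width * msg.height} points, {len(msg.data)} bytes')


def main():
    rclpy.init()
    node = PointCloudListener()
    rclpy.spin(node)
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```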

The system decomposes its modules into functional nodes arranged in a directed acyclic computational graph, with message flows coordinated through the ROS 2 DDS backend (Fast DDS by default). Nodes are instantiated and orchestrated via launch files, facilitating reproducible experiments and simulation. Key topics in the core functional pipeline include /lidar_front_left/points, /gnss/fix, /imu/data, /state_estimation/ekf/odometry, and /vector_map, among others (Sauerbeck et al., 2023).

2. Perception Modules: Sensor Processing and Object Detection

Autoware supports a broad range of perception modules for both model-based and learning-based detection. Native drivers ingest sensor output and publish canonical message types. LiDAR point-cloud ingestion is typically handled by velodyne_driver or analogous drivers, with downstream model-based filtering for ground segmentation (e.g., ray-ground filter, RANSAC) and object shape estimation (e.g., L-shape fitting) (Zang et al., 2022, Jung et al., 31 Jan 2025). Learning-based approaches such as PointPillars, YOLOX, and Frustum PointNet can be integrated for improved object detection and classification, with instance clustering typically performed via Euclidean distance-based algorithms.
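
As a concrete illustration of the model-based ground segmentation step, the following is a minimal numpy sketch of RANSAC plane fitting; it is an illustrative re-implementation, not Autoware's own code, and the distance threshold and iteration count are assumptions.

```python
# Minimal numpy sketch of RANSAC ground-plane segmentation, one of the
# model-based filters named above. Illustrative only; thresholds and
# iteration counts are assumed, not Autoware defaults.
import numpy as np


def ransac_ground(points, n_iters=100, dist_thresh=0.2, rng=None):
    """Return a boolean mask marking likely ground points in an (N, 3) cloud."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Fit a candidate plane through 3 random points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        # Points within dist_thresh of the plane count as inliers.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers


# Usage: ground = ransac_ground(cloud_xyz); obstacles = cloud_xyz[~ground]
```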

In high-performance applications, custom C++ nodes can circumvent default clustering for real-time operation, as shown in racing configurations where lane-occupancy vectors are computed per-scan, reducing per-frame latency from ≈700 ms to ≈20 ms (Zang et al., 2022). The perception output feeds into downstream planning modules, either as clustered object arrays or as track-annotated trajectories for further processing.
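
The exact lane-occupancy computation of (Zang et al., 2022) is not reproduced here; the following hypothetical sketch conveys the idea of a cheap per-scan occupancy check, with the lane centerlines and lateral threshold as assumed inputs.

```python
# Hypothetical sketch of a per-scan lane-occupancy check in the spirit of the
# racing configuration above; not the actual algorithm from (Zang et al., 2022).
import numpy as np


def lane_occupancy(obstacles_xy, lane_centerlines, lateral_thresh=1.5):
    """Return one bool per lane: True if any obstacle lies on that lane."""
    occupied = []
    for centerline in lane_centerlines:  # each an (N, 2) polyline
        # Distance from each obstacle to the nearest centerline vertex
        # (a coarse stand-in for true point-to-polyline distance).
        d = np.linalg.norm(
            obstacles_xy[:, None, :] - centerline[None, :, :], axis=-1)
        occupied.append(bool((d.min(axis=1) < lateral_thresh).any()))
    return occupied
```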

3. Mapping, Localization, and Sensor Fusion

Autoware’s mapping and localization pipeline supports both offline and online operation. For mapping, multi-LiDAR systems leverage a per-sensor approach: raw point clouds from four LiDAR sensors are deskewed, voxel-filtered, and individually extrinsically calibrated to the base_link frame ($T_i \in SE(3)$). Each preprocessed scan is then registered against an offline map using the KISS-ICP algorithm, optimizing the pose transform $X \in SE(3)$ by minimizing point-to-point error:

$$J(X) = \sum_k \| X p_k - m_k \|^2,$$

with Gauss-Newton iterations applied to the 6D twist vector and initial estimates provided by the external EKF (Sauerbeck et al., 2023). The fusion of four independent ICP pose streams with GNSS, IMU, wheel odometry, and vehicle dynamics is handled via EKF, producing high-frequency ego-state estimates.
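
To make the optimization step concrete, the following didactic sketch performs one Gauss-Newton update of the point-to-point objective J(X) over a 6D twist, given fixed correspondences; it is not KISS-ICP itself, and the first-order rotation update is a small-angle approximation.

```python
# Illustrative Gauss-Newton step for the point-to-point objective J(X) above,
# parameterized by a 6D twist [rho; theta] (translation, rotation). Didactic
# sketch only; correspondences (p_k, m_k) are assumed to be given.
import numpy as np


def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])


def gauss_newton_step(R, t, P, M):
    """One GN update of pose (R, t) minimizing sum_k ||R p_k + t - m_k||^2."""
    H = np.zeros((6, 6))
    g = np.zeros(6)
    for p, m in zip(P, M):
        r = R @ p + t - m                         # residual
        J = np.hstack([np.eye(3), -skew(R @ p)])  # d r / d [rho, theta]
        H += J.T @ J
        g += J.T @ r
    dx = np.linalg.solve(H, -g)                   # 6D twist update
    rho, theta = dx[:3], dx[3:]
    # First-order retraction of the rotation (small-angle approximation;
    # a production implementation would use a proper SO(3) exponential map).
    R_new = (np.eye(3) + skew(theta)) @ R
    t_new = t + rho
    return R_new, t_new
```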

Offline mapping includes KISS-ICP mapping, interactive SLAM post-processing (loop closure), and semantic map generation. Semantic maps involve intensity-based lane marking extraction, curb detection, and camera–LiDAR projection, finalized via manual editing in Vector Map Builder to produce Lanelet2 or VectorMap outputs. Georeferencing nodes align generated maps to UTM/GNSS coordinates.

Standard launch files such as multi_lidar_localization_mapping.launch.py are provided to instantiate the complex node graph required for mapping and localization. The resulting nav_msgs/Odometry output, published at ~50 Hz, integrates directly with Autoware planners and controllers.
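
A hypothetical launch sketch in this style is shown below; the package and executable names are placeholders rather than the actual Autoware node names.

```python
# Hypothetical ROS 2 launch sketch in the style of
# multi_lidar_localization_mapping.launch.py. Package and executable names
# are placeholders, not the real Autoware identifiers.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # One preprocessing node per LiDAR, each remapped to its own topic.
        Node(package='lidar_preprocessing', executable='deskew_voxel_node',
             name='preprocess_front_left',
             remappings=[('points_in', '/lidar_front_left/points')]),
        # ICP registration of the preprocessed scan against the offline map.
        Node(package='kiss_icp_localization', executable='icp_node',
             name='icp_front_left'),
        # EKF fusing ICP, GNSS, IMU, and wheel odometry at ~50 Hz.
        Node(package='state_estimation', executable='ekf_node',
             name='ekf', parameters=[{'output_rate_hz': 50.0}]),
    ])
```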

4. Planning and Control

Autoware supports both rule-based and optimization-based planning paradigms. Global planners typically utilize A* search on topological waypoint graphs, while local planners implement Hybrid-A*, lattice, or sampling-based algorithms, optimizing a cost function over trajectories subject to dynamic constraints (Jung et al., 31 Jan 2025). Trajectories can be generated via quintic polynomials constrained by curvature, velocity, and acceleration bounds.
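
The quintic boundary-value construction can be sketched directly: solve a 6x6 linear system for the coefficients matching both endpoint states, then reject trajectories whose sampled velocity or acceleration exceeds the bounds. The bounds below are illustrative values, not Autoware defaults.

```python
# Minimal sketch of 1D quintic-polynomial trajectory generation: fit x(t)
# to position/velocity/acceleration boundary conditions, then check the
# sampled profile against assumed kinematic bounds.
import numpy as np


def quintic(x0, v0, a0, xT, vT, aT, T):
    """Coefficients c of x(t) = sum_i c_i t^i meeting both endpoint states."""
    A = np.array([
        [1, 0, 0,    0,      0,       0],
        [0, 1, 0,    0,      0,       0],
        [0, 0, 2,    0,      0,       0],
        [1, T, T**2, T**3,   T**4,    T**5],
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],
    ])
    return np.linalg.solve(A, [x0, v0, a0, xT, vT, aT])


def feasible(c, T, v_max=15.0, a_max=3.0, n=100):  # bounds are illustrative
    t = np.linspace(0, T, n)
    dc = np.polyder(np.poly1d(c[::-1]))   # velocity polynomial
    ddc = np.polyder(dc)                  # acceleration polynomial
    return (np.abs(dc(t)) <= v_max).all() and (np.abs(ddc(t)) <= a_max).all()
```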

For dynamic environments (e.g., racing), custom behavior nodes perform lane-switching and candidate raceline selection based on filtered perception outputs, with ACADO-based Model Predictive Control (MPC) modules solving quadratic programs over kinematic bicycle models:

$$
\begin{aligned}
\min_{u_k} \quad & \sum_{k=0}^{N-1} (x_k - x_k^{\mathrm{ref}})^T Q\,(x_k - x_k^{\mathrm{ref}}) + u_k^T R\,u_k \\
\text{s.t.} \quad & x_{k+1} = x_k + v_k \cos(\psi_k + \beta_k)\,\Delta t \\
& y_{k+1} = y_k + v_k \sin(\psi_k + \beta_k)\,\Delta t \\
& \psi_{k+1} = \psi_k + \frac{v_k}{L_R}\sin(\beta_k)\,\Delta t \\
& v_{k+1} = v_k + a_k\,\Delta t,
\end{aligned}
$$

where input and state constraints are enforced, with Q/R weights tunable to favor precision or speed (Zang et al., 2022). Output control commands (throttle, brake, steering) are published via standardized ROS topics.
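
The prediction model inside this MPC is the discrete kinematic bicycle update written out in the constraints above; a minimal sketch follows, using the standard slip-angle relation and assumed wheelbase values.

```python
# Sketch of the discrete kinematic bicycle update used as the MPC prediction
# model above. The slip-angle relation beta = atan(L_R/(L_F+L_R) * tan(delta))
# is the standard form; the wheelbase values are illustrative assumptions.
import numpy as np

L_F, L_R = 1.2, 1.4  # front/rear axle-to-CoG distances (assumed)


def bicycle_step(state, u, dt):
    """state = (x, y, psi, v); u = (a, delta) = (acceleration, steering angle)."""
    x, y, psi, v = state
    a, delta = u
    beta = np.arctan(L_R / (L_F + L_R) * np.tan(delta))  # slip angle
    x += v * np.cos(psi + beta) * dt
    y += v * np.sin(psi + beta) * dt
    psi += (v / L_R) * np.sin(beta) * dt
    v += a * dt
    return np.array([x, y, psi, v])


# The MPC rolls this model forward over the horizon N when evaluating the
# quadratic cost for each candidate control sequence.
```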

5. Simulation Ecosystem and System Validation

Autoware is tightly coupled with high-fidelity simulators such as LGSVL and CARLA, supported through native bridges that map ROS 2 topics, coordinate frames, and actuator interfaces. The LGSVL Simulator, built on Unity3D with PhysX and HDRP, supports physics-based sensor models (LiDAR, camera, radar, GPS, IMU), digital twin environments, and full-stack SIL/HIL workflows (Rong et al., 2020). Vehicles are configured via JSON; sensors can be added or modified at runtime; and environments can be imported in Lanelet2, OpenDRIVE, or VectorMap formats.

The CARLA-Autoware-Bridge provides similar closed-loop support, with automatic coordinate and timestamp transformations, real-time actuation via remapped Ackermann commands, and monitoring of latency and CPU load (< 15 ms callback latency for camera/LiDAR pipelines, average frame rates above 35 FPS) (Kaljavesi et al., 17 Feb 2024). Simulation scenarios validate perception, localization, planning, and control modules in a reproducible fashion, supporting both component and system-level regression experiments.
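
The following hypothetical sketch shows the kind of command remapping such a bridge performs, translating an Ackermann-style command into CARLA's normalized vehicle control; the gains and limits are assumptions, and this is not the CARLA-Autoware-Bridge's actual code.

```python
# Hypothetical sketch of Ackermann-to-CARLA command remapping, illustrating
# the bridge's actuation path. Gains and limits are assumed values; this is
# not the CARLA-Autoware-Bridge implementation.
import carla  # CARLA Python API


def ackermann_to_carla(speed_cmd, steer_cmd, current_speed,
                       max_steer_rad=0.6, k_speed=0.5):
    """Map target speed [m/s] and steering angle [rad] to carla.VehicleControl."""
    control = carla.VehicleControl()
    # CARLA expects steering normalized to [-1, 1].
    control.steer = max(-1.0, min(1.0, steer_cmd / max_steer_rad))
    # Simple proportional speed tracking split into throttle vs. brake.
    speed_err = speed_cmd - current_speed
    if speed_err >= 0:
        control.throttle = min(1.0, k_speed * speed_err)
        control.brake = 0.0
    else:
        control.throttle = 0.0
        control.brake = min(1.0, -k_speed * speed_err)
    return control
```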

6. Specialized Pipelines and Research Extensions

The modularity of Autoware facilitates the integration and evaluation of advanced modules, in both domain-specific and perception research settings. The Autoware Mini fork enables benchmarking of state-of-the-art pedestrian motion prediction networks (CVM, PECNet, SGNet, GATraj, MUSE-VAE) within a ROS-based pipeline: candidate pedestrian trajectories are published in real time and evaluated with both traditional (minADE, minFDE) and dynamic (DynADE, DynFDE) error metrics on real-world datasets (Zabolotnii et al., 22 Oct 2024). Best-practice guidelines recommend wrapping each predictor as a standalone ROS node, making inference times and accuracy benchmarks directly comparable under standardized conditions.
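
The minADE/minFDE metrics named above have a standard formulation, sketched below for K candidate trajectories against a single ground-truth track; the evaluation code of (Zabolotnii et al., 22 Oct 2024) may differ in detail.

```python
# Standard-formulation sketch of minADE/minFDE over K predicted trajectories;
# the referenced paper's evaluation pipeline may differ in detail.
import numpy as np


def min_ade_fde(preds, gt):
    """preds: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth."""
    # Per-candidate displacement error at every timestep.
    err = np.linalg.norm(preds - gt[None], axis=-1)  # shape (K, T)
    min_ade = err.mean(axis=1).min()  # best average displacement error
    min_fde = err[:, -1].min()        # best final displacement error
    return min_ade, min_fde
```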

Racing, competitive, and multi-agent environments have been addressed via custom extensions for high-speed lane selection, global raceline optimization, and real-time control, achieving performance validated in simulation and suited to real-world deployment (e.g., sub-100 s simulated lap times using modified lane logic and tuned MPC controllers) (Zang et al., 2022).

7. Performance Benchmarks and Limitations

Measured middleware performance in Autoware (Fast DDS) demonstrates end-to-end latencies of 153 µs (30 Hz functional) to 38,075 µs (10 MB payload), with CPU utilization <10% and zero message loss up to 100 Hz (Jung et al., 31 Jan 2025). In sensor fusion with multi-LiDAR mapping, trajectory RMSE (APE) improved from 12.7 m raw (KISS-ICP only) to 3.6 m after interactive SLAM post-processing over a 3 km campus route (Sauerbeck et al., 2023). Real-time localization is performed at 10–15 Hz per-LiDAR and fused at 50 Hz by the EKF, with the entire pipeline running on a standard desktop-class CPU.

Certain implementation details, such as full node interface specifications, explicit Jacobians, and fine-grained latency breakdowns, are not always included in published descriptions and must be supplied by practitioners when reproducing or extending the reference stack.


References:

  • "Multi-LiDAR Localization and Mapping Pipeline for Urban Autonomous Driving" (Sauerbeck et al., 2023)
  • "Winning the 3rd Japan Automotive AI Challenge -- Autonomous Racing with the Autoware.Auto Open Source Software Stack" (Zang et al., 2022)
  • "Pedestrian motion prediction evaluation for urban autonomous driving" (Zabolotnii et al., 22 Oct 2024)
  • "Open-Source Autonomous Driving Software Platforms: Comparison of Autoware and Apollo" (Jung et al., 31 Jan 2025)
  • "LGSVL Simulator: A High Fidelity Simulator for Autonomous Driving" (Rong et al., 2020)
  • "CARLA-Autoware-Bridge: Facilitating Autonomous Driving Research with a Unified Framework for Simulation and Module Development" (Kaljavesi et al., 17 Feb 2024)