D-AWSIM: Driving & Drone Simulation
- D-AWSIM denotes two complementary simulation frameworks: a distributed simulator for autonomous driving and an immersive 3D simulator for drone operations, each built on a client-server architecture.
- Both leverage Unity for immersive visualization and integrate external algorithmic components, enabling dynamic map generation and real-time, event-driven updates.
- The driving framework scales to 280 simultaneously simulated vehicles and multiple roadside LiDARs across three machines, while the drone framework provides flexible routing and service orchestration.
D-AWSIM refers to two distinct high-fidelity simulation frameworks targeting advanced research in autonomous urban mobility domains: (1) a distributed, scalable simulator for autonomous driving and dynamic map generation; and (2) a 3D immersive simulator for Drone-as-a-Service (DaaS) operations. Both frameworks leverage Unity for 3D visualization and provide tight integration with external algorithmic components, but differ fundamentally in their system architecture, performance focus, and application scope (Ito et al., 12 Nov 2025, Lin et al., 2023).
1. System Architectures
1.1 Distributed Autonomous Driving Simulator
D-AWSIM, as proposed in "D-AWSIM: Distributed Autonomous Driving Simulator for Dynamic Map Generation Framework," advances large-scale simulation by distributing vehicle, sensor, and NPC agent computation across multiple machines. The system builds on Unity’s AWSIM (Advanced World Simulator for Intelligent Mobility) with multiplayer support via Unity’s Netcode for GameObjects (NGO) library and transport over UDP. Its architecture comprises a single "host" (also acting as a client) and multiple "clients," each running independent AWSIM instances responsible for a subset of world objects—vehicles, pedestrians, or sensor actors.
Object states are synchronized via a periodic exchange:
- Each client serializes its Local List (object positions, IDs, state) using RPC and transmits it to the host.
- The host aggregates these into a Global List, which is reliably broadcast back.
- Clients update their local simulation scene according to the received Update List.
- Sensor and object data are asynchronously streamed to an external Dynamic Map (DM) server over TCP with JSON-encoded payloads for further data fusion and analysis.
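The wire format is described only as JSON over TCP; the Python sketch below shows one plausible shape of the DM-server side of this exchange, merging per-client frames into a global, time-indexed object table. The field names, newline framing, and port are assumptions for illustration, not D-AWSIM's actual schema.

```python
# Minimal stand-in for the external DM server endpoint: accept newline-delimited
# JSON frames over TCP and merge them into a global object table keyed by ID.
# Field names ("objects", "id", "stamp") and the framing are assumptions.
import json
import socket
import threading

HOST, PORT = "0.0.0.0", 9000    # hypothetical DM server address
global_objects = {}              # object id -> latest observed state
lock = threading.Lock()

def handle_client(conn: socket.socket) -> None:
    buf = b""
    with conn:
        while chunk := conn.recv(4096):
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                frame = json.loads(line)            # one frame = list of object states
                with lock:
                    for obj in frame["objects"]:
                        # keep only the most recent observation per object id
                        prev = global_objects.get(obj["id"])
                        if prev is None or obj["stamp"] > prev["stamp"]:
                            global_objects[obj["id"]] = obj

def serve() -> None:
    with socket.create_server((HOST, PORT)) as srv:
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()
```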
Workload allocation is statically partitioned: the host typically handles the ego-vehicle and NPCs, Client 1 performs ego-camera processing, and Client 2 processes multiple high-throughput roadside LiDARs. This partitioning empirically balances compute and GPU utilization at the tested scale.
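As a rough illustration, this static partitioning can be thought of as a fixed role map consulted by each AWSIM instance; the entity names below are illustrative only, not the paper's exact configuration.

```python
# Illustrative static workload partition: each node simulates/processes only the
# entities assigned to it. Names are hypothetical placeholders.
PARTITION = {
    "host":     ["ego_vehicle", "npc_vehicles", "pedestrians"],
    "client_1": ["ego_camera"],
    "client_2": [f"roadside_lidar_{i}" for i in range(1, 9)],
}

def owned_by(node: str, entity: str) -> bool:
    """True if this AWSIM instance is responsible for the given entity."""
    return entity in PARTITION[node]
```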
1.2 Immersive 3D Simulator for Drone-as-a-Service
The second D-AWSIM framework, detailed in "Immersive 3D Simulator for Drone-as-a-Service," couples a Unity-based 3D front-end (the Unity client) with a lightweight Python server in an event-driven, modular architecture. Key architectural elements:
- Front-end renders a full 3D city and "skyway" network (nodes: building rooftops serving as drone pads; edges: aerial corridors), provides user interfaces in Edit and Runtime modes, and publishes real-time flight and telemetry data.
- The Python back-end executes researcher-defined algorithms for drone routing, service composition, or swarm allocation, in response to node-arrival events generated during simulation runtime.
- Communication between Unity and the Python process is asynchronous, non-deterministic, and based on discrete event notifications, enabling dynamic adaptation (e.g., segment failure, scenario changes) without event collisions or rollbacks.
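A minimal sketch of the back-end side of this loop is shown below, assuming newline-delimited JSON events over a local TCP socket; the actual transport and message schema used by D-AWSIM may differ.

```python
# Skeleton of the Python back-end loop: wait for node-arrival events from the
# Unity client and reply with a waypoint directive. The JSON schema and the
# TCP transport on port 5555 are assumptions for illustration.
import json
import socket

def decide_next_node(event: dict) -> str:
    """Researcher-defined policy; here, naively follow the remaining plan."""
    plan = event.get("remaining_plan", [])
    return plan[0] if plan else event["current_node"]   # hover if plan exhausted

def run(host: str = "127.0.0.1", port: int = 5555) -> None:
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as stream:
            for line in stream:                          # one JSON event per line
                event = json.loads(line)
                if event.get("type") == "node_arrival":
                    directive = {"drone_id": event["drone_id"],
                                 "next_node": decide_next_node(event)}
                    stream.write(json.dumps(directive) + "\n")
                    stream.flush()

if __name__ == "__main__":
    run()
```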
2. Data Management and Dynamic Map
In autonomous driving, D-AWSIM introduces a Dynamic Map (DM) generation pipeline to support V2V/V2I cooperative perception:
- Each simulator node runs a DM-Manager polling object states at the end of each simulation frame, constructing per-frame sets of observed object states.
- These sets are streamed via TCP to a central DM server implemented as a Data Stream Management System (DSMS), which merges time-indexed streams and exposes SQL-like query interfaces for dynamic and semi-static map layers.
- Coordinate system alignment exploits a rigid transformation, $p_{\text{global}} = R\,p_{\text{local}} + t$, with minimum-variance fusion integrating multi-sensor detections: $\hat{x} = \left(\sum_i \sigma_i^{-2}\right)^{-1} \sum_i \sigma_i^{-2}\, x_i$.
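Both operations reduce to a few lines of linear algebra. The sketch below applies a rigid transform to sensor-local detections and fuses them by inverse-variance weighting; the numeric values are purely illustrative.

```python
# Map sensor-local detections into the global frame with a rigid transform,
# then fuse multi-sensor estimates by inverse-variance (minimum-variance) weighting.
import numpy as np

def to_global(p_local: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """p_global = R @ p_local + t (rigid transformation of a point)."""
    return R @ p_local + t

def fuse(estimates: list[np.ndarray], variances: list[float]) -> np.ndarray:
    """Minimum-variance fusion: weight each detection by 1 / sigma^2."""
    w = np.array([1.0 / v for v in variances])
    stacked = np.stack(estimates)                       # shape (n_sensors, dim)
    return (w[:, None] * stacked).sum(axis=0) / w.sum()

# Example: two roadside LiDARs observe the same vehicle with different noise.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([100.0, 20.0])
det_a = to_global(np.array([4.0, 1.0]), R, t)           # variance 0.2 m^2
det_b = to_global(np.array([4.1, 0.9]), R, t)           # variance 0.8 m^2
fused = fuse([det_a, det_b], [0.2, 0.8])                # leans toward det_a
```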
Event filtering operates via temporal observation thresholds. Map layers distinguish semi-static (lane, signage), dynamic (vehicles, pedestrians), and near-real-time (hazards/events) information, exposing CAM/DENM-equivalent services.
In the drone simulator, data collection is structured per drone and per time step. Output CSVs record timestamps, IDs, node/segment, battery (Joules or %), per-leg energy usage, hover/charge time, and payload, supporting detailed post-hoc analysis, algorithm benchmarking, and pipeline integration for downstream tasks (e.g., ML model training or comparative studies).
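A post-hoc analysis over such logs might look like the sketch below; the column names are assumptions and should be matched to the simulator's actual export schema.

```python
# Sum per-leg energy usage for each drone across one simulation run and print a
# compact ranking. Column names ("drone_id", "leg_energy_j") are hypothetical.
import csv
from collections import defaultdict

def energy_per_drone(path: str) -> dict[str, float]:
    totals: dict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["drone_id"]] += float(row["leg_energy_j"])
    return dict(totals)

if __name__ == "__main__":
    ranking = sorted(energy_per_drone("run_001.csv").items(),
                     key=lambda kv: kv[1], reverse=True)
    for drone, joules in ranking:
        print(f"{drone}: {joules:.1f} J")
```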
3. Performance Evaluation
3.1 Distributed Driving Platform
D-AWSIM (autonomous driving) demonstrates increased throughput and scalability over single-machine AWSIM baselines. Using three high-end servers on a gigabit network:
- End-to-end latency and transmission interval are both maintained under 100 ms with up to 280 simultaneously simulated vehicles, compared to 120 vehicles at equivalent latency on a single machine.
- Beyond this point, runs at 320 vehicles occasionally exceed the latency budget.
- LiDAR processing is distributable across up to 10 sensors per node: eight Velodyne VLP-16 units on one client sustain ≥30 Hz under load, with rates declining from 60 Hz at 2 sensors to 30 Hz at 10.
- The major bottleneck is ROS 2 message conversion/transport to Autoware, which consumes approximately 90% of the total end-to-end latency.
Table: Summary of Distributed Driving Simulation Metrics
| Metric | Single Machine | D-AWSIM (3 nodes) |
|---|---|---|
| Vehicle throughput (< 100 ms latency) | ≤120 | 280 |
| Max LiDARs @ ≥30 Hz | N/A | 8 |
| Dominant latency source | N/A | ROS 2 transfer |
3.2 Drone Simulator
Large-scale performance benchmarks are not included in the drone paper, but preliminary experiments validate real-time simulation with 100+ nodes and ten drones at 1–5× wall-clock acceleration on a midrange laptop. Non-deterministic event-driven updates and external Python algorithm hooks allow for high experimental flexibility.
4. Operational Modes and Algorithm Integration
Both D-AWSIM platforms provide explicit support for two main operational modes:
- Edit Mode: Graphical or programmatic design of environments: in the drone simulator, urban skyways (network topology editing, waypoints, segment-level metadata, global parameter settings, JSON import/export); in the driving simulator, scenario and agent configuration. A small export sketch follows this list.
- Runtime Mode: Real-time simulation execution with tight client-server integration permitting dynamic scenario adaptation, event injection (segment failures, new deliveries), algorithm-in-the-loop evaluation, and real-time metric labeling.
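For the Edit-mode export path mentioned above, a programmatic scenario definition can be as simple as serializing the skyway graph to JSON; the schema below is hypothetical and may differ from D-AWSIM's actual import/export format.

```python
# Build a small skyway network (rooftop nodes, aerial-corridor segments) and
# export it as JSON for Edit-mode import. The schema is illustrative only.
import json

skyway = {
    "nodes": [
        {"id": "pad_A", "position": [120.0, 45.0, 60.0], "is_recharge": True},
        {"id": "pad_B", "position": [310.0, 45.0, 75.0], "is_recharge": False},
    ],
    "segments": [
        {"from": "pad_A", "to": "pad_B", "max_speed_mps": 12.0},
    ],
    "globals": {"wind_mps": 3.5, "time_scale": 2.0},
}

with open("skyway_scenario.json", "w") as f:
    json.dump(skyway, f, indent=2)
# The same file can be re-imported in Edit mode to restore the topology.
```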
Algorithmic logic is externalized: the back-end (Python server for drones; AWSIM+DM for driving) runs user-defined or plug-in algorithms for agent control, composition, and adaptation. In the drone system, any Python algorithm subscribing to node-arrival events and returning waypoint directives is immediately compatible. In the driving system, Unity clients and the host can be assigned custom control loops and sensor processing logic.
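For the drone case, a researcher-supplied policy only needs to map a node-arrival event to a waypoint directive. The sketch below uses a plain Dijkstra recomputation over the current skyway graph; the event fields and adjacency representation are assumptions, not D-AWSIM's defined interface.

```python
# Externalized routing policy: on each node-arrival event, recompute a shortest
# path over the current skyway graph (some segments may have failed) and return
# the next waypoint for that drone.
import heapq

def shortest_path(graph: dict[str, dict[str, float]], src: str, dst: str) -> list[str]:
    """Plain Dijkstra over an adjacency dict {node: {neighbor: cost}}."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                                   # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:                                 # assumes dst is reachable
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

def on_node_arrival(event: dict, graph: dict) -> dict:
    """Return a waypoint directive for the drone that just reached a node."""
    route = shortest_path(graph, event["current_node"], event["destination"])
    return {"drone_id": event["drone_id"],
            "next_node": route[1] if len(route) > 1 else event["current_node"]}
```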
5. Use Cases and Extensibility
The primary use cases span cooperative perception, sensor fusion, algorithmic benchmarking, and protocol development in both driving and drone domains.
- Autonomous Driving: Research into cooperative perception (up to 280 vehicles with 8 roadside LiDARs), evaluation of V2X protocols (CAM/DENM), edge-computing studies with DSMS-backed dynamic maps, and stress-testing hazard/event notification systems. Fully integrated with Autoware via a JSON-to-ROS 2 bridge (a minimal bridge sketch follows this list), enabling real-time planner validation in complex scenarios (e.g., the blind intersection hazard-avoidance demo).
- DaaS and Aerial Mobility: Single- and swarm-based delivery strategies, dynamic rerouting on graph failures, energy-optimal routing, and extensive parameter sweeps for energy and operational benchmarking. Data export formats (CSV for logs, JSON for scenarios) facilitate rapid prototyping, ML integration, and multi-agent comparative studies.
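The Autoware coupling in the driving use case hinges on the JSON-to-ROS 2 bridge referenced above. Below is a minimal, untyped sketch of one direction of such a bridge, republishing DM JSON frames on a ROS 2 topic; the topic name, port, and payload shape are assumptions, and a production bridge would target Autoware's typed perception messages instead.

```python
# Read newline-delimited JSON detections from the DM stream over TCP and
# republish each frame as a std_msgs/String on a ROS 2 topic.
import json
import socket
import threading

import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class DmBridge(Node):
    def __init__(self) -> None:
        super().__init__("dm_json_bridge")
        self.pub = self.create_publisher(String, "dm/objects_json", 10)
        sock = socket.create_connection(("127.0.0.1", 9000))   # hypothetical DM port
        threading.Thread(target=self._pump, args=(sock,), daemon=True).start()

    def _pump(self, sock: socket.socket) -> None:
        for line in sock.makefile("r"):            # one JSON frame per line
            msg = String()
            msg.data = json.dumps(json.loads(line))   # validate, then republish
            self.pub.publish(msg)

def main() -> None:
    rclpy.init()
    rclpy.spin(DmBridge())

if __name__ == "__main__":
    main()
```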
Extensibility is a core design principle for both simulators. Researchers may integrate new algorithms by connecting custom code to existing event or data interfaces, or by extending DM/DSMS schemas and query logic. The host/client architecture in driving is conducive to future dynamic load balancing, containerized scaling, and more fine-grained task migration. The drone system’s split Unity/Python architecture decouples visualization from control, enabling rapid evolution of algorithmic modules and scenario logic.
6. Identified Limitations and Future Directions
Prominent limitations in the current D-AWSIM implementations include:
- Autonomous Driving D-AWSIM: end-to-end latency dominated by ROS 2 messaging; no semi-static map layer or native CAM/DENM output from the DSMS at present; static workload partitioning; limited support for temporary road asset layers (e.g., work zones).
- Drone D-AWSIM: no reported large-scale distributed benchmarks; scope is limited to urban skyways and delivery, not full airspace or air-traffic-control (ATC) simulation.
Future research avenues include dynamic node/sensor migration for balanced computing load, native generation of V2X messages from DM queries, more elastic deployment (container/Kubernetes orchestration), support for additional map data types, and direct coupling to advanced ML or RL training pipelines. These enhancements are likely to further solidify D-AWSIM as a reference simulator in both cooperative autonomous driving and aerial urban mobility research contexts (Ito et al., 12 Nov 2025, Lin et al., 2023).