Drive Right: Promoting Autonomous Vehicle Education Through an Integrated Simulation Platform (2302.08613v1)

Published 16 Feb 2023 in cs.HC

Abstract: Autonomous vehicles (AVs) are being rapidly introduced into our lives. However, public misunderstanding and mistrust have become prominent issues hindering the acceptance of these driverless technologies. The primary objective of this study is to evaluate the effectiveness of a driving simulator to help the public gain an understanding of AVs and build trust in them. To achieve this aim, we built an integrated simulation platform, designed various driving scenarios, and recruited 28 participants for the experiment. The study results indicate that a driving simulator effectively decreases the participants' perceived risk of AVs and increases perceived usefulness. The proposed methodologies and findings of this study can be further explored by auto manufacturers and policymakers to provide user-friendly AV design.

Citations (2)

Summary

  • The paper demonstrates that an integrated simulation platform can effectively reduce public risk perceptions and enhance trust in autonomous vehicles.
  • The paper employs industry-standard AV software stacks and a high-fidelity simulator to compare manual driving struggles with smooth AV demonstrations across challenging scenarios.
  • The paper finds that transparent displays of sensor data and decision-making processes improve user understanding, though they do not significantly boost intentions to adopt AVs.

This paper, "Drive Right: Promoting Autonomous Vehicle Education Through an Integrated Simulation Platform" (2302.08613), addresses the challenge of public mistrust and misunderstanding regarding autonomous vehicles (AVs). The authors propose and evaluate the effectiveness of an integrated driving simulation platform as a tool for educating the public and building trust.

The core problem identified is the gap between rapid advances in AV technology and the public's readiness to adopt it, a gap that often stems from a lack of understanding and an associated fear of the unknown or of potential malfunctions. Traditional methods such as real-car demonstrations are expensive, time-consuming, and potentially intimidating for hesitant individuals, whereas simulation offers a safe, cost-effective, and controlled environment for interaction and education.

To implement this, the authors built a platform integrating key industry-standard AV software stacks with a high-fidelity simulator:

  • SVL Simulator: An open-source, Unity-based simulator supporting real-time sensor data (Camera, LiDAR, Radar, GPS, IMU) and environmental controls (time, weather). It allows importing 3D digital twins of real locations.
  • Baidu Apollo 5.0: An open-source, industrially deployed SAE Level 4 autonomous driving platform providing modules for mapping, localization, perception, prediction, planning, and control. It connects to SVL through its CyberRT bridge, and its Dreamview web interface provides detailed real-time visualization of the AV's state, sensor inputs, and planning outputs.
  • Autoware Auto: Another open-source AV software stack focused on specific applications like valet parking. It uses LiDAR for localization and planning and connects to SVL via ROS2. The rviz2 interface visualizes LiDAR data and vehicle status.
  • Logitech G920: A consumer-grade force feedback steering wheel and pedal set used for participant manual driving and providing a high-fidelity interface.

The system architecture involves two gaming laptops (Intel i7, Nvidia GTX 1080, Ubuntu 20.04) connected via Ethernet. One runs the SVL Simulator, while the other runs either Apollo or Autoware. This setup provides sufficient computational power for smooth operation and real-time communication between the simulator and the AV software stack.
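
As an illustration, the simulator machine can be driven from a short script using the SVL (formerly LGSVL) Python API; the sketch below loads a scene, spawns an Apollo-configured ego vehicle, and connects it to the CyberRT bridge on the second laptop. The IP addresses, the sample "BorregasAve" map, and the ego-vehicle name are placeholders, not details taken from the paper.

```python
import lgsvl

# Placeholder addresses for the two laptops described above (assumptions).
SIMULATOR_HOST = "192.168.1.10"   # laptop running the SVL Simulator
APOLLO_HOST = "192.168.1.11"      # laptop running Apollo 5.0 and its CyberRT bridge

sim = lgsvl.Simulator(SIMULATOR_HOST, 8181)

# Load a scene; "BorregasAve" is a sample map bundled with SVL, used here
# only as a stand-in for the paper's digital-twin maps.
if sim.current_scene == "BorregasAve":
    sim.reset()
else:
    sim.load("BorregasAve")

# Spawn an ego vehicle configured for Apollo 5.0 and connect it to the bridge
# running on the second laptop (default bridge port 9090).
spawns = sim.get_spawn()
state = lgsvl.AgentState()
state.transform = spawns[0]
ego = sim.add_agent("Lincoln2017MKZ (Apollo 5.0)", lgsvl.AgentType.EGO, state)
ego.connect_bridge(APOLLO_HOST, 9090)

# Environmental controls mentioned above, e.g. time of day.
sim.set_time_of_day(19.0)

sim.run()  # hand control to Apollo and run until interrupted
```

The same connect_bridge call points the ego vehicle at a ROS2 bridge when Autoware is run on the second machine instead of Apollo.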

The research methodology involved a human-subject study with 28 licensed drivers. Participants first completed a survey assessing their pre-existing understanding and perception of AVs across six dimensions, each rated on a seven-point Likert scale: perceived risk, perceived usefulness, perceived ease-of-use, technical competence, situational management, and behavioral intention.

Next, participants practiced using the simulator with the Logitech G920 setup in a third-person view. They then manually drove through five predefined scenarios designed to present common or slightly challenging traffic situations:

  1. Vehicle Following: Following a car with unpredictable speed changes.
  2. Lane Block: Encountering a stopped vehicle requiring a lane change.
  3. Pedestrian Jaywalking: A sudden pedestrian emergence requiring emergency braking.
  4. City Traffic: Navigating an urban environment with traffic lights and unprotected turns (using a digital twin map).
  5. Valet Parking: Driving through a parking lot and performing reverse parking (using a digital twin map).
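
As an example of how such scenarios can be scripted, the pedestrian-jaywalking situation (scenario 3) might be set up through the SVL Python API roughly as in the sketch below; the map, agent names, and distances are illustrative assumptions, not the paper's actual scenario definitions.

```python
import lgsvl
from lgsvl.utils import transform_to_forward, transform_to_right

sim = lgsvl.Simulator("127.0.0.1", 8181)
sim.load("BorregasAve")  # placeholder map

spawn = sim.get_spawn()[0]
forward = transform_to_forward(spawn)
right = transform_to_right(spawn)

# Ego vehicle at the spawn point.
ego_state = lgsvl.AgentState()
ego_state.transform = spawn
ego = sim.add_agent("Lincoln2017MKZ (Apollo 5.0)", lgsvl.AgentType.EGO, ego_state)

# Pedestrian placed on the roadside ahead of the ego vehicle.
ped_state = lgsvl.AgentState()
ped_state.transform.position = spawn.position + 30.0 * forward + 5.0 * right
ped = sim.add_agent("Bob", lgsvl.AgentType.PEDESTRIAN, ped_state)

# Waypoints that take the pedestrian straight across the road, forcing an
# emergency stop by whoever is driving.
waypoints = [
    lgsvl.WalkWaypoint(spawn.position + 30.0 * forward + 5.0 * right, 0),
    lgsvl.WalkWaypoint(spawn.position + 30.0 * forward - 5.0 * right, 0),
]
ped.follow(waypoints, loop=False)

sim.run(30)  # simulate 30 seconds
```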

After attempting each scenario manually, participants observed a demonstration of an AV (using Apollo for scenarios 1-4, Autoware for scenario 5) handling the exact same situation within the simulator. During the AV demonstration, the corresponding AV platform's visualization interface (Apollo Dreamview or Autoware rviz2) was displayed, showing how the AV perceived its environment, planned its route, and executed maneuvers. Explanations were provided in non-technical language, highlighting concepts like sensor data interpretation (camera object detection in Apollo, LiDAR point clouds in Autoware) and HD mapping.

After completing all scenarios and demonstrations, participants filled out a second survey, repeating the quantitative questions to measure changes in perception. They also rated the usefulness of the information displayed during the AV demonstration and provided qualitative feedback.
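
For concreteness, the sketch below shows one common way such within-subject Likert data could be compared before and after the demonstration; the values are invented placeholders, and the paper's actual statistical procedure is not detailed in this summary.

```python
import numpy as np
from scipy import stats

# Illustrative per-participant dimension scores (1-7 Likert means) before and
# after the demonstration -- placeholder values, not the paper's data.
pre_risk  = np.array([5.2, 4.8, 6.0, 5.5, 4.1, 5.0, 5.7, 4.9])
post_risk = np.array([4.1, 4.0, 5.1, 4.3, 3.8, 4.2, 4.6, 4.0])

# Paired t-test on the within-subject change (one common choice).
t, p = stats.ttest_rel(pre_risk, post_risk)
print(f"mean change = {np.mean(post_risk - pre_risk):+.2f}, t = {t:.2f}, p = {p:.4f}")

# Non-parametric alternative better suited to ordinal Likert responses.
w, p_w = stats.wilcoxon(pre_risk, post_risk)
print(f"Wilcoxon W = {w:.1f}, p = {p_w:.4f}")
```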

Practical Implications and Findings:

The paper yielded several key findings with practical implications:

  1. Perception Shift: The simulation demonstration significantly reduced participants' perceived risk of AVs and significantly increased their perceived usefulness. This suggests that direct exposure, even in a simulated environment, can effectively address concerns about safety and highlight potential benefits.
  2. Demonstrating Capability: Comparing their own struggles in challenging scenarios (e.g., collisions during vehicle following, lane block, or pedestrian jaywalking) with the AV's smooth, safe handling of the same situations was crucial for participants to appreciate the AV's capabilities.
  3. Information Transparency: Participants highly rated the usefulness of displayed information like routing, sensor data (vehicle, pedestrian, traffic indicator sensing), and prediction of other agents' movements. While the planning/control graphs were less understandable to non-experts, the overall transparency provided by the AV interfaces helped build confidence by showing how the AV makes decisions. This implies that future in-car HMI (Human-Machine Interface) designs should prioritize displaying relevant environmental perception and planned actions.
  4. Behavioral Intention: Despite improved perception of risk and usefulness, participants' stated intention to use or purchase an AV did not significantly increase. The authors attribute this to already high baseline expectations and the current lack of commercially available, high-level AVs and supporting regulations. This highlights that while understanding and trust are important, external factors like availability and legal frameworks also heavily influence adoption.
  5. Simulation Limitations: The paper acknowledges limitations, such as the third-person view, lack of immersive hardware beyond the steering wheel, and the limited number of scenarios. These factors could impact the realism and effectiveness compared to more sophisticated setups (e.g., VR, motion platforms).
  6. Scalability and Cost-Effectiveness: The integrated platform using open-source software (Apollo, Autoware, SVL) and commercial hardware (Logitech G920, gaming laptops) offers a relatively low-cost and scalable approach for AV education and testing compared to real-world trials. The use of industrially relevant platforms means simulation results can potentially inform real-world performance.

Implementation Considerations:

Implementing a similar system for educational or demonstration purposes would involve:

  • Setting up the simulation environment: Installing SVL Simulator and potentially creating or acquiring relevant 3D maps and scenarios.
  • Integrating AV software: Cloning and building Apollo or Autoware (or other stacks) and configuring their communication bridges (CyberRT, ROS2) with SVL. This requires a Linux environment (Ubuntu 20.04 was used in the paper); a bridge sanity-check sketch follows this list.
  • Hardware setup: Securing adequate computing resources (high-end CPUs and GPUs) and a realistic steering wheel/pedal set. Multiple machines might be needed depending on the simulation and AV software's resource demands.
  • Developing/selecting scenarios: Creating specific driving situations that effectively highlight AV capabilities relevant to the target audience's concerns (e.g., safety in emergencies, handling complex intersections, navigation).
  • Designing the user experience: Determining what information to display from the AV software (e.g., sensor data, planned path, state changes) and how to present it clearly alongside the simulation view. Providing non-technical explanations is crucial.
  • Evaluation: Designing pre- and post-demonstration assessments to measure the impact on user perception, understanding, and trust.
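
For the bridge-integration step above, a quick sanity check is to confirm that simulated sensor data actually reaches the AV stack. The minimal ROS2 subscriber below (relevant to the Autoware/ROS2 path) counts incoming LiDAR messages; the topic name is an assumption that depends on the sensor and bridge configuration.

```python
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import PointCloud2

# Assumed topic name; the actual topic depends on the SVL sensor configuration
# and the ROS2 bridge setup.
LIDAR_TOPIC = "/lidar_front/points_raw"

class BridgeCheck(Node):
    """Counts incoming LiDAR messages to confirm the SVL-to-ROS2 bridge is alive."""

    def __init__(self):
        super().__init__("bridge_check")
        self.count = 0
        self.create_subscription(
            PointCloud2, LIDAR_TOPIC, self.on_cloud, qos_profile_sensor_data)

    def on_cloud(self, msg: PointCloud2) -> None:
        self.count += 1
        if self.count % 10 == 0:
            self.get_logger().info(f"received {self.count} point clouds")

def main() -> None:
    rclpy.init()
    node = BridgeCheck()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```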

The paper concludes that such simulation platforms are valuable tools for AV education, successfully reducing perceived risk and increasing perceived usefulness. While not a complete solution for behavioral adoption (which depends on factors beyond understanding), they provide a practical method for demonstrating complex AV systems in an accessible and safe manner, which is essential for gaining public acceptance. Future work could expand scenarios, use more immersive simulation hardware, and potentially tailor information delivery to individual users.