
A Benchmark for Cycling Close Pass Detection from Video Streams (2304.11868v2)

Published 24 Apr 2023 in cs.CV

Abstract: Cycling is a healthy and sustainable mode of transport. However, interactions with motor vehicles remain a key barrier to increased cycling participation. The ability to detect potentially dangerous interactions from on-bike sensing could provide important information to riders and policymakers. A key influence on rider comfort and safety is close passes, i.e., when a vehicle narrowly passes a cyclist. In this paper, we introduce a novel benchmark, called Cyc-CP, towards close pass (CP) event detection from video streams. The task is formulated into two problem categories: scene-level and instance-level. Scene-level detection ascertains the presence of a CP event within the provided video clip. Instance-level detection identifies the specific vehicle within the scene that precipitates a CP event. To address these challenges, we introduce four benchmark models, each underpinned by advanced deep-learning methodologies. To train and evaluate these models, we have developed a synthetic dataset and collected a real-world dataset. The benchmark evaluations reveal that the models achieve an accuracy of 88.13% for scene-level detection and 84.60% for instance-level detection on the real-world dataset. We envision this benchmark as a test-bed to accelerate CP detection and facilitate interaction between the fields of road safety, intelligent transportation systems and artificial intelligence. Both the benchmark datasets and detection models will be available at https://github.com/SustainableMobility/cyc-cp to facilitate experimental reproducibility and encourage more in-depth research in the field.
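
To make the two task formulations concrete, the sketch below frames scene-level detection as binary classification over a whole video clip and instance-level detection as scoring each tracked vehicle in the scene. This is a minimal, hedged illustration only: the module names, input shapes, 3D-CNN backbone, and per-track scorer are assumptions for exposition and do not reflect the four benchmark models released with Cyc-CP.

```python
# Illustrative sketch of the two Cyc-CP task formulations (not the authors' models).
import torch
import torch.nn as nn

class SceneLevelCP(nn.Module):
    """Scene-level: does this video clip contain a close-pass event? (binary)"""
    def __init__(self):
        super().__init__()
        # Tiny 3D-CNN over clips shaped (batch, channels, frames, height, width).
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, 1)  # single logit: "CP event present"

    def forward(self, clip):                     # clip: (B, 3, T, H, W)
        x = self.features(clip).flatten(1)       # (B, 16)
        return self.head(x)                      # (B, 1)

class InstanceLevelCP(nn.Module):
    """Instance-level: which tracked vehicle precipitates the close pass?"""
    def __init__(self, track_dim=64):
        super().__init__()
        # Scores each per-vehicle track feature independently (track_dim is assumed).
        self.scorer = nn.Sequential(
            nn.Linear(track_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, tracks):                   # tracks: (B, N_vehicles, track_dim)
        return self.scorer(tracks).squeeze(-1)   # (B, N_vehicles) per-vehicle CP logits
```

Under this framing, a scene-level model answers "did a close pass occur in this clip?", while an instance-level model additionally attributes the event to a specific vehicle, which is the harder of the two tasks reported in the abstract.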
