
Vision-based Vehicle Re-identification in Bridge Scenario using Flock Similarity (2403.07752v1)

Published 12 Mar 2024 in cs.CV

Abstract: Due to the needs of road traffic flow monitoring and public safety management, video surveillance cameras are widely distributed along urban roads. However, the information captured by each camera is siloed, making it difficult to use effectively. Vehicle re-identification refers to finding a vehicle that appears under one camera in another camera, which allows the information captured by multiple cameras to be correlated. While license plate recognition plays an important role in some applications, there are scenarios where re-identification methods based on vehicle appearance are more suitable. The main challenge is that vehicle appearance data exhibit high inter-class similarity and large intra-class variation, so it is difficult to distinguish different vehicles accurately from appearance alone. In such cases, it is often necessary to introduce extra information, such as spatio-temporal cues. In the bridge scenario, moreover, the relative position of vehicles rarely changes between two adjacent cameras. In this paper, we present a vehicle re-identification method based on flock similarity, which improves re-identification accuracy by exploiting the vehicles adjacent to the target vehicle. When the relative positions of vehicles remain unchanged and the flock size is appropriate, we obtain an average relative improvement of 204% on the VeRi dataset in our experiments. We then discuss the effect of the magnitude of the change in the vehicles' relative positions as they pass through two cameras, and present two metrics that quantify this difference, establishing a connection between them. Although the assumption is motivated by the bridge scenario, it often holds in other scenarios as well, owing to driving-safety constraints and camera placement.
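
The abstract's core idea is to score a query-gallery pair not only by the appearance similarity of the two vehicles themselves, but also by how well the vehicles adjacent to the query match the vehicles at the same relative offsets around the gallery candidate, under the assumption that vehicle order is preserved between adjacent cameras. The sketch below illustrates one way such a blended score could be computed; the function names, the simple averaging scheme, the fixed-order neighbour matching, and the weight `alpha` are illustrative assumptions, not the authors' published algorithm.

```python
# Illustrative sketch only: the averaging scheme, the order-preserving
# neighbour matching, and the blend weight `alpha` are assumptions for
# exposition, not the paper's exact method.
import numpy as np


def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two appearance feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def flock_score(query_feats, gallery_feats, q_idx, g_idx,
                flock_size=2, alpha=0.5):
    """
    Score the query vehicle at position q_idx (camera A sequence) against
    the gallery vehicle at position g_idx (camera B sequence).

    Assumes both sequences are ordered by passing time, so relative vehicle
    positions are approximately preserved between the two cameras, as in the
    bridge scenario described in the abstract.
    """
    # Appearance similarity of the target pair itself.
    target_sim = cosine_sim(query_feats[q_idx], gallery_feats[g_idx])

    # Compare the query's neighbours with the gallery vehicles at the same
    # relative offsets (the "flock").
    neighbour_sims = []
    for off in range(-flock_size, flock_size + 1):
        if off == 0:
            continue
        qi, gi = q_idx + off, g_idx + off
        if 0 <= qi < len(query_feats) and 0 <= gi < len(gallery_feats):
            neighbour_sims.append(cosine_sim(query_feats[qi], gallery_feats[gi]))

    flock_sim = float(np.mean(neighbour_sims)) if neighbour_sims else 0.0
    # Blend individual appearance evidence with flock evidence.
    return alpha * target_sim + (1.0 - alpha) * flock_sim
```

In practice one would rank all gallery candidates by this blended score; when the order of vehicles is preserved, the flock term penalises candidates whose surrounding vehicles do not match the query's neighbours, which is the intuition behind the reported accuracy gain.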

