
ANNA: A Deep Learning Based Dataset in Heterogeneous Traffic for Autonomous Vehicles (2401.11358v1)

Published 21 Jan 2024 in cs.CV

Abstract: Recent breakthroughs in artificial intelligence offer tremendous promise for the development of self-driving applications. Deep Neural Networks, in particular, are being utilized to support the operation of semi-autonomous cars through object identification and semantic segmentation. To assess the inadequacy of existing datasets in the context of autonomous and semi-autonomous cars, we created a new dataset named ANNA. This study discusses a custom-built dataset that includes several vehicle types specific to Bangladesh which are not represented in existing datasets. A dataset validity check was performed by evaluating models using the Intersection over Union (IoU) metric. The results demonstrated that the model trained on our custom dataset was more precise and efficient than models trained on the KITTI or COCO datasets with respect to Bangladeshi traffic. The research presented in this paper also emphasizes the importance of developing accurate and efficient object detection algorithms for the advancement of autonomous vehicles.
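The dataset validity check described above scores detections with Intersection over Union (IoU). The paper does not publish its evaluation code, so the following is only an illustrative sketch of the standard IoU computation for axis-aligned bounding boxes in (x1, y1, x2, y2) format:

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    return inter / union if union else 0.0
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a fixed threshold (0.5 is a common choice).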

Authors (6)
  1. Mahedi Kamal (1 paper)
  2. Tasnim Fariha (1 paper)
  3. Afrina Kabir Zinia (1 paper)
  4. Md. Abu Syed (1 paper)
  5. Fahim Hasan Khan (6 papers)
  6. Md. Mahbubur Rahman (2 papers)
Citations (1)