ANNA: A Deep Learning Based Dataset in Heterogeneous Traffic for Autonomous Vehicles (2401.11358v1)
Abstract: Recent breakthroughs in artificial intelligence offer tremendous promise for the development of self-driving applications. Deep Neural Networks, in particular, are being used to support the operation of semi-autonomous cars through object detection and semantic segmentation. To address the inadequacy of existing datasets in the context of autonomous and semi-autonomous cars, we created a new dataset named ANNA. This study discusses a custom-built dataset that includes vehicle types found on Bangladeshi roads which are not represented in existing datasets. Dataset validity was checked by evaluating trained models with the Intersection over Union (IoU) metric. The results demonstrate that the model trained on our custom dataset was more precise and efficient than models trained on the KITTI or COCO datasets with respect to Bangladeshi traffic. The research presented in this paper also emphasizes the importance of developing accurate and efficient object detection algorithms for the advancement of autonomous vehicles.
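The validity check described above relies on the standard Intersection over Union metric. As a minimal sketch (the paper does not specify its box format; corner coordinates `(x1, y1, x2, y2)` are assumed here), IoU between a predicted and a ground-truth box can be computed as:

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes in (x1, y1, x2, y2) form."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is then typically counted as correct when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice in KITTI- and COCO-style evaluation).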
Authors: Mahedi Kamal, Tasnim Fariha, Afrina Kabir Zinia, Md. Abu Syed, Fahim Hasan Khan, Md. Mahbubur Rahman