Enhancing Cross-Dataset Performance of Distracted Driving Detection With Score Softmax Classifier And Dynamic Gaussian Smoothing Supervision (2310.05202v4)

Published 8 Oct 2023 in cs.CV

Abstract: Deep neural networks enable real-time monitoring of in-vehicle drivers, facilitating the timely prediction of distractions, fatigue, and potential hazards. This technology is now integral to intelligent transportation systems. Recent research has exposed unreliable cross-dataset driver behavior recognition caused by limited data samples and background noise. In this paper, we propose a Score-Softmax classifier, which reduces model overconfidence by enhancing category independence. Imitating the human scoring process, we design a two-dimensional dynamic supervisory matrix composed of one-dimensional Gaussian-smoothed labels. The dynamic loss-descent direction and Gaussian smoothing increase training uncertainty, preventing the model from falling into noise traps. Furthermore, we introduce a simple and convenient multi-channel information fusion method; it addresses the fusion issue among arbitrary Score-Softmax classification heads. We conducted cross-dataset experiments on the SFDDD, AUCDD, and 100-Driver datasets, demonstrating that Score-Softmax improves cross-dataset performance without modifying the model architecture. The experiments indicate that the Score-Softmax classifier reduces the interference of background noise and enhances model robustness, increasing cross-dataset accuracy by 21.34%, 11.89%, and 18.77% on the three datasets, respectively. The code is publicly available at https://github.com/congduan-HNU/SSoftmax.
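To make the supervision scheme concrete, the following is a minimal, illustrative sketch of how a Gaussian-smoothed score-supervision matrix could be constructed: each class receives a one-dimensional Gaussian over a set of score bins, peaking near the highest bin for the ground-truth class and near the lowest bin for all other classes. The bin count, Gaussian width, and row normalization below are assumptions chosen for illustration, not the paper's exact construction; refer to the linked repository for the authors' implementation.

```python
import numpy as np

def gaussian_smoothed_scores(num_classes, num_bins, true_class, sigma=0.8):
    """Illustrative supervisory matrix of shape (num_classes, num_bins).

    Each row is a 1-D Gaussian over score bins: the ground-truth class
    peaks at the highest score bin, every other class at the lowest.
    Bin layout, sigma, and normalization are assumptions, not the
    paper's exact design.
    """
    bins = np.arange(num_bins, dtype=np.float64)
    target = np.empty((num_classes, num_bins), dtype=np.float64)
    for c in range(num_classes):
        center = num_bins - 1 if c == true_class else 0.0
        row = np.exp(-0.5 * ((bins - center) / sigma) ** 2)
        target[c] = row / row.sum()  # each row is a valid score distribution
    return target

# Example: 10 driving-behavior classes, 5 score bins, class 3 is the ground truth.
T = gaussian_smoothed_scores(num_classes=10, num_bins=5, true_class=3)
print(T[3].round(3))  # mass concentrated at the highest score bin
print(T[0].round(3))  # mass concentrated at the lowest score bin
```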
