
A General Model for Detecting Learner Engagement: Implementation and Evaluation (2405.04251v1)

Published 7 May 2024 in cs.CV, cs.HC, and cs.LG

Abstract: Considering learner engagement benefits both learners and instructors. On the one hand, instructors can help learners increase their attention, involvement, motivation, and interest; on the other, instructors can improve their own instructional performance by evaluating the cumulative results of all learners and upgrading their training programs. This paper proposes a general, lightweight model for selecting and processing features to detect learners' engagement levels while preserving the sequential temporal relationship over time. During training and testing, we analyzed videos from the publicly available DAiSEE dataset to capture the dynamic essence of learner engagement. We also propose an adaptation policy that derives new labels from the dataset's education-related affective states, thereby improving the model's judgment. The suggested model achieves an accuracy of 68.57% in a specific implementation and outperforms the state-of-the-art models studied for detecting learners' engagement levels.
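
The abstract does not specify the architecture, so the following is only a minimal sketch of the kind of pipeline it describes: per-frame feature vectors fed to a recurrent model that preserves temporal order, classified into engagement levels, plus a label-adaptation rule over DAiSEE's four annotated affective states (boredom, engagement, confusion, frustration, each at intensity 0-3). The feature dimension, sequence length, and the `adapt_label` rule are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch (not the authors' implementation) of a temporal
# engagement classifier over per-frame feature vectors.
import torch
import torch.nn as nn

class EngagementLSTM(nn.Module):
    def __init__(self, feat_dim=49, hidden_dim=128, num_levels=4):
        super().__init__()
        # LSTM preserves the sequential temporal relationship across frames.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_levels)

    def forward(self, x):             # x: (batch, frames, feat_dim)
        _, (h_n, _) = self.lstm(x)    # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])     # logits over engagement levels

def adapt_label(boredom, engagement, confusion, frustration):
    """Hypothetical adaptation policy: collapse DAiSEE's four affective
    state intensities (each 0-3) into one engagement label. The paper's
    actual policy may differ."""
    return max(0, engagement - max(boredom, frustration) // 2)

model = EngagementLSTM()
clips = torch.randn(2, 30, 49)        # 2 clips, 30 frames, 49 features each
logits = model(clips)
print(logits.argmax(dim=1))           # predicted engagement level per clip
```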
