Enhancing Student Feedback Using Predictive Models in Visual Literacy Courses (2405.15026v1)
Abstract: Peer review is a popular feedback mechanism in higher education that actively engages students and gives researchers a means of assessing student engagement. However, there is little empirical support for the durability of peer review, particularly when predictive modeling is used to analyze student comments. This study applies Naïve Bayes modeling to peer review data collected from an undergraduate visual literacy course over five years. We extend the research of Friedman and Rosen and of Beasley et al. by focusing on Naïve Bayes modeling of students' remarks. Our findings highlight the utility of Naïve Bayes modeling, particularly for analyzing student comments by part of speech, where nouns emerged as the most prominent category. Additionally, when students' comments were examined against the visual peer review rubric, the lie factor emerged as the predominant factor. Comparing the Naïve Bayes model with Beasley's approach, we found that both help instructors map the directions taken in the class, but the Naïve Bayes model provides a more specific outline for forecasting and a more detailed framework for identifying core topics within the course, enhancing the forecasting of educational directions. Through the holdout method and k-fold cross-validation with continuity correction, we validated the model's predictive accuracy, underscoring its effectiveness in offering deep insights into peer review mechanisms. Our findings suggest that predictive modeling of student comments offers a new way to act on students' feedback on one another's visual work. This can benefit courses by inspiring changes to course content, reinforcement of course content, modification of projects, or revision of the rubric itself.
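To make the core technique concrete, the following is a minimal sketch of multinomial Naïve Bayes classification of review comments into rubric categories, in the spirit of the abstract. The toy comments and the `lie_factor` / `data_ink` labels are invented for illustration and are not the authors' dataset or code; the paper's actual pipeline (e.g., the R `e1071` package cited below) may differ.

```python
# Illustrative multinomial Naive Bayes for rubric-category classification
# of peer-review comments. Toy data; labels are hypothetical examples of
# rubric factors (the "lie factor" is one category named in the abstract).
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(docs):
    """docs: list of (text, label) pairs. Returns log-priors and
    Laplace-smoothed log-likelihoods per class."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    n = len(docs)
    model = {"priors": {}, "likelihoods": {}, "vocab": vocab}
    for label, count in label_counts.items():
        model["priors"][label] = math.log(count / n)
        total = sum(word_counts[label].values())
        model["likelihoods"][label] = {
            w: math.log((word_counts[label][w] + 1) / (total + len(vocab)))
            for w in vocab
        }
    return model

def predict(model, text):
    """Pick the class maximizing log P(class) + sum of word log-likelihoods."""
    scores = {}
    for label in model["priors"]:
        score = model["priors"][label]
        for w in tokenize(text):
            if w in model["vocab"]:
                score += model["likelihoods"][label][w]
        scores[label] = score
    return max(scores, key=scores.get)

train_docs = [
    ("the chart exaggerates the trend", "lie_factor"),
    ("scale distorts the actual change", "lie_factor"),
    ("too much ink for gridlines", "data_ink"),
    ("remove redundant ink and borders", "data_ink"),
]
model = train(train_docs)
print(predict(model, "the scale exaggerates the change"))  # -> lie_factor
```

A holdout evaluation, as described in the abstract, would simply fit `train` on one portion of the comments and score `predict` on the rest; k-fold cross-validation repeats that split k times.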
- A. Friedman and P. Rosen, “Leveraging peer review in visualization education: A proposal for a new model,” arXiv preprint, 2021.
- Z. J. Beasley, A. Friedman, and P. Rosen, “Through the looking glass: Insights into visualization pedagogy through sentiment analysis of peer review text,” IEEE Computer Graphics and Applications, vol. 41, no. 6, pp. 59–70, 2021.
- B. A. Chegu, C. Yun-Tsui, and S.-H. Hsieh, “A review of tertiary BIM education for advanced engineering communication with visualization,” Visualization in Engineering, vol. 4, no. 1, pp. 1–17, 2016.
- S. E. Sorour, T. Mine, K. Goda, and S. Hirokawa, “Predicting students’ grades based on free style comments data by artificial neural network,” in 2014 IEEE Frontiers in Education Conference (FIE) Proceedings. IEEE, 2014, pp. 1–9.
- G. Domik, “Who is on my team: Building strong teams in interdisciplinary visualization courses,” in ACM SIGGRAPH ASIA 2009 Educators Program, 2009, pp. 1–7.
- K. Okoye, A. Arrona-Palacios, C. Camacho-Zuñiga, J. A. G. Achem, J. Escamilla, and S. Hosseini, “Towards teaching analytics: a contextual model for analysis of students’ evaluation of teaching through text mining and machine learning classification,” Education and Information Technologies, pp. 1–43, 2022.
- B. G. Trogden, C. Kennedy, and N. K. Biyani, “Mapping and making meaning from undergraduate student engagement in high-impact educational practices,” Innovative Higher Education, vol. 48, no. 1, pp. 145–168, 2023.
- E. E. Firat, A. Joshi, and R. S. Laramee, “Interactive visualization literacy: The state-of-the-art,” Information Visualization, vol. 21, no. 3, pp. 285–310, 2022.
- Z.-J. Liu, V. Levina, and Y. Frolova, “Information visualization in the educational process: Current trends,” International Journal of Emerging Technologies in Learning (iJET), vol. 15, no. 13, pp. 49–62, 2020.
- P. Isenberg, N. Elmqvist, J. Scholtz, D. Cernea, K.-L. Ma, and H. Hagen, “Collaborative visualization: Definition, challenges, and research agenda,” Information Visualization, vol. 10, no. 4, pp. 310–326, 2011.
- X. Chen, D. Zou, H. Xie, G. Cheng, and C. Liu, “Two decades of artificial intelligence in education,” Educational Technology & Society, vol. 25, no. 1, pp. 28–47, 2022.
- I. Khan, A. R. Ahmad, N. Jabeur, and M. N. Mahdi, “An artificial intelligence approach to monitor student performance and devise preventive measures,” Smart Learning Environments, vol. 8, no. 1, pp. 1–18, 2021.
- G. Zingle, B. Radhakrishnan, Y. Xiao, E. Gehringer, Z. Xiao, F. Pramudianto, G. Khurana, and A. Arnav, “Detecting suggestions in peer assessments,” International Educational Data Mining Society, 2019.
- L. Li, X. Chen, H. Ye, Z. Bi, S. Deng, N. Zhang, and H. Chen, “On robustness and bias analysis of BERT-based relation extraction,” in Knowledge Graph and Semantic Computing: Knowledge Graph Empowers New Infrastructure Construction: 6th China Conference, CCKS 2021, Guangzhou, China, November 4–7, 2021, Proceedings 6. Springer, 2021, pp. 43–59.
- Y. Xiao, G. Zingle, Q. Jia, H. R. Shah, Y. Zhang, T. Li, M. Karovaliya, W. Zhao, Y. Song, J. Ji et al., “Detecting problem statements in peer assessments,” arXiv preprint, 2020.
- K. H. Brodersen, F. Gallusser, J. Koehler, N. Remy, and S. L. Scott, “Inferring causal impact using bayesian structural time-series models,” The Annals of Applied Statistics, pp. 247–274, 2015.
- F. Razaque, N. Soomro, S. A. Shaikh, S. Soomro, J. A. Samo, N. Kumar, and H. Dharejo, “Using naïve bayes algorithm to students’ bachelor academic performances analysis,” in IEEE International Conference on Engineering Technologies and Applied Sciences (ICETAS), 2017, pp. 1–5.
- S. Wahyuni and M. Marbun, “Implementation of data mining in predicting the study period of student using the naïve bayes algorithm,” IOP Conference Series: Materials Science and Engineering, vol. 769, no. 1, p. 012039, 2020.
- A. Tripathi, S. Yadav, and R. Rajan, “Naive bayes classification model for the student performance prediction,” in International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), vol. 1, 2019, pp. 1548–1553.
- S. Maitra, S. Madan, R. Kandwal, and P. Mahajan, “Mining authentic student feedback for faculty using naïve bayes classifier,” Procedia Computer Science, vol. 132, pp. 1171–1183, 2018.
- D. Todorovic, “Gestalt principles,” Scholarpedia, vol. 3, no. 12, p. 5345, 2008.
- M. Dvir and D. Ben-Zvi, “Informal statistical models and modeling,” Mathematical Thinking and Learning, vol. 25, no. 1, pp. 79–99, 2023.
- D. Meyer, E. Dimitriadou, K. Hornik, A. Weingessel, F. Leisch, C.-C. Chang, C.-C. Lin, and M. D. Meyer, “Package ‘e1071’,” The R Journal, 2019.
- D. K. Dake and E. Gyimah, “Using sentiment analysis to evaluate qualitative students’ responses,” Education and Information Technologies, vol. 28, no. 4, pp. 4629–4647, 2023.
- U. S. Rahmah, “Teaching part of speech and word groups as units of meaning in ESP speaking class,” Journal of English for Academic and Specific Purposes (JEASP), vol. 1, no. 1, pp. 54–66, 2018.
- Alon Friedman (6 papers)
- Kevin Hawley (1 paper)
- Paul Rosen (41 papers)
- Md Dilshadur Rahman (5 papers)