
ConceptThread: Visualizing Threaded Concepts in MOOC Videos (2401.11132v1)

Published 20 Jan 2024 in cs.HC

Abstract: Massive Open Online Course (MOOC) platforms have become increasingly popular in recent years. To learn new knowledge, online learners often need to watch an entire course video, which is tedious and time-consuming because no quick overview of the covered knowledge and its structure is available. In this paper, we propose ConceptThread, a visual analytics approach that effectively shows concepts and the relations among them to facilitate online learning. Specifically, since the majority of MOOC videos contain slides, we first leverage video processing and speech analysis techniques, including shot recognition, speech recognition, and topic modeling, to extract core knowledge concepts and construct the hierarchical and temporal relations among them. Then, using the metaphor of a thread, we present a novel visualization that intuitively displays the concepts along the sequential flow of the video and enables learners to interactively explore them. We conducted a quantitative study, two case studies, and a user study to extensively evaluate ConceptThread. The results demonstrate the effectiveness and usability of ConceptThread in providing online learners with a quick understanding of the knowledge content of MOOC videos.
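The abstract describes a pipeline that segments a lecture video, extracts concepts per segment, and links recurring concepts across segments into temporal "threads." The paper's actual system uses shot recognition, speech recognition, and topic modeling; the sketch below is only a minimal stand-in that illustrates the threading idea over plain transcript text, using simple term frequency in place of topic modeling. All names (`top_concepts`, `concept_threads`) and the stopword list are hypothetical, not from the paper.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a real system would use a full NLP stopword set.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are",
             "we", "this", "that", "for", "on", "it", "using"}

def top_concepts(segment_text, k=3):
    """Return the k most frequent non-stopword terms in one video segment.

    Stand-in for the paper's topic-modeling step.
    """
    words = re.findall(r"[a-z]+", segment_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(k)]

def concept_threads(segments, k=3):
    """Link concepts recurring across segments into temporal threads.

    Returns a dict mapping each concept to the ordered list of segment
    indices where it ranks among the top-k terms, i.e. its "thread"
    through the video's sequential flow.
    """
    threads = {}
    for i, seg in enumerate(segments):
        for concept in top_concepts(seg, k):
            threads.setdefault(concept, []).append(i)
    return threads

# Example: three consecutive transcript segments from a hypothetical lecture.
segments = [
    "Gradient descent updates weights with the gradient of the loss.",
    "The learning rate scales the gradient step in gradient descent.",
    "Momentum smooths gradient updates across iterations.",
]
threads = concept_threads(segments)
print(threads["gradient"])  # the concept spans all three segments
```

A visualization layer would then render each dict entry as one thread, with gaps in the index list showing where a concept drops out of the lecture and later returns.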

