Learnersourcing in the Age of AI: Student, Educator and Machine Partnerships for Content Creation (2306.06386v1)

Published 10 Jun 2023 in cs.HC and cs.AI

Abstract: Engaging students in creating novel content, also referred to as learnersourcing, is increasingly recognised as an effective approach to promoting higher-order learning, deeply engaging students with course material and developing large repositories of content suitable for personalized learning. Despite these benefits, some common concerns and criticisms are associated with learnersourcing (e.g., the quality of resources created by students, challenges in incentivising engagement and lack of availability of reliable learnersourcing systems), which have limited its adoption. This paper presents a framework that considers the existing learnersourcing literature, the latest insights from the learning sciences and advances in AI to offer promising future directions for developing learnersourcing systems. The framework is designed around important questions and human-AI partnerships relating to four key aspects: (1) creating novel content, (2) evaluating the quality of the created content, (3) utilising learnersourced contributions of students and (4) enabling instructors to support students in the learnersourcing process. We then present two comprehensive case studies that illustrate the application of the proposed framework in relation to two existing popular learnersourcing systems.
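
The framework's second aspect, evaluating the quality of learner-created content, is often operationalised in learnersourcing systems by aggregating peer ratings. The sketch below is a minimal illustration of that general idea, not the paper's method: the class names, the 1-5 rating scale, and the trust-weighting scheme are all hypothetical assumptions introduced here for clarity.

```python
# Minimal illustrative sketch (assumptions, not the paper's system):
# estimate the quality of a learnersourced item by trust-weighting
# peer ratings. Names, the 1-5 scale, and the weights are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PeerRating:
    reviewer_id: str
    score: float                 # quality rating on a hypothetical 1-5 scale
    reviewer_trust: float = 1.0  # hypothetical reliability weight in [0, 1]

@dataclass
class LearnersourcedItem:
    author_id: str
    prompt: str
    ratings: list[PeerRating] = field(default_factory=list)

    def quality_estimate(self) -> float:
        """Trust-weighted mean of peer scores; 0.0 if the item is unrated."""
        if not self.ratings:
            return 0.0
        total_trust = sum(r.reviewer_trust for r in self.ratings)
        return sum(r.score * r.reviewer_trust for r in self.ratings) / total_trust

# Usage: one student-authored question rated by two peers.
item = LearnersourcedItem(author_id="s1", prompt="Define the generation effect.")
item.ratings.append(PeerRating("s2", score=4.0, reviewer_trust=0.9))
item.ratings.append(PeerRating("s3", score=3.0, reviewer_trust=0.6))
print(round(item.quality_estimate(), 2))  # 3.6
```

Deployed systems in this space rely on far richer signals (trust propagation, spot-checking, model-based raters); the weighted mean above only illustrates where such mechanisms plug in.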
