GPT-3-driven pedagogical agents for training children's curious question-asking skills (2211.14228v6)

Published 25 Nov 2022 in cs.CL and cs.HC

Abstract: In order to train children's ability to ask curiosity-driven questions, previous research has explored designing specific exercises that rely on providing semantic and linguistic cues to help formulate such questions. But despite showing pedagogical efficiency, this method is still limited, as it relies on generating the said cues by hand, which can be a very costly process. In this context, we propose to leverage advances in the field of natural language processing (NLP) and investigate the efficiency of using an LLM for automating the production of the pedagogical content of a curious question-asking (QA) training. We study generating this content using the "prompt-based" method, which consists of explaining the task to the LLM in natural text. We evaluate the output using human expert annotations and comparisons with hand-generated content. Results indeed suggested the relevance and usefulness of this content. We also conduct a field study in a primary school (75 children aged 9-10), where we evaluate children's QA performance after this training. We compare 3 types of content: 1) hand-generated content that proposes "closed" cues leading to predefined questions; 2) GPT-3-generated content that proposes the same type of cues; 3) GPT-3-generated content that proposes "open" cues leading to several possible questions. We see a similar QA performance between the two "closed" trainings (showing the scalability of the approach using GPT-3), and a better one for participants with the "open" training. These results suggest the efficiency of using LLMs to support children in generating more curious questions, using a natural language prompting approach that affords usability by teachers and other users who are not specialists in AI techniques. Furthermore, results also show that open-ended content may be more suitable for training curious question-asking skills.


Summary

  • The paper demonstrates that GPT-3-generated cues can match manually produced prompts in fostering children's question-asking skills.
  • It employs a prompt-based method and a field study with 75 children to compare hand-generated versus GPT-3-generated content and closed versus open cue designs.
  • Results indicate that open GPT-3-generated cues lead to better divergent question-asking performance, pointing to strong potential for LLMs in education.

Overview of GPT-3-driven Pedagogical Agents for Training Children's Curious Question-Asking Skills

The paper "GPT-3-driven pedagogical agents for training children's curious question-asking skills" explores the innovative use of NLP through LLMs in educational contexts. Specifically, it investigates the potential of using a pre-trained model like GPT-3 to automate the generation of semantic and linguistic cues that aid in training children's question-asking (QA) skills, fostering curiosity.

Core Research and Methodology

The researchers designed a study that uses GPT-3 to generate content tailored for a divergent QA educational task. Employing a prompt-based technique, in which the task is explained to the model in natural language, they direct GPT-3 to produce cues that help children formulate curiosity-driven questions. The generated content's effectiveness was evaluated through human expert assessment and comparative analysis with manually created content.
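
The sketch below illustrates this prompt-based generation step. It is a minimal example, assuming the legacy OpenAI Python Completions API (openai<1.0); the instruction text, example passage, and model name are illustrative placeholders rather than the authors' actual prompts or study materials.

```python
# Minimal sketch of prompt-based cue generation, assuming the legacy
# OpenAI Completions API (openai<1.0). The prompt wording, passage, and
# model name are illustrative placeholders, not the authors' materials.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# The task is explained to the model in plain natural language, followed by
# the educational text; the model returns a short cue (question starter plus
# keyword) intended to help a child formulate a curiosity-driven question.
prompt = (
    "You are helping primary-school children ask curious questions.\n"
    "Read the passage and propose a short cue: a question starter and a "
    "keyword a child could use to ask a new question about the passage.\n\n"
    "Passage: Octopuses have three hearts and blue blood.\n"
    "Cue:"
)

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3-family model; the exact engine is an assumption
    prompt=prompt,
    max_tokens=40,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```

Because the prompt is ordinary natural language, a teacher with no AI expertise could edit the passage or the instructions directly, which is the usability argument the abstract makes.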

The paper also reports a field experiment with 75 primary school students aged 9-10. The experiment included three distinct conditions (an illustrative contrast between the closed and open cue formats is sketched after the list):

  1. Manually generated "closed" cues guiding predefined questions.
  2. GPT-3-generated "closed" cues of the same nature.
  3. GPT-3-generated "open" cues prompting multiple potential questions.
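
As an illustration only (the wording below is hypothetical and not drawn from the study materials), a "closed" cue pairs a question starter with a specific keyword that steers the child toward one predefined question, whereas an "open" cue leaves the target underspecified so that several questions remain possible:

```python
# Hypothetical cue structures; the actual study content is not reproduced here.
closed_cue = {
    "starter": "Why",
    "keyword": "blue blood",  # points toward a single predefined question
}

open_cue = {
    "starters": ["Why", "How", "What if"],  # several starters, no fixed keyword,
    "keyword": None,                        # leaving multiple possible questions
}
```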

Human experts annotated and evaluated the quality of the questions produced by children, examining the differences in QA performance across these groups.

Key Findings

A notable observation was the comparable QA performance between children who trained with manually generated "closed" cues and those who trained with GPT-3-generated "closed" cues, indicating that GPT-3 can scale the production of this kind of educational content. Additionally, the paper found that "open" cues led to superior divergent QA performance, presumably because they offer a more conducive environment for fostering curiosity.

Implications and Future Directions

The paper's findings underline the potential efficacy of LLMs like GPT-3 in educational contexts. The approach eases content creation burdens by facilitating the automatic generation of pedagogical prompts and cues, thereby saving time and resources in educational settings.

Practically, the results encourage the integration of AI-driven tools into educational technology, providing an adaptive and supportive learning experience that goes beyond traditional methods. Theoretical implications point to the importance of flexibility in educational prompts—suggesting that open-ended prompting could better stimulate cognitive processes associated with curiosity.

Future research could explore refining AI-generated content for educational use, addressing remaining challenges such as ensuring consistent semantic relevance and designing effective feedback mechanisms delivered directly by LLMs. Further studies may also explore personalized learning trajectories enriched by AI to cater to individual student needs, ultimately enhancing the applicability and effectiveness of AI in educational technology.

Conclusion

This paper offers a sophisticated examination of employing GPT-3 within the field of education to foster curiosity and question-asking skills in children. The authors provide evidence supporting the utility and efficiency of AI-generated pedagogical content, marking a significant stride towards integrating LLMs into learning environments. As AI continues to evolve, exploration into its educational applications remains a promising avenue for advancing both academic methods and outcomes.
