Evaluation of mathematical questioning strategies using data collected through weak supervision (2112.00985v1)
Abstract: A large body of research demonstrates how teachers' questioning strategies can improve student learning outcomes. However, developing new training scenarios is challenging because of the lack of scenario-specific training data and the costs associated with labeling. This paper presents a high-fidelity, AI-based classroom simulator that helps teachers rehearse research-based mathematical questioning skills. Using a human-in-the-loop approach, we collected a high-quality training dataset for a mathematical questioning scenario. Drawing on recent advances in uncertainty quantification, we evaluated our conversational agent for usability and analyzed the practicality of incorporating a human-in-the-loop approach for data collection and system evaluation in this scenario.
- Debajyoti Datta
- Maria Phillips
- James P. Bywater
- Jennifer Chiu
- Ginger S. Watson
- Laura E. Barnes
- Donald E. Brown