Investigating Middle School Students' Question-Asking and Answer-Evaluation Skills When Using ChatGPT for Science Investigation (2505.01106v1)

Published 2 May 2025 in cs.CY

Abstract: Generative AI (GenAI) tools such as ChatGPT allow users, including school students without prior AI expertise, to explore and address a wide range of tasks. Surveys show that most students aged eleven and older already use these tools for school-related activities. However, little is known about how they actually use GenAI and how it impacts their learning. This study addresses this gap by examining middle school students' ability to ask effective questions and critically evaluate ChatGPT responses, two essential skills for active learning and productive interactions with GenAI. 63 students aged 14 to 15 were tasked with solving science investigation problems using ChatGPT. We analyzed their interactions with the model, as well as their resulting learning outcomes. Findings show that students often over-relied on ChatGPT in both the question-asking and answer-evaluation phases. Many struggled to use clear questions aligned with task goals and had difficulty judging the quality of responses or knowing when to seek clarification. As a result, their learning performance remained moderate: their explanations of the scientific concepts tended to be vague, incomplete, or inaccurate, even after unrestricted use of ChatGPT. This pattern held even in domains where students reported strong prior knowledge. Furthermore, students' self-reported understanding and use of ChatGPT were negatively associated with their ability to select effective questions and evaluate responses, suggesting misconceptions about the tool and its limitations. In contrast, higher metacognitive skills were positively linked to better QA-related skills. These findings underscore the need for educational interventions that promote AI literacy and foster question-asking strategies to support effective learning with GenAI.

Summary

Investigating Middle School Students' Question-Asking and Answer-Evaluation Skills When Using ChatGPT for Science Investigation

The paper "Investigating Middle School Students’ Question-Asking and Answer-Evaluation Skills When Using ChatGPT for Science Investigation" addresses a crucial gap in understanding how generative AI tools such as ChatGPT impact middle school students' learning processes. The authors, Rania Abdelghani, Kou Murayama, Celeste Kidd, Hélène Sauzéon, and Pierre-Yves Oudeyer, aim to delineate the ways in which these young learners use ChatGPT to formulate questions and evaluate responses within the context of scientific investigations.

The study involved 63 French middle school students aged 14 to 15, who were tasked with solving science problems using ChatGPT. The primary focus was to assess two core competencies: the ability to pose effective questions and the capacity to critically evaluate AI-generated responses. The results revealed that students often over-relied on ChatGPT and struggled in particular to craft clear, goal-oriented questions. They also had difficulty judging the quality of responses, often accepting vague or incomplete answers without seeking clarification, which led to only moderate learning outcomes.

Key Findings

  1. Question Formulation: Students demonstrated limited ability to formulate clear, context-specific questions. The paper used a d' sensitivity index; mean sensitivity values indicated suboptimal discrimination between efficient and inefficient candidate questions (a sketch of the d' measure follows this list).
  2. Response Evaluation: Students generally showed poor sensitivity to the quality of responses, tending to rate unsatisfactory answers highly. Their ability to discern how informative ChatGPT's answers were was also weak, a problem compounded by the low frequency of follow-up questions.
  3. Misconceptions and Misuse: There was a negative association between students' self-reported understanding and experience with ChatGPT and their ability to select and assess the quality of questions and answers, implying that superficial familiarity may foster misconceptions about the tool's limitations.
  4. Role of Metacognitive Skills: The paper found positive correlations between students' metacognitive skills and their QA-related abilities, suggesting that better metacognitive regulation enhances question-asking and answer-evaluation performance.
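
For context on the sensitivity measure cited above: d' (d-prime) comes from signal detection theory and quantifies how well a rater separates two classes of items, with values near 0 indicating chance-level discrimination. The sketch below assumes that "hits" are efficient questions (or satisfactory answers) the student endorsed and "false alarms" are inefficient ones the student endorsed; it is an illustrative reconstruction, not the paper's exact computation.

```python
# Illustrative d' (sensitivity index) calculation from signal detection theory.
# Assumption: "hits" = efficient items correctly endorsed, "false alarms" =
# inefficient items incorrectly endorsed; the paper's exact operationalization
# and any rate correction it applies may differ from this sketch.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Return d' = Z(hit rate) - Z(false-alarm rate), Z being the inverse normal CDF."""
    # Log-linear correction keeps the z-scores finite when a rate is exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: a student endorses 6 of 8 efficient questions but also 3 of 8 inefficient ones;
# the small positive d' reflects weak but above-chance discrimination.
print(d_prime(hits=6, misses=2, false_alarms=3, correct_rejections=5))
```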

Implications

The implications of this research are multifaceted. Practically, it underscores the necessity for educational interventions that foster AI literacy among students. There is a significant need to educate learners on crafting precise prompts and critically assessing AI-driven responses to enhance cognitive engagement and learning outcomes. Theoretically, the findings present intriguing insights into the intersection of educational psychology and AI, highlighting the cognitive challenges posed by generative AI in pedagogical contexts.

Future Directions

Given the challenges identified, future research should focus on developing structured educational strategies to enhance students' AI literacy and metacognitive skills. Such strategies could include guided practice in formulating specific queries and critical thinking exercises aimed at evaluating informational quality from AI tools. Furthermore, more extensive studies with diverse demographics would be essential to generalize findings and strengthen the understanding of AI's role in educational settings.

In conclusion, this paper presents significant evidence of the nuanced difficulties students face when leveraging generative AI for educational purposes. As these technologies become increasingly prevalent, it is imperative to equip learners with the necessary skills to utilize AI effectively and critically, thus maximizing its educational potential while minimizing dependency and passive learning behaviors.
