Generative AI Takes a Statistics Exam: A Comparison of Performance between ChatGPT3.5, ChatGPT4, and ChatGPT4o-mini (2501.09171v1)

Published 15 Jan 2025 in stat.OT and cs.LG

Abstract: Many believe that use of generative AI as a private tutor has the potential to shrink access and achievement gaps between students and schools with abundant resources versus those with fewer resources. Shrinking the gap is possible only if paid and free versions of the platforms perform with the same accuracy. In this experiment, we investigate the performance of GPT versions 3.5, 4.0, and 4o-mini on the same 16-question statistics exam given to a class of first-year graduate students. While we do not advocate using any generative AI platform to complete an exam, the use of exam questions allows us to explore aspects of ChatGPT's responses to typical questions that students might encounter in a statistics course. Results on accuracy indicate that GPT 3.5 would fail the exam, GPT4 would perform well, and GPT4o-mini would perform somewhere in between. While we acknowledge the existence of other Generative AI/LLMs, our discussion concerns only ChatGPT because it is the most widely used platform on college campuses at this time. We further investigate differences among the AI platforms in the answers for each problem using methods developed for text analytics, such as reading level evaluation and topic modeling. Results indicate that GPT3.5 and 4o-mini have characteristics that are more similar than either of them have with GPT4.

Summary

  • The paper reveals significant performance differences among ChatGPT variants on a graduate-level statistics exam.
  • GPT4 outperforms GPT3.5 by accurately interpreting visual data, while GPT4o-mini shows moderate improvements.
  • Results underscore concerns over educational disparities due to differential access to premium AI tools.

Analyzing the Efficacy of ChatGPT Variants in a Statistics Exam Context

The paper "Generative AI Takes a Statistics Exam: A Comparison of Performance between ChatGPT3.5, ChatGPT4, and ChatGPT4o-mini" by Monnie McGee and Bivin Sadler offers a rigorous analysis of the differential performance of various generative AI models—specifically ChatGPT3.5, ChatGPT4, and the newly introduced ChatGPT4o-mini—in solving a 16-question statistics exam for first-year graduate students. This paper is centered on addressing the potential disparities in educational quality that may arise from the free versus subscription-based access to AI platforms widely used in educational environments.

Key Findings

The paper demonstrates that the performance of these AI models varies significantly depending on the version. GPT3.5 performed inadequately, scoring below passing level on the exam. In stark contrast, GPT4 achieved a high level of accuracy, surpassing GPT3.5 by a considerable margin. GPT4o-mini, which is now available on the free tier, performed moderately, occupying an intermediary position between GPT3.5 and GPT4.

This performance discrepancy was particularly evident in the interpretation of visual data representations such as plots and tables. GPT4 was shown to handle these effectively, while GPT3.5, limited to text-only input, exhibited a marked inability to work with non-textual data. Despite some advances, GPT4o-mini struggled with the same tasks, albeit with more sophistication than GPT3.5. The paper also underscores the improvement in the understanding and explanation of statistical concepts in the more advanced models, with GPT4 demonstrating a more nuanced command of statistical methodology.

Text Analysis Insights

Beyond accuracy, the researchers employed text analytics to evaluate the readability and thematic consistency of the models' responses. The analysis revealed that GPT4o-mini used more complex language than its counterparts, aligning with graduate-level reading proficiency, which could be either a feature or a limitation depending on the user's background. Word frequency analysis further illustrated GPT4o-mini's inclination toward detailed explanation, a trait less pronounced in the more concise responses typical of GPT4.
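
The paper does not reproduce its analysis code, but the kind of readability and word-frequency comparison it describes can be sketched briefly. The sketch below uses the Flesch-Kincaid grade-level formula and a crude syllable heuristic; the metric choice and the toy model responses are illustrative assumptions, not the authors' actual pipeline or data.

```python
import re
from collections import Counter

def count_syllables(word: str) -> int:
    """Rough vowel-group heuristic for syllable counting (adequate for a sketch)."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # drop a silent trailing 'e'
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

def top_words(text: str, k: int = 5) -> list[tuple[str, int]]:
    """Most frequent content words (tiny stop-word list, purely illustrative)."""
    stop = {"the", "a", "an", "of", "to", "is", "and", "in", "that", "we", "for"}
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text) if w.lower() not in stop]
    return Counter(words).most_common(k)

# Hypothetical stand-ins for the three models' answers to one exam question.
responses = {
    "GPT3.5": "The p-value is small. We reject the null hypothesis.",
    "GPT4": "Because the p-value falls below 0.05, we reject the null hypothesis of equal means.",
    "GPT4o-mini": ("The computed p-value is substantially smaller than the conventional 0.05 "
                   "threshold, so we reject the null hypothesis and conclude the means differ."),
}

for model, text in responses.items():
    print(f"{model}: grade {flesch_kincaid_grade(text):.1f}, top words {top_words(text, 3)}")
```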

In terms of thematic content, topic modeling revealed substantial differences among the AI versions. GPT3.5 exhibited a propensity to digress into domain-specific narratives rather than focusing strictly on the statistical task, a tendency not observed in GPT4 or GPT4o-mini. This suggests a progression toward more statistically relevant and focused content in recent iterations of ChatGPT, signifying an evolution in their ability to serve academic needs more effectively.
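
The topic-modeling step is likewise not spelled out here; a minimal sketch using latent Dirichlet allocation from scikit-learn is given below. The toy corpus, the number of topics, and the preprocessing choices are assumptions made for illustration, not details taken from the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical answers; a real analysis would use one document per (model, question) pair.
docs = [
    "reject the null hypothesis since the p-value is below the significance level",
    "the confidence interval for the mean difference does not contain zero",
    "the clinical trial enrolled patients across several hospitals over two years",  # off-topic digression
    "a two-sample t-test compares the means of the treatment and control groups",
]

# Bag-of-words counts; LDA expects raw term counts rather than tf-idf weights.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

# Fit a small LDA model; two topics is an arbitrary choice for this toy corpus.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows: documents, columns: topic proportions

# Print the highest-weight words per topic, e.g. a "statistics" topic versus a "narrative" topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")

# Per-document topic mixtures indicate which answers drift away from statistical content.
print(doc_topics.round(2))
```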

Implications and Future Directions

The implications of this research are manifold, particularly in the pedagogical context. The pronounced variance in AI model performance highlights a critical concern: the potential exacerbation of educational disparities driven by access to premium AI services. This concern prompts a dialogue on policy-driven initiatives to ensure equitable access to high-performance AI tools, which might include institutional AI licensing arrangements or integrated learning systems within classrooms.

Moreover, this paper opens avenues for further exploration of how generative AI can be optimally harnessed in educational settings, particularly in its role as a virtual tutor. Understanding the impact of prompt engineering on AI responses, as well as how AI can dynamically tailor its assistance to individual learners' needs, remains an open field for future empirical research.

As the capabilities of AI continue to evolve, this work provides foundational insights that underscore the necessity for continuous performance and equity assessments across educational technology landscapes. These assessments will be crucial to ensuring that AI can fulfill its intended role as a democratizing force in education without inadvertently reinforcing systemic inequities.
