Evaluating Language Models for Mathematics through Interactions (2306.01694v2)

Published 2 Jun 2023 in cs.LG and cs.HC

Abstract: There is much excitement about the opportunity to harness the power of LLMs when building problem-solving assistants. However, the standard methodology of evaluating LLMs relies on static pairs of inputs and outputs, and is insufficient for making an informed decision about which LLMs, and under which assistive settings, they can be sensibly used. Static assessment fails to account for the essential interactive element in LLM deployment, and therefore limits how we understand LLM capabilities. We introduce CheckMate, an adaptable prototype platform for humans to interact with and evaluate LLMs. We conduct a study with CheckMate to evaluate three LLMs (InstructGPT, ChatGPT, and GPT-4) as assistants in proving undergraduate-level mathematics, with a mixed cohort of participants from undergraduate students to professors of mathematics. We release the resulting interaction and rating dataset, MathConverse. By analysing MathConverse, we derive a taxonomy of human behaviours and uncover that despite a generally positive correlation, there are notable instances of divergence between correctness and perceived helpfulness in LLM generations, amongst other findings. Further, we garner a more granular understanding of GPT-4 mathematical problem-solving through a series of case studies, contributed by expert mathematicians. We conclude with actionable takeaways for ML practitioners and mathematicians: models that communicate uncertainty, respond well to user corrections, and are more interpretable and concise may constitute better assistants. Interactive evaluation is a promising way to navigate the capability of these models; humans should be aware of LLMs' algebraic fallibility and discern where they are appropriate to use.

Evaluating Interactive LLM Performance in Mathematics

CheckMate: Interactive Evaluation Platform

The standard approach to evaluating LLMs for mathematical problem-solving scores static model outputs against a fixed dataset. This disconnected evaluation misrepresents the inherently interactive way LLMs are used as mathematical assistants. The paper introduces CheckMate, an adaptable prototype platform for dynamic human-LLM interaction, applied here to undergraduate-level mathematics problem-solving. CheckMate supports both structured multi-turn evaluations across models and free-form, instance-based evaluations by domain experts, and its adaptability lets the same scaffolding be reused to probe different models and assistive settings through user interaction.
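
As a purely illustrative sketch of what a structured multi-turn evaluation loop of this kind might look like (this is not the CheckMate implementation; the function names, data structures, and rating scales below are assumptions), a session could pair each model reply with per-step ratings:

```python
# Hypothetical sketch of a structured multi-turn evaluation session,
# in the spirit of CheckMate; names and rating scales are illustrative.
from dataclasses import dataclass, field


@dataclass
class InteractionStep:
    user_prompt: str
    model_reply: str
    correctness: int   # assumed 0-6 rating of mathematical correctness
    helpfulness: int   # assumed 0-6 rating of perceived helpfulness


@dataclass
class InteractionTrace:
    problem_id: str
    model_name: str
    steps: list = field(default_factory=list)


def run_session(problem_id, model_name, query_model, get_user_input, get_rating):
    """Collect one multi-turn interaction, rating every model reply as it arrives."""
    trace = InteractionTrace(problem_id, model_name)
    while True:
        prompt = get_user_input()          # participant types a query, or None to stop
        if prompt is None:
            break
        reply = query_model(model_name, prompt)
        step = InteractionStep(
            user_prompt=prompt,
            model_reply=reply,
            correctness=get_rating("How mathematically correct was this response?"),
            helpfulness=get_rating("How helpful was this response?"),
        )
        trace.steps.append(step)
    return trace
```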

Insights from Structured Evaluation

CheckMate was deployed in a study of how participants, ranging from undergraduate students to mathematics professors, use LLMs as assistants for proving undergraduate-level mathematics. The resulting dataset, MathConverse, comprises 261 human-model interactions and indicates that the conversationally tuned models, ChatGPT and GPT-4, are preferred over InstructGPT on participants' ratings. Participants rated each model response for mathematical correctness and perceived helpfulness; the two ratings are positively correlated in aggregate, yet notable divergences remain, with some incorrect responses still judged helpful. From MathConverse the authors also derive a preliminary taxonomy of user behaviors, revealing patterns such as definition-seeking and attempts to correct the model during solution discussions.
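
One simple way to probe the correctness/helpfulness relationship described above on MathConverse-style ratings is a rank correlation over per-response scores, together with a scan for divergent cases. The snippet below is an illustrative sketch using toy data and an assumed 0-6 rating scale, not the paper's analysis pipeline:

```python
# Illustrative analysis sketch: rank correlation between correctness and
# perceived helpfulness across rated responses (toy data, assumed 0-6 scale).
from scipy.stats import spearmanr

# Each tuple: (correctness rating, helpfulness rating) for one rated response.
ratings = [(6, 6), (5, 6), (2, 4), (6, 5), (0, 3), (4, 4), (1, 5), (6, 6)]

correctness = [c for c, _ in ratings]
helpfulness = [h for _, h in ratings]

rho, p_value = spearmanr(correctness, helpfulness)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

# Flag divergent cases: responses judged fairly helpful despite low correctness.
divergent = [(c, h) for c, h in ratings if c <= 2 and h >= 4]
print(f"{len(divergent)} responses rated helpful but largely incorrect")
```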

Findings from Expert Case Studies

Complementing the structured CheckMate interactions, free-form case studies contributed by expert mathematicians offer a more nuanced view of model behavior. Although GPT-4 performs best among the models evaluated, closer inspection reveals its limitations, including fragile algebraic manipulation and difficulty incorporating user corrections. The experts highlight the model's tendency to fall back on pattern matching rather than strategic reasoning, suggesting memorization artifacts rather than conceptual understanding. The case studies underscore the need for human scrutiny in error detection, since mistakes can be subtle enough to go unnoticed.

Considerations and Recommendations for Model Deployment

From these interactive assessments the researchers draw broader implications for deploying LLMs in mathematical contexts. While the models show some capacity to aid problem-solving, users must verify model outputs diligently, particularly for algebra-related tasks, where the models remain fallible. The experts caution against over-reliance and suggest reserving the models for tasks such as retrieving definitions, where they tend to perform more reliably. The paper encourages developers to prioritize models that communicate uncertainty, respond well to user corrections, and deliver interpretable, concise responses. Together, CheckMate and the expert case studies provide grounding for future interactive LLM evaluations and for practical deployment in mathematical collaboration.

Authors (14)
  1. Katherine M. Collins (32 papers)
  2. Albert Q. Jiang (12 papers)
  3. Simon Frieder (11 papers)
  4. Lionel Wong (16 papers)
  5. Miri Zilka (9 papers)
  6. Umang Bhatt (42 papers)
  7. Thomas Lukasiewicz (125 papers)
  8. Yuhuai Wu (49 papers)
  9. Joshua B. Tenenbaum (257 papers)
  10. William Hart (6 papers)
  11. Timothy Gowers (2 papers)
  12. Wenda Li (34 papers)
  13. Adrian Weller (150 papers)
  14. Mateja Jamnik (57 papers)
Citations (36)