
How well do Large Language Models perform in Arithmetic tasks? (2304.02015v1)

Published 16 Mar 2023 in cs.CL and cs.AI

Abstract: LLMs have exhibited emergent abilities, including chain-of-thought reasoning, for answering math word problems step by step. Solving math word problems requires not only the ability to decompose problems via chain-of-thought but also the ability to calculate arithmetic expressions correctly at each step. To the best of our knowledge, no prior work focuses on evaluating the arithmetic ability of LLMs. In this work, we propose an arithmetic dataset, MATH 401, to test the latest LLMs, including GPT-4, ChatGPT, InstructGPT, Galactica, and LLaMA, with various arithmetic expressions, and provide a detailed analysis of the ability of LLMs. MATH 401 and evaluation codes are released at \url{https://github.com/GanjinZero/math401-LLM}.

Performance of LLMs in Arithmetic Tasks: An Analysis

The paper "How well do LLMs perform in Arithmetic tasks?" by Zheng Yuan et al. examines the arithmetic capabilities of various state-of-the-art LLMs. Recognizing that solving arithmetic problems is critical for successfully answering math word problems, the researchers introduce a comprehensive dataset named MATH 401. This dataset evaluates models across a range of arithmetic operators and numeric types, shedding light on LLMs' numerical computation abilities.

The researchers evaluate prominent LLMs, including GPT-4, ChatGPT, InstructGPT, Galactica, and LLaMA, using MATH 401, which presents arithmetic challenges ranging from basic operations such as addition and subtraction to more complex tasks such as exponentiation, trigonometric functions, and logarithms. GPT-4 and ChatGPT achieved the best results, outperforming the other models by substantial margins in accuracy across these tasks.
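To make the task concrete, the sketch below shows the kinds of arithmetic queries described, with ground truth computed in Python. These are made-up expressions in the style of the dataset, not actual MATH 401 items.

```python
# Illustrative examples of the kinds of arithmetic queries described
# above, with ground truth computed in Python. These are invented
# expressions in the style of the dataset, not actual MATH 401 items.
import math

examples = {
    "347 + 589": 347 + 589,                 # easy integer addition
    "7234 * 8821": 7234 * 8821,             # large-number multiplication
    "2 ** 10": 2 ** 10,                     # exponentiation
    "sin(pi / 6)": math.sin(math.pi / 6),   # trigonometric function
    "log(e ** 3)": math.log(math.e ** 3),   # natural logarithm
}

for expr, truth in examples.items():
    print(f"{expr} = {truth}")
```

A model is then prompted with the expression and its free-text answer is compared against the computed ground truth.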

Dataset and Evaluation

The dataset spans varying difficulty levels, from simple operations such as addition of small integers to complex calculations involving irrational numbers and logarithmic functions. Accuracy was determined by comparing each model's output to the target solution within a specified tolerance.
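A tolerance-based accuracy check of this kind can be sketched as follows. The exact tolerance and answer-extraction logic used for MATH 401 may differ; a relative tolerance of 1e-3 is an assumption for illustration.

```python
# Sketch of a tolerance-based accuracy check for arithmetic answers.
# The tolerance value (1e-3 relative error) is an assumption for
# illustration; the paper's exact evaluation settings may differ.
import math

def is_correct(model_output: str, target: float, rel_tol: float = 1e-3) -> bool:
    """Return True if the model's answer matches the target within tolerance."""
    try:
        predicted = float(model_output.strip())
    except ValueError:
        return False  # non-numeric output counts as incorrect
    if target == 0:
        return abs(predicted) <= rel_tol
    return math.isclose(predicted, target, rel_tol=rel_tol)

def accuracy(outputs: list[str], targets: list[float]) -> float:
    """Fraction of model outputs matching their targets."""
    hits = sum(is_correct(o, t) for o, t in zip(outputs, targets))
    return hits / len(targets)
```

For example, `is_correct("3.1416", math.pi)` holds because the relative error is well under 1e-3, while a malformed or non-numeric output is simply scored as wrong.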

GPT-4 notably excelled, achieving the highest scores across all groups and demonstrating well-rounded proficiency over a diverse range of arithmetic expressions. ChatGPT followed closely, showing strong capability on long arithmetic expressions and computations involving irrational numbers, although it struggled with large-number multiplication and certain functions.

Key Findings

Results revealed several factors that impact LLM arithmetic performance:

  1. Tokenization: Digit-level tokenization, as employed by Galactica and LLaMA, contributed to their arithmetic performance. Splitting numbers into individual digits redistributes token frequency, which can benefit arithmetic understanding.
  2. Training Corpus: A diverse pre-training corpus enriched with code and mathematical data (e.g., LaTeX sources) significantly boosts arithmetic skills. Galactica's arithmetic success can be partially attributed to its extensive LaTeX pre-training, while code-focused models such as Code-davinci-002 showed moderate arithmetic prowess.
  3. Instruction Fine-tuning and RLHF: Fine-tuning on instructional data improves performance; RLHF-enhanced models such as text-davinci-003 outperformed their non-RLHF counterparts.
  4. Prompting Strategy: Structured prompts proved critical; well-chosen prompts significantly improved model outputs, underscoring the importance of careful prompt engineering.
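The tokenization effect in point 1 can be illustrated with a toy comparison: a subword tokenizer may split the same number into unpredictable chunks depending on its merge vocabulary, while digit-level tokenization always yields one token per digit. This is a hypothetical sketch, not the actual Galactica or LLaMA tokenizers.

```python
# Toy illustration of digit-level tokenization versus a naive
# subword-style greedy split. Hypothetical sketch only; the real
# Galactica and LLaMA tokenizers are more sophisticated.

def digit_level_tokenize(number: str) -> list[str]:
    """Split a number into one token per character (digit)."""
    return list(number)

def naive_subword_tokenize(number: str, vocab: set[str]) -> list[str]:
    """Greedy longest-match split against a fixed subword vocabulary."""
    tokens, i = [], 0
    while i < len(number):
        for j in range(len(number), i, -1):
            if number[i:j] in vocab or j == i + 1:
                tokens.append(number[i:j])
                i = j
                break
    return tokens

vocab = {"12", "345", "3", "4", "5"}  # hypothetical merged subwords
print(digit_level_tokenize("12345"))           # ['1', '2', '3', '4', '5']
print(naive_subword_tokenize("12345", vocab))  # ['12', '345']
```

The subword split groups digits arbitrarily (`12` | `345`), so the model must learn arithmetic over inconsistent chunks; digit-level splitting gives every digit a uniform, high-frequency token.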

Scaling and Model Size

Analysis of different model sizes showed that increased parameter count generally correlates with improved arithmetic ability, but with diminishing returns beyond a certain threshold (around 30 billion parameters). This suggests that scaling alone may be insufficient for substantial arithmetic gains beyond this scale, especially given ChatGPT's strong performance despite its undisclosed parameter count.

Conclusion and Implications

The paper provides substantial insights into the arithmetic capabilities of LLMs, a foundational aspect for solving more intricate math problems. This evaluation identifies areas where LLMs excel and where they struggle, guiding future research to enhance these models further.

Future research should explore additional mathematical domains like calculus and algebra, leveraging the understanding gleaned from arithmetic evaluations. The integration of arithmetic skills with symbolic reasoning could enhance LLMs' applicability in more sophisticated scientific and mathematical contexts. There is a need for continued exploration of prompt engineering techniques and instructional data to bolster LLMs' operational efficiency across various domains. This paper sets a foundational benchmark for assessing LLMs' numerical skills, shaping ongoing research in LLM development and fine-tuning methodologies.

Authors (5)
  1. Zheng Yuan
  2. Hongyi Yuan
  3. Chuanqi Tan
  4. Wei Wang
  5. Songfang Huang
Citations (99)