NumGPT: Improving Numeracy Ability of Generative Pre-trained Models (2109.03137v2)

Published 7 Sep 2021 in cs.CL and cs.LG

Abstract: Existing generative pre-trained LLMs (e.g., GPT) focus on modeling the language structure and semantics of general texts. However, those models do not consider the numerical properties of numbers and cannot perform robustly on numerical reasoning tasks (e.g., math word problems and measurement estimation). In this paper, we propose NumGPT, a generative pre-trained model that explicitly models the numerical properties of numbers in texts. Specifically, it leverages a prototype-based numeral embedding to encode the mantissa of the number and an individual embedding to encode the exponent of the number. A numeral-aware loss function is designed to integrate numerals into the pre-training objective of NumGPT. We conduct extensive experiments on four different datasets to evaluate the numeracy ability of NumGPT. The experiment results show that NumGPT outperforms baseline models (e.g., GPT and GPT with DICE) on a range of numerical reasoning tasks such as measurement estimation, number comparison, math word problems, and magnitude classification. Ablation studies are also conducted to evaluate the impact of pre-training and model hyperparameters on the performance.
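The abstract describes encoding each numeral by splitting it into a mantissa (embedded via soft assignment to learned prototypes) and an exponent (embedded via a separate lookup table). The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the class name, the prototype spacing, the Gaussian-kernel soft assignment, and the exponent range are assumptions made for illustration and are not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class NumeralEmbedding(nn.Module):
    """Illustrative sketch (not the authors' exact method): embed a number by
    encoding its mantissa with prototype-based soft assignments and its
    exponent with a separate embedding table."""

    def __init__(self, dim: int, num_prototypes: int = 10,
                 min_exp: int = -10, max_exp: int = 10, sigma: float = 0.5):
        super().__init__()
        self.sigma = sigma
        self.min_exp = min_exp
        # Learnable prototype values spread over the mantissa range [1, 10).
        self.prototypes = nn.Parameter(torch.linspace(1.0, 9.5, num_prototypes))
        # Each prototype owns an embedding vector; the mantissa embedding is a
        # similarity-weighted mixture of them.
        self.proto_emb = nn.Parameter(torch.randn(num_prototypes, dim))
        # The integer exponent gets its own embedding table.
        self.exp_emb = nn.Embedding(max_exp - min_exp + 1, dim)

    def forward(self, values: torch.Tensor) -> torch.Tensor:
        # Split each value v into mantissa m in [1, 10) and integer exponent e
        # so that v = m * 10**e (sign handling omitted for brevity).
        exps = torch.floor(torch.log10(values.abs().clamp(min=1e-12)))
        mants = values.abs() / torch.pow(10.0, exps)
        # Soft-assign the mantissa to prototypes with a Gaussian kernel.
        dists = (mants.unsqueeze(-1) - self.prototypes) ** 2
        weights = torch.softmax(-dists / (2 * self.sigma ** 2), dim=-1)
        mant_vec = weights @ self.proto_emb
        # Clamp the exponent into the supported range and look it up.
        exp_idx = (exps.long() - self.min_exp).clamp(0, self.exp_emb.num_embeddings - 1)
        return mant_vec + self.exp_emb(exp_idx)

emb = NumeralEmbedding(dim=64)
print(emb(torch.tensor([3.2, 4500.0, 0.07])).shape)  # torch.Size([3, 64])
```

Separating mantissa and exponent lets numbers of very different magnitudes share the same mantissa prototypes, while the exponent embedding captures scale; this is one plausible reading of the abstract's description.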

Authors (7)
  1. Zhihua Jin (13 papers)
  2. Xin Jiang (243 papers)
  3. Xingbo Wang (33 papers)
  4. Qun Liu (231 papers)
  5. Yong Wang (498 papers)
  6. Xiaozhe Ren (21 papers)
  7. Huamin Qu (141 papers)
Citations (20)
