Limitations of Language Models in Arithmetic and Symbolic Induction (2208.05051v1)

Published 9 Aug 2022 in cs.CL

Abstract: Recent work has shown that large pretrained language models (LMs) can not only perform remarkably well on a range of NLP tasks but also start improving on reasoning tasks such as arithmetic induction, symbolic manipulation, and commonsense reasoning with increasing model size. However, it is still unclear what the underlying capabilities of these LMs are. Surprisingly, we find that these models have limitations on certain basic symbolic manipulation tasks such as copy, reverse, and addition. When the total number of symbols or repeating symbols increases, the model performance drops quickly. We investigate the potential causes behind this phenomenon and examine a set of possible methods, including explicit positional markers, fine-grained computation steps, and LMs with callable programs. Experimental results show that none of these techniques can solve the simplest addition induction problem completely. In the end, we introduce LMs with tutor, which demonstrates every single step of teaching. LMs with tutor is able to deliver 100% accuracy in out-of-distribution (OOD) and repeating-symbol situations, shedding new light on the boundary of large LMs in induction.
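For concreteness, the "explicit positional markers" mitigation mentioned in the abstract can be illustrated with a short sketch: each digit of an addition problem is tagged with its place value so the model does not have to infer digit alignment on its own. This is a minimal, assumption-laden illustration, not the authors' implementation; the marker format `p{i}:{d}` and the function names are hypothetical.

```python
# Minimal sketch (not the paper's code) of explicit positional markers
# for an addition prompt. The `p{i}:{d}` format is a hypothetical choice.

def with_positional_markers(number: int) -> str:
    """Annotate each digit with its position, least significant digit first."""
    digits = str(number)
    return " ".join(f"p{i}:{d}" for i, d in enumerate(reversed(digits)))

def addition_prompt(a: int, b: int) -> str:
    """Build a marked-up addition prompt for the two operands."""
    return f"{with_positional_markers(a)} + {with_positional_markers(b)} ="

if __name__ == "__main__":
    # 125 + 89 -> "p0:5 p1:2 p2:1 + p0:9 p1:8 ="
    print(addition_prompt(125, 89))
```

Marking positions this way makes digit alignment explicit in the input; the paper's finding is that even aids like this do not fully solve the addition induction problem.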

Authors (5)
  1. Jing Qian (81 papers)
  2. Hong Wang (254 papers)
  3. Zekun Li (73 papers)
  4. Shiyang Li (24 papers)
  5. Xifeng Yan (52 papers)
Citations (62)