Pretrained Language Models are Symbolic Mathematics Solvers too! (2110.03501v3)

Published 7 Oct 2021 in stat.ML, cs.CL, and cs.LG

Abstract: Solving symbolic mathematics has always been in the arena of human ingenuity, requiring compositional reasoning and recurrence. However, recent studies have shown that large-scale language models such as transformers are universal and, surprisingly, can be trained on a sequence-to-sequence task to solve complex mathematical equations. These large transformer models need enormous amounts of training data to generalize to unseen symbolic mathematics problems. In this paper, we present a sample-efficient way of solving symbolic tasks by first pretraining the transformer model on language translation and then fine-tuning the pretrained model on the downstream task of symbolic mathematics. We achieve accuracy on the integration task comparable to the state-of-the-art deep learning approach for symbolic mathematics while using about $1.5$ orders of magnitude fewer training samples. Test accuracy on differential equation tasks is considerably lower than on integration, as these tasks require higher-order recursions that are not present in language translation. We explain the generalizability of our pretrained language model through the Anna Karenina Principle (AKP). We pretrain our model on different pairs of language translations, and our results show a language bias in solving symbolic mathematics tasks. Finally, we study the robustness of the fine-tuned model on symbolic math tasks against distribution shift; our approach generalizes better under distribution shift for function integration.
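
The recipe the abstract describes, taking a transformer pretrained on a language-translation pair and fine-tuning it sequence-to-sequence on (expression, antiderivative) pairs, can be sketched with standard tooling. The snippet below is a minimal illustration, not the authors' code: the translation checkpoint, the toy prefix-notation pairs, and the hyperparameters are all assumptions made for the example.

```python
# Minimal sketch (assumptions, not the paper's implementation): fine-tune a
# translation-pretrained seq2seq transformer on symbolic integration, treating
# "function -> integral" as just another translation task.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-en-fr"  # any translation-pretrained seq2seq checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy pairs in prefix notation (as in Lample & Charton's symbolic math datasets):
# source = function, target = its integral.
pairs = [
    ("mul 2 x", "pow x 2"),  # ∫ 2x dx = x^2
    ("cos x", "sin x"),      # ∫ cos x dx = sin x
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for src, tgt in pairs:
    batch = tokenizer(src, text_target=tgt, return_tensors="pt")
    loss = model(**batch).loss  # cross-entropy over target tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: decode the model's proposed antiderivative for a new expression.
model.eval()
ids = tokenizer("sin x", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In practice the paper's setup would use far more pairs and a tokenizer vocabulary suited to math symbols; the point of the sketch is only that the fine-tuning loop is the ordinary seq2seq one, with the translation pretraining supplying the initialization.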

Authors (5)
  1. Kimia Noorbakhsh (6 papers)
  2. Modar Sulaiman (5 papers)
  3. Mahdi Sharifi (3 papers)
  4. Kallol Roy (9 papers)
  5. Pooyan Jamshidi (53 papers)
Citations (17)
