
Positional Description Matters for Transformers Arithmetic (2311.14737v1)

Published 22 Nov 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Transformers, central to the successes in modern Natural Language Processing, often falter on arithmetic tasks despite their vast capabilities --which paradoxically include remarkable coding abilities. We observe that a crucial challenge is their naive reliance on positional information to solve arithmetic problems with a small number of digits, leading to poor performance on larger numbers. Herein, we delve deeper into the role of positional encoding, and propose several ways to fix the issue, either by modifying the positional encoding directly, or by modifying the representation of the arithmetic task to leverage standard positional encoding differently. We investigate the value of these modifications for three tasks: (i) classical multiplication, (ii) length extrapolation in addition, and (iii) addition in natural language context. For (i) we train a small model on a small dataset (100M parameters and 300k samples) with remarkable aptitude in (direct, no scratchpad) 15 digits multiplication and essentially perfect up to 12 digits, while usual training in this context would give a model failing at 4 digits multiplication. In the experiments on addition, we use a mere 120k samples to demonstrate: for (ii) extrapolation from 10 digits to testing on 12 digits numbers while usual training would have no extrapolation, and for (iii) almost perfect accuracy up to 5 digits while usual training would be correct only up to 3 digits (which is essentially memorization with a training set of 120k samples).

The central focus of "Positional Description Matters for Transformers Arithmetic" (Shen et al., 2023) is the impact of positional encoding on the performance of Transformers in arithmetic tasks. The paper identifies that standard positional encodings significantly limit the ability of Transformer models to handle arithmetic problems as the number of digits increases.

Key Points on Positional Encoding in Arithmetic Tasks

  1. Positional Challenge: Transformers struggle with arithmetic tasks, particularly with large numbers, because they rely naively on positional information. Standard training fails to generalize beyond a small number of digits (breaking down already at 4-digit multiplication), whereas the proposed modifications enable essentially perfect multiplication up to 12 digits and strong accuracy at 15 digits.
  2. Proposed Modifications: The paper suggests two potential solutions:
    • Modifying Positional Encoding: adjusting how positional information is integrated into the model.
    • Altered Task Representation: redefining how arithmetic tasks are encoded to exploit standard positional encoding more effectively, for example through different surface forms or intermediate-step encoding (a minimal sketch of one such surface form follows this list).
  3. Experimental Results:
    • Multiplication: A 100M-parameter model trained on only 300k samples reached essentially perfect accuracy up to 12-digit multiplication and remarkable accuracy at 15 digits (direct, with no scratchpad), while standard training fails already at 4-digit multiplication.
    • Addition: With only 120k samples, the model extrapolated from 10-digit training numbers to 12-digit test numbers, and reached almost perfect accuracy up to 5 digits for addition in natural-language context, whereas standard training shows no extrapolation and is correct only up to 3 digits.
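
To make the altered task representation concrete, here is a minimal Python sketch of two illustrative surface-form changes: tagging each digit with an explicit position token (in the spirit of the position-token idea of Nogueira et al., 2021) and emitting the answer least-significant digit first. The token names (D0, D1, ...) and the exact format are hypothetical illustrations, not the paper's encoding.

```python
def encode_number(n: int) -> str:
    """Tag each digit with an explicit position token (units = D0, tens = D1, ...).

    Example: 127 -> "D2 1 D1 2 D0 7"
    """
    digits = str(n)
    k = len(digits)
    return " ".join(f"D{k - 1 - i} {d}" for i, d in enumerate(digits))


def make_addition_example(a: int, b: int) -> str:
    """Build one training string for a + b.

    The answer is written least-significant digit first, so the model can
    emit each digit in the same order the carries are computed.
    """
    answer = str(a + b)[::-1]  # reversed surface form of the result
    return f"{encode_number(a)} + {encode_number(b)} = {' '.join(answer)}"


if __name__ == "__main__":
    print(make_addition_example(127, 45))
    # D2 1 D1 2 D0 7 + D1 4 D0 5 = 2 7 1
```

The point of such a representation is that a digit's significance is spelled out in the tokens themselves, so the model no longer has to infer it from the digit's absolute position in the sequence.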

Additional Insights from Related Works

  • Rotary Position Embedding (RoPE) (Su et al., 2021): Introduces a rotation-matrix-based method for positional encoding, showing enhanced performance on long-text classification. Though not explicitly arithmetic-focused, it offers a method to handle positional information flexibly, which might benefit arithmetic tasks indirectly (a minimal sketch of the rotation appears after this list).
  • Conditional Positional Encoding (CPE) (Chu et al., 2021): Dynamically generates encodings based on the input neighborhood, improving generalization and translation invariance. This adaptability might help Transformers tackle larger arithmetic problems by providing a more context-aware positional understanding.
  • Surface Form Representation (Nogueira et al., 2021): Demonstrates the influence of number representation on arithmetic task performance, highlighting that different encodings (position tokens) aid in learning addition and subtraction tasks efficiently.
  • Length Generalization Challenges (Kazemnejad et al., 2023; Lee et al., 2023): Studies on length generalization show that typical positional encodings like ALiBi, Rotary, and Absolute Position Embedding (APE) are not well-suited for longer sequences. They suggest that alternative encoding methodologies, or even no explicit positional encoding, might offer better results for arithmetic extrapolation tasks.
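
For readers unfamiliar with RoPE, the NumPy sketch below shows the rotation it applies to query and key vectors before the attention dot product. The formulation (pairwise channel rotation with base 10000) follows Su et al. (2021); the standalone function and tensor shapes are illustrative assumptions rather than any particular library's API.

```python
import numpy as np


def rotary_embedding(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embedding to x of shape (seq_len, dim).

    Each pair of channels (2i, 2i+1) at position p is rotated by the angle
    p * base**(-2i/dim), so relative offsets show up as phase differences
    in the query-key dot product.
    """
    x = np.asarray(x, dtype=np.float64)
    seq_len, dim = x.shape
    assert dim % 2 == 0, "RoPE expects an even feature dimension"

    positions = np.arange(seq_len)[:, None]            # (seq_len, 1)
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # (dim/2,)
    angles = positions * inv_freq                      # (seq_len, dim/2)

    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                    # even / odd channels
    rotated = np.empty_like(x)
    rotated[:, 0::2] = x1 * cos - x2 * sin             # 2-D rotation of each pair
    rotated[:, 1::2] = x1 * sin + x2 * cos
    return rotated


if __name__ == "__main__":
    q = np.random.randn(8, 16)                         # illustrative (seq_len, head_dim)
    k = np.random.randn(8, 16)
    scores = rotary_embedding(q) @ rotary_embedding(k).T / np.sqrt(16)
    print(scores.shape)                                # (8, 8)
```

Because the rotation angle depends only on absolute position while the dot product between two rotated vectors depends only on their relative offset, RoPE encodes relative positions implicitly, which is exactly the kind of property the length-generalization studies above examine.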

Conclusion

Positional encoding is crucial for the effective performance of Transformers on arithmetic tasks. The paper "Positional Description Matters for Transformers Arithmetic" highlights significant improvements by adjusting positional encodings and task representation. Insights from related research suggest various enhancements to traditional positional encoding methods that could further help Transformers generalize arithmetic operations over larger numbers.

References (25)
  1. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712.
  2. Charton, F. (2021). Linear algebra with transformers. arXiv preprint arXiv:2112.01898.
  3. Charton, F. (2022). What is my math transformer doing?–three results on interpretability and generalization. arXiv preprint arXiv:2211.00170.
  4. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
  5. The neural data router: Adaptive control flow in transformers improves systematic generalization. arXiv preprint arXiv:2110.07732.
  6. Faith and fate: Limits of transformers on compositionality. arXiv preprint arXiv:2305.18654.
  7. How does gpt-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model. arXiv preprint arXiv:2305.00586.
  8. Length generalization in arithmetic transformers. arXiv preprint arXiv:2306.15400.
  9. The impact of positional encoding on length generalization in transformers. arXiv preprint arXiv:2305.19466.
  10. Shape: Shifted absolute position embedding for transformers. arXiv preprint arXiv:2109.05644.
  11. Teaching arithmetic to small transformers. arXiv preprint arXiv:2307.03381.
  12. Systematic generalization and emergent structures in transformers trained on structured tasks. arXiv preprint arXiv:2210.00400.
  13. Let’s verify step by step. arXiv preprint arXiv:2305.20050.
  14. Investigating the limitations of transformers with simple arithmetic tasks. arXiv preprint arXiv:2102.13019.
  15. OpenAI (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.
  16. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409.
  17. Limitations of language models in arithmetic and symbolic induction. arXiv preprint arXiv:2208.05051.
  18. Randomized positional encodings boost length generalization of transformers. arXiv preprint arXiv:2305.16843.
  19. Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864.
  20. Testolin, A. (2023). Can neural networks do arithmetic? a survey on the elementary numerical skills of state-of-the-art deep learning models. arXiv preprint arXiv:2303.07735.
  21. Solving math word problems with process-and outcome-based feedback. arXiv preprint arXiv:2211.14275.
  22. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.
  23. Gpt can solve mathematical problems without a calculator. arXiv preprint arXiv:2309.03241.
  24. Unveiling transformers with lego: a synthetic reasoning task. arXiv preprint arXiv:2206.04301.
  25. Teaching algorithmic reasoning via in-context learning. arXiv preprint arXiv:2211.09066.
Authors (6)
  1. Ruoqi Shen
  2. Sébastien Bubeck
  3. Ronen Eldan
  4. Yin Tat Lee
  5. Yuanzhi Li
  6. Yi Zhang