Pre-trained Large Language Models Use Fourier Features to Compute Addition (2406.03445v1)

Published 5 Jun 2024 in cs.LG and cs.CL

Abstract: Pre-trained LLMs exhibit impressive mathematical reasoning capabilities, yet how they compute basic arithmetic, such as addition, remains unclear. This paper shows that pre-trained LLMs add numbers using Fourier features -- dimensions in the hidden state that represent numbers via a set of features sparse in the frequency domain. Within the model, MLP and attention layers use Fourier features in complementary ways: MLP layers primarily approximate the magnitude of the answer using low-frequency features, while attention layers primarily perform modular addition (e.g., computing whether the answer is even or odd) using high-frequency features. Pre-training is crucial for this mechanism: models trained from scratch to add numbers only exploit low-frequency features, leading to lower accuracy. Introducing pre-trained token embeddings to a randomly initialized model rescues its performance. Overall, our analysis demonstrates that appropriate pre-trained representations (e.g., Fourier features) can unlock the ability of Transformers to learn precise mechanisms for algorithmic tasks.

Authors (4)
  1. Tianyi Zhou (172 papers)
  2. Deqing Fu (14 papers)
  3. Vatsal Sharan (39 papers)
  4. Robin Jia (59 papers)
Citations (4)

Summary

  • The paper demonstrates that LLMs compute addition by leveraging low-frequency features for magnitude estimation and high-frequency features for modular classification.
  • The paper employs Fourier analysis and Logit Lens on fine-tuned GPT-2-XL models to uncover the distinct computational roles within MLP and attention layers.
  • The paper shows that filtering out key Fourier components drastically degrades performance, emphasizing the importance of pre-training for embedding effective inductive biases.

Pre-trained LLMs Use Fourier Features to Compute Addition

The paper "Pre-trained LLMs Use Fourier Features to Compute Addition," authored by Tianyi Zhou, Deqing Fu, Vatsal Sharan, and Robin Jia, explores the intricate mechanisms that allow pre-trained LLMs to perform arithmetic tasks, with a specific focus on addition. The paper reveals that these models utilize Fourier features embedded in their hidden states for arithmetic operations. This essay will provide an in-depth overview of the paper’s findings, the methodologies employed, and the implications for future AI research.

Overview of Insights and Methodologies

The principal finding of this work is that pre-trained LLMs compute addition using Fourier features: dimensions in the hidden states that represent numbers through a set of features that is sparse in the frequency domain. The paper identifies two primary ways these features are used within the model (illustrated in the sketch after this list):

  1. Magnitude Approximation: MLP layers primarily employ low-frequency features to approximate the magnitude of the sum.
  2. Modular Classification: Attention layers perform modular addition through high-frequency features, such as determining the parity of the result.
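
To make the notion of Fourier features concrete, here is a minimal numpy sketch. It is not the paper's code, and the periods chosen are purely illustrative; it only shows how a handful of sinusoidal features can carry a number's parity through a high-frequency component and a slowly varying magnitude signal through a low-frequency component.

```python
import numpy as np

# Illustrative only: represent an integer with a few sinusoidal features that
# are sparse in the frequency domain. The periods (2, 5, 10, 100) are not
# taken from the paper; they just span high to low frequencies.
def fourier_features(n, periods=(2, 5, 10, 100)):
    feats = []
    for T in periods:
        feats.append(np.cos(2 * np.pi * n / T))
        feats.append(np.sin(2 * np.pi * n / T))
    return np.array(feats)

n = 1234
f = fourier_features(n)
# The period-2 (high-frequency) component cos(pi * n) encodes parity exactly:
# +1 for even n, -1 for odd n.
print("parity feature:", round(f[0]))      # -> 1, because 1234 is even
# The period-100 (low-frequency) components vary slowly with n, so they are
# better suited to approximating magnitude than to exact modular information.
print("low-frequency features:", f[-2:])
```

In the paper's setting these features live inside the model's hidden states rather than being hand-constructed, but the division of labor is analogous: low-frequency components locate the answer's rough magnitude, while high-frequency components pin down its residue modulo small numbers.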

To elucidate these findings, the authors conduct a Fourier analysis on the intermediate states of fine-tuned GPT-2-XL models. They employ techniques such as Logit Lens to extract and observe intermediate predictions at various model layers, allowing them to capture the progression of the model's computations.
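
The Logit Lens idea can be sketched as follows with the Hugging Face transformers API: each layer's residual-stream state is decoded with the model's own final layer norm and unembedding matrix, so intermediate predictions at the answer position can be inspected. This is a hedged reconstruction, not the authors' code, and the prompt format is an illustrative assumption.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
tok = GPT2Tokenizer.from_pretrained("gpt2-xl")
model.eval()

prompt = "15 plus 93 is"      # illustrative prompt, not necessarily the paper's format
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states holds the embedding output plus one state per layer.
for layer, h in enumerate(out.hidden_states):
    # Logit Lens: decode the last position with the final layer norm + unembedding.
    logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
    print(f"layer {layer:2d} -> top token: {tok.decode(logits.argmax(-1))!r}")
```

Tracking the top prediction layer by layer in this way is what lets the authors observe the answer being progressively refined across the network.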

Experimental Findings and Numerical Results

The experiments highlight the effectiveness of this computational approach. Models pre-trained on natural language and then fine-tuned for addition reach nearly perfect accuracy, progressively refining their predictions layer by layer. They use different Fourier components for distinct sub-tasks: approximating the answer's magnitude and determining its residue modulo small numbers (e.g., its parity), both of which feed into the final prediction. A Fourier-basis decomposition shows that low-frequency components dominate the MLP outputs, while high-frequency components are prominent in the attention outputs.
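
The following sketch uses fabricated stand-in signals (not actual model outputs) to show the kind of Fourier-basis decomposition being described: a low-frequency, magnitude-style contribution of the sort attributed to MLP layers versus a high-frequency, parity-carrying contribution of the sort attributed to attention layers, with the dominant frequency of each read off an FFT over number-token logits.

```python
import numpy as np

N = 200                       # number of consecutive number tokens considered
ns = np.arange(N)
answer = 108

# Stand-in for an MLP layer's logit-lens contribution: a broad, low-frequency
# bump that peaks near the answer's magnitude.
mlp_logits = np.cos(2 * np.pi * (ns - answer) / N)
# Stand-in for an attention layer's contribution: a high-frequency signal that
# favours numbers sharing the answer's parity.
attn_logits = np.cos(np.pi * (ns - answer))

def dominant_frequency(x):
    """Index of the strongest Fourier component of a mean-centred signal."""
    return int(np.abs(np.fft.rfft(x - x.mean())).argmax())

print("MLP-like signal, dominant frequency:", dominant_frequency(mlp_logits))        # low (1)
print("attention-like signal, dominant frequency:", dominant_frequency(attn_logits)) # high (100)
```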

A key experiment involves filtering out contributions from specific frequencies in the Fourier space. Removing low-frequency components from the MLP layers or high-frequency components from the attention layers substantially impairs the model's accuracy, corroborating the roles identified for these layers.
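
A hedged sketch of the frequency-filtering ablation follows, under the simplifying assumption that a layer's contribution can be summarized as a one-dimensional vector of logits over consecutive number tokens; the band boundary is illustrative rather than the paper's exact choice.

```python
import numpy as np

def filter_band(logits, keep="high", cutoff=10):
    """Zero out either the low- or high-frequency Fourier components of a
    1-D logit vector and reconstruct the remainder."""
    freqs = np.fft.rfft(logits)
    mask = np.zeros_like(freqs)
    if keep == "low":
        mask[:cutoff] = 1.0
    else:  # keep == "high"
        mask[cutoff:] = 1.0
    return np.fft.irfft(freqs * mask, n=len(logits))

# Removing the low-frequency band from a low-frequency (MLP-like) contribution
# wipes out essentially all of its signal, which is the logic behind the
# ablation: each layer type loses exactly the information it was supplying.
ns = np.arange(200)
mlp_like = np.cos(2 * np.pi * (ns - 108) / 200)
print(np.abs(filter_band(mlp_like, keep="high")).max())   # ~0: nothing left
```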

Impact of Pre-training

Pre-training is shown to be crucial for this sophisticated computational mechanism. Models trained from scratch exhibit significantly lower accuracy and lack evident Fourier features in their token embeddings and intermediate representations. Introducing pre-trained token embeddings to these models rescues their performance, underscoring the inductive biases acquired during pre-training that are pivotal for effective task-specific fine-tuning.
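
The embedding-transplant experiment can be sketched with the Hugging Face transformers API as below; this is an assumed reconstruction of the setup (model size and details are illustrative), not the authors' training code.

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Build a randomly initialized GPT-2 and a pre-trained one of the same size.
config = GPT2Config.from_pretrained("gpt2-xl")
scratch = GPT2LMHeadModel(config)                 # random weights everywhere
pretrained = GPT2LMHeadModel.from_pretrained("gpt2-xl")

# Transplant only the pre-trained token embeddings into the scratch model.
# GPT-2 ties the input embeddings to the LM head, so the output side follows.
with torch.no_grad():
    scratch.transformer.wte.weight.copy_(pretrained.transformer.wte.weight)

# `scratch` would then be trained on the addition task; per the paper, this
# transplant rescues the performance of the otherwise randomly initialized model.
```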

Broader Implications and Future Directions

The discovery that pre-trained LLMs utilize Fourier features for arithmetic tasks opens up new avenues for both theoretical research and practical applications. Theoretically, this insight adds depth to our understanding of how pre-trained models internalize patterns and perform computations. This mechanistic understanding can guide the design of future models focusing on integrating or enhancing these features to boost performance on arithmetic and other algorithmic tasks.

Practically, these findings could inform the development of more efficient and accurate AI systems for tasks involving numerical computations. By leveraging Fourier features, future models could become better equipped to handle a broader range of algorithmic problems, potentially extending their capabilities beyond the current state-of-the-art.

Furthermore, this research bridges the gap between pre-trained and task-specific behaviors, demonstrating that pre-training equips models with generalized capabilities that can be fine-tuned for specific complex tasks. This insight emphasizes the importance of pre-training regimes that effectively embed useful inductive biases into the models.

Conclusion

This paper provides a rigorous and methodically detailed analysis of how pre-trained LLMs compute addition using Fourier features. By dissecting the roles of different model components and emphasizing the necessity of pre-training, it offers valuable insights into the internal workings of LLMs. The implications of this paper are far-reaching, providing both a deeper theoretical understanding and practical guidance for future AI model development. As the field progresses, further research inspired by these findings is likely to uncover new capabilities and optimization strategies for AI systems, enhancing their utility across various domains.
