Ask, and it shall be given: On the Turing completeness of prompting (2411.01992v3)

Published 4 Nov 2024 in cs.LG and cs.CC

Abstract: Since the success of GPT, LLMs have been revolutionizing machine learning and have initiated the so-called LLM prompting paradigm. In the era of LLMs, people train a single general-purpose LLM and provide the LLM with different prompts to perform different tasks. However, such empirical success largely lacks theoretical understanding. Here, we present the first theoretical study on the LLM prompting paradigm to the best of our knowledge. In this work, we show that prompting is in fact Turing-complete: there exists a finite-size Transformer such that for any computable function, there exists a corresponding prompt following which the Transformer computes the function. Furthermore, we show that even though we use only a single finite-size Transformer, it can still achieve nearly the same complexity bounds as that of the class of all unbounded-size Transformers. Overall, our result reveals that prompting can enable a single finite-size Transformer to be efficiently universal, which establishes a theoretical underpinning for prompt engineering in practice.

Summary

  • The paper demonstrates that a finite-size Transformer can compute any Turing-computable function through effective prompt engineering.
  • It presents a detailed complexity analysis showing near-logarithmic overhead compared to traditional Turing machines.
  • The findings establish a theoretical framework for prompt engineering, guiding future research in LLM applications and optimization.

Turing Completeness of Prompting: Theoretical Foundations and Complexity Analysis

The paper "Ask, and It Shall Be Given: Turing Completeness of Prompting" by Ruizhong Qiu and colleagues from the University of Illinois Urbana-Champaign presents a comprehensive theory on the expressive capabilities of LLMs when leveraged through prompting. This work seeks to provide a theoretical underpinning for the empirical success observed in the LLM prompting paradigm, which involves using a single LLM to perform multiple tasks through the specification of different prompts.

Summary

The central claim of the paper is that prompting is Turing-complete: there exists a fixed, finite-size Transformer that, given a suitable prompt, can compute any function computable by a Turing machine. Moreover, this single prompted Transformer is not merely universal; it achieves complexity bounds nearly matching those of the class of all unbounded-size Transformers. This is significant because it suggests that the versatility of LLMs observed in practice, such as zero-shot task performance, rests on a robust theoretical foundation.
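
In symbols, and with notation that is ours rather than the paper's, the claim has the quantifier order "one Transformer first, then all functions":

```latex
% Informal restatement of the Turing-completeness claim (our notation, hedged):
% there is ONE fixed finite-size Transformer \Gamma such that EVERY computable
% function f admits a prompt \pi_f under which chain-of-thought decoding with
% \Gamma computes f on every input x.
\exists\, \Gamma \ \text{(finite-size)} \ \ \forall\, f \ \text{computable} \ \
\exists\, \pi_f \ \ \forall\, x:\qquad \mathrm{CoT}_{\Gamma}(\pi_f,\, x) = f(x)
```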

Contributions

This work makes several important contributions to the understanding of machine learning and computational complexity:

  • Expressive Power: The authors prove that a single finite-size Transformer is capable of Turing-complete computation when driven by an appropriate prompt, so every computable function can be executed within this framework.
  • Complexity Analysis: The paper derives bounds on the number of chain-of-thought (CoT) steps and the numerical precision required for prompt-based computation, showing that a single finite-size Transformer attains nearly the same complexity bounds as the class of all unbounded-size Transformers.
  • Simulation Efficiency: By introducing a variant of classical machines, two-tape Post–Turing machines (2-PTMs), as an intermediate model, the construction simulates traditional Turing machines with only near-logarithmic overhead, advancing the understanding of how efficiently a prompted Transformer can execute arbitrary programs; a toy interpreter in this spirit is sketched below.
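
To make the spirit of the construction concrete, the following toy sketch (our illustration under simplifying assumptions, not the paper's actual 2-PTM definition or Transformer construction) implements a fixed interpreter for a small two-tape Post–Turing-style instruction set in Python. The fixed interpreter plays the role of the finite-size Transformer, and the program string plays the role of the prompt: one interpreter, many programs.

```python
from collections import defaultdict

def run_2ptm(program, tape0_init=""):
    """Run a toy two-tape Post-Turing-style machine (hypothetical syntax).

    Instructions, one per line:
      W t b     write bit b on tape t
      M t d     move the head of tape t by d (-1 or +1)
      J t b k   jump to line k if the bit under the head of tape t equals b
      H         halt
    """
    tapes = [defaultdict(int), defaultdict(int)]   # two unbounded bit tapes
    for i, ch in enumerate(tape0_init):            # load the input onto tape 0
        tapes[0][i] = int(ch)
    heads = [0, 0]
    lines = [ln.split() for ln in program.strip().splitlines()]
    pc = 0
    while pc < len(lines):
        op = lines[pc]
        if op[0] == "H":                           # halt
            break
        if op[0] == "W":                           # write a bit
            t, b = int(op[1]), int(op[2])
            tapes[t][heads[t]] = b
            pc += 1
        elif op[0] == "M":                         # move a head
            t, d = int(op[1]), int(op[2])
            heads[t] += d
            pc += 1
        elif op[0] == "J":                         # conditional jump
            t, b, k = int(op[1]), int(op[2]), int(op[3])
            pc = k if tapes[t][heads[t]] == b else pc + 1
    cells = sorted(tapes[1])                       # read tape 1 as the output
    return "".join(str(tapes[1][c]) for c in cells)

# A "prompt" (program) that copies the bit under tape 0's head onto tape 1.
copy_bit = """
J 0 1 3
W 1 0
H
W 1 1
H
"""
print(run_2ptm(copy_bit, "1"))   # -> 1
print(run_2ptm(copy_bit, "0"))   # -> 0
```

The analogy highlights the quantifier order in the theorem: the interpreter (the Transformer) is fixed once and for all, and only the program (the prompt) varies with the task to be computed.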

Implications

The implications of proving Turing completeness through prompting extend both theoretically and practically:

  • Theoretical Framework: The result provides a foundational framework for prompt engineering, suggesting that the variety of tasks LLMs can perform rests on strong computational principles; it can guide future research into the boundaries and capabilities of prompt-based computing.
  • Practical Applications: On the practical side, developers and researchers can be confident that, in principle, a well-designed prompt can unlock any computable task on a sufficiently capable Transformer.
  • Future Work: While expressive power is settled by this theoretical lens, the learnability of such constructs remains open; future work should examine whether and how a Transformer can be trained to simulate these machines efficiently in practice.

Conclusion

By bridging theoretical computer science and modern machine learning, the authors extend the boundaries of what Transformers can achieve through prompting. The proof of Turing completeness not only bolsters the empirical use of LLMs but also provides a blueprint for advances in both LLM design and applications. As the scope of machine learning continues to expand, this result offers a pivotal grounding point for future innovation in prompt engineering and beyond.
