
The Imitation Game According To Turing (2501.17629v1)

Published 29 Jan 2025 in cs.HC, cs.AI, and cs.CY

Abstract: The current cycle of hype and anxiety concerning the benefits and risks to human society of Artificial Intelligence is fuelled, not only by the increasing use of generative AI and other AI tools by the general public, but also by claims made on behalf of such technology by popularizers and scientists. In particular, recent studies have claimed that LLMs can pass the Turing Test (a goal for AI since the 1950s) and therefore can "think". Large-scale impacts on society have been predicted as a result. Upon detailed examination, however, none of these studies has faithfully applied Turing's original instructions. Consequently, we conducted a rigorous Turing Test with GPT-4-Turbo that adhered closely to Turing's instructions for a three-player imitation game. We followed established scientific standards where Turing's instructions were ambiguous or missing. For example, we performed a Computer-Imitates-Human Game (CIHG) without constraining the time duration and conducted a Man-Imitates-Woman Game (MIWG) as a benchmark. All but one participant correctly identified the LLM, showing that one of today's most advanced LLMs is unable to pass a rigorous Turing Test. We conclude that recent extravagant claims for such models are unsupported, and do not warrant either optimism or concern about the social impact of thinking machines.

Authors (4)
  1. Sharon Temtsin
  2. Diane Proudfoot
  3. David Kaber
  4. Christoph Bartneck

Summary

An Analytical Assessment of Turing's Imitation Game with GPT-4-Turbo

The paper "The Imitation Game According To Turing" by Sharon Temtsin et al. revisits the famed Turing Test, aiming to deliver a stringent assessment following Turing's original instructions. This paper is particularly relevant in the backdrop of claims that LLMs, such as GPT-4-Turbo, have surpassed Turing's threshold for "thinking." Upon investigation, the paper challenges these assertions, providing a detailed examination through a rigorous implementation of the Turing Test.

Central to this research is the distinction between the three versions of the imitation game that Turing proposed, with the three-player game treated as the most faithful to Turing's test. In the experiment, the authors ran both a Computer-Imitates-Human Game (CIHG) with GPT-4-Turbo and a Man-Imitates-Woman Game (MIWG) as a benchmark, adhering closely to Turing's guidelines. Across 37 trials of each game, GPT-4-Turbo failed to consistently deceive human interrogators: 97% of interrogators correctly identified it as a computer, in sharp contrast with widespread claims of the model's human-like performance.
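To give a concrete sense of how decisive this result is, the sketch below runs an exact binomial test of the interrogators' identification rate against chance. It is an illustration only: the count of 36 correct verdicts out of 37 CIHG trials is inferred from the 97% figure, and the summary does not state which statistical test, if any, the authors applied.

```python
from scipy.stats import binomtest

# Counts inferred from the summary: 97% of 37 CIHG interrogators
# correctly identified GPT-4-Turbo, i.e. roughly 36 out of 37.
trials = 37
correct = 36

# One-sided exact binomial test: is the identification rate better than
# the 50% expected if interrogators were merely guessing?
result = binomtest(correct, trials, p=0.5, alternative="greater")
print(f"identification rate: {correct / trials:.1%}")
print(f"one-sided exact p-value: {result.pvalue:.2e}")
```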

The authors argue for the necessity of a nuanced understanding of what Turing's test implies about machine intelligence, highlighting the disparity between superficial public narratives and academic rigor. Importantly, they posit that recent claims of LLMs passing the test stem from misinterpretations or incomplete implementations of Turing's original game. The key finding is that GPT-4-Turbo's performance remains markedly inferior in this context, refuting the idea that it can "think" in the manner Turing envisioned.

The paper's methodological rigor includes constraining anthropomorphism by employing a three-player game configuration, in which the interrogator judges the machine alongside a human witness rather than in isolation. Moreover, by not imposing a time limit, the paper addresses a common oversight in prior work: fixed-duration tests can skew results by denying interrogators sufficient interaction time. This undercuts the claim that five-minute benchmarks are a reasonable testing period, as the average CIHG here lasted about 14 minutes.
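To make the contrast with fixed-duration designs concrete, here is a minimal sketch of an untimed three-player game loop in which the session ends only when the interrogator commits to a verdict, with the elapsed time recorded afterwards. The interrogator and witness callables are hypothetical stand-ins, not the authors' experimental software; in the actual study the hidden witnesses were a human participant and GPT-4-Turbo.

```python
import random
import time

def run_untimed_game(interrogate, witness_a, witness_b):
    """Three-player imitation game with no time limit.

    `interrogate` receives the transcript so far and returns either a new
    question or a verdict naming which witness it believes is the machine;
    `witness_a` and `witness_b` map a question to a reply. All three are
    placeholders for a real interrogator interface, a human witness, and a
    machine witness.
    """
    transcript = []
    start = time.monotonic()
    while True:
        move = interrogate(transcript)
        if move["type"] == "verdict":  # the interrogator decides when to stop
            minutes = (time.monotonic() - start) / 60
            return move["machine_is"], minutes, transcript
        question = move["text"]
        transcript.append(("Q", question))
        transcript.append(("A", witness_a(question)))
        transcript.append(("B", witness_b(question)))

# Toy stand-ins so the sketch runs end to end.
def scripted_interrogator(transcript):
    if len(transcript) >= 6:  # after two rounds of questioning, give a verdict
        return {"type": "verdict", "machine_is": random.choice(["A", "B"])}
    return {"type": "question", "text": "What did you have for breakfast today?"}

if __name__ == "__main__":
    verdict, minutes, log = run_untimed_game(
        scripted_interrogator,
        witness_a=lambda q: "Toast and a cup of coffee.",   # hidden human (stub)
        witness_b=lambda q: "I do not eat breakfast.",      # hidden machine (stub)
    )
    print(f"Interrogator says the machine is witness {verdict} ({minutes:.2f} min)")
```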

The implications of this paper extend into both practical and theoretical domains. Practically, it suggests recalibrating how AI capabilities are assessed, discouraging reliance on overly simplistic benchmarks or on claims of human equivalence for LLMs. Theoretically, the paper reinforces the ongoing complexity of defining and measuring machine "intelligence," asserting that current LLMs, despite their sophistication, still lack the ability to convincingly pass Turing's test as he specified it.

Looking forward, this research implies a need for assessment frameworks that track genuine advances in AI capabilities and adhere to stringent evaluation protocols. The authors advocate that future analyses build on the guidelines and methodology they have established, to refine our understanding of AI's potential impact on society. Such work is crucial for demystifying the progress of AI, alleviating public concerns, and communicating the state of machine intelligence more accurately.
