Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction (2504.15266v2)

Published 21 Apr 2025 in cs.LG, cs.AI, and cs.CL

Abstract: We design a suite of minimal algorithmic tasks that are a loose abstraction of open-ended real-world tasks. This allows us to cleanly and controllably quantify the creative limits of the present-day LLM. Much like real-world tasks that require a creative, far-sighted leap of thought, our tasks require an implicit, open-ended stochastic planning step that either (a) discovers new connections in an abstract knowledge graph (like in wordplay, drawing analogies, or research) or (b) constructs new patterns (like in designing math problems or new proteins). In these tasks, we empirically and conceptually argue how next-token learning is myopic and memorizes excessively; multi-token approaches, namely teacherless training and diffusion models, comparatively excel in producing diverse and original output. Secondly, to elicit randomness without hurting coherence, we find that injecting noise at the input layer (dubbed as seed-conditioning) works surprisingly as well as (and in some conditions, better than) temperature sampling from the output layer. Thus, our work offers a principled, minimal test-bed for analyzing open-ended creative skills, and offers new arguments for going beyond next-token learning and temperature sampling. We make part of the code available under https://github.com/chenwu98/algorithmic-creativity

Summary

  • The paper reveals that multi-token approaches significantly outperform next-token prediction in generating creative outputs while reducing reliance on memorization.
  • It introduces combinational and exploratory creativity tasks to systematically test language models' abilities in pattern recognition and planning.
  • The study shows that injecting noise at the input layer matches, and in some settings exceeds, output-layer temperature sampling in producing coherent and diverse creative outputs.

Overview of the Creative Limits of Next-Token Prediction Models

This paper presents a study aimed at understanding and quantifying the creative limits inherent in current LLMs, particularly those trained with next-token prediction. To investigate these limits, the authors design a set of minimal algorithmic tasks that abstractly represent real-world creative, open-ended challenges. The tasks are framed to test the hypothesis that next-token-trained models are overly myopic, relying excessively on memorization rather than on innovative thinking or knowledge synthesis.

The research introduces two primary categories of tasks inspired by cognitive science's classification of creativity: combinational creativity and exploratory creativity. Combinational creativity involves tasks that demand generating novel connections within a pre-defined knowledge graph, akin to generating wordplay or analogies. Exploratory creativity tasks, on the other hand, involve constructing new patterns conforming to specific rules, similar to designing new problems, narratives, or molecular structures.

Algorithmic Task Design and Methodology

  1. Combinational Creativity Tasks:
    • Sibling Discovery: Requires models to recall and plan connections between “sibling” nodes within an implicit bipartite graph stored in the model’s weights.
    • Triangle Discovery: Requires models to identify triangles within a graph, necessitating more complex pattern recognition and planning.
  2. Exploratory Creativity Tasks:
    • Circle Construction: Encompasses tasks where models construct adjacency lists that can be arranged into circle graphs, emphasizing innovative pattern arrangement.
    • Line Construction: Similar to circle construction but with a linear-order constraint, challenging models to plan systematically (minimal verifier sketches for these tasks follow this list).
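
To make the task semantics concrete, here is a minimal sketch of verifiers for two of these tasks: checking a claimed triangle (combinational) and checking that an emitted edge list forms a single cycle (exploratory circle construction). The edge-list format and function names are illustrative assumptions, not the authors' released code.

```python
from collections import defaultdict

def contains_triangle(edges, triple):
    """Triangle Discovery check: is the claimed node triple a triangle?
    `edges` is an undirected edge list; the format is an assumption."""
    edge_set = {frozenset(e) for e in edges}
    a, b, c = triple
    return all(frozenset(p) in edge_set for p in [(a, b), (b, c), (a, c)])

def is_single_cycle(edges):
    """Circle Construction check: does the edge list form one simple cycle?
    True iff every vertex has degree 2 and the graph is connected."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    if not adj or any(len(nbrs) != 2 for nbrs in adj.values()):
        return False
    start = next(iter(adj))
    seen, prev, cur = {start}, None, start
    while True:  # walk the cycle; it must cover every vertex exactly once
        cur, prev = next(n for n in adj[cur] if n != prev), cur
        if cur == start:
            return len(seen) == len(adj)
        if cur in seen:
            return False
        seen.add(cur)
```

A line-construction verifier would follow the same pattern, accepting exactly two degree-1 endpoints with all other vertices at degree 2.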

The authors explore two main constraints of the next-token prediction paradigm: its inefficient handling of open-ended and stochastic tasks, and its reliance on output-layer noise injection (such as temperature sampling) for generating diverse outputs.
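
For reference, the output-layer mechanism the paper contrasts against is standard temperature sampling: scale the final-layer logits by 1/T and sample from the resulting softmax. A minimal sketch:

```python
import numpy as np

def sample_with_temperature(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Output-layer noise injection: divide logits by T, then sample.
    T -> 0 approaches greedy decoding; larger T flattens the distribution."""
    scaled = logits / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))
```

Raising T buys diversity only by flattening each per-token distribution, which is exactly the coherence/diversity trade-off the paper revisits.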

Key Findings and Implications

The paper finds substantial differences in performance between next-token and multi-token approaches across the defined tasks:

  • Multi-Token vs. Next-Token Learning: Models trained with multi-token objectives, such as teacherless training and diffusion models, significantly outperform next-token models in algorithmic creativity. These methods better accommodate the implicit, stochastic planning the tasks require, reducing excessive memorization and promoting genuinely original outputs (a teacherless-training sketch follows this list).
  • Randomness Injection: Injecting noise at the input layer (seed-conditioning) produces creative outputs that are as coherent and diverse as, and in some conditions better than, those obtained with temperature sampling. This suggests that committing to randomness before generation begins is more effective than injecting it token by token at the output layer, where the next-token objective must implicitly marginalize over many possible continuations.
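
The following is a minimal sketch of the teacherless idea, assuming a Hugging-Face-style causal LM whose forward pass returns `.logits`. One common formulation replaces the teacher-forced ground-truth prefix with uninformative dummy tokens; the details here are illustrative rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def teacherless_loss(model, prompt, target, dummy_id):
    """Teacherless (multi-token) objective: instead of conditioning each
    prediction on the ground-truth target prefix (teacher forcing), feed
    uninformative dummy tokens, so the model must commit to the entire
    continuation in one go."""
    dummy = torch.full_like(target, dummy_id)   # (B, n) dummy prefix
    inputs = torch.cat([prompt, dummy], dim=1)  # (B, m + n)
    # In a causal LM, the logit at position i predicts token i + 1, so the
    # n target tokens are read off positions m - 1 .. m + n - 2.
    logits = model(inputs).logits[:, prompt.size(1) - 1 : -1, :]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           target.reshape(-1))
```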

Future Research and Developments

The findings suggest promising directions for future AI research and applications:

  1. Multi-Token Prediction Technologies: Expanding the role of multi-token models in complex language processing could enhance the creative abilities of AI systems, enabling them to excel in tasks requiring deep knowledge synthesis and pattern creation.
  2. Enhancements to Input Conditioning: Techniques like seed-conditioning offer new avenues for improving diversity without compromising coherence, and could be extended to enhance creativity in broader AI applications (a minimal inference-time sketch follows this list).
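
As a sketch of how seed-conditioning might look at inference time, assume a causal LM that was trained (or fine-tuned) to condition on a random seed prefix, and a Hugging-Face-style `.logits` interface; the seed length and function name are illustrative.

```python
import torch

@torch.no_grad()
def seed_conditioned_generate(model, prompt_ids, vocab_size,
                              seed_len=8, max_new=64):
    """Seed-conditioning sketch: prepend a freshly sampled random 'seed'
    prefix, then decode greedily. All randomness enters at the input layer;
    no temperature is applied at the output layer."""
    seed = torch.randint(0, vocab_size, (prompt_ids.size(0), seed_len))
    ids = torch.cat([seed, prompt_ids], dim=1)
    for _ in range(max_new):
        next_logits = model(ids).logits[:, -1, :]
        next_id = next_logits.argmax(dim=-1, keepdim=True)  # greedy decoding
        ids = torch.cat([ids, next_id], dim=1)
    return ids[:, seed_len:]  # strip the seed before returning
```

Different seed draws steer the deterministic decoder to different, individually coherent outputs: the "roll the dice first" half of the paper's title.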

The paper outlines a fundamental challenge for future LLMs: to move beyond next-token prediction and embrace techniques that promote diversity and originality without sacrificing accuracy. This transition could underpin significant advancements in AI's capability to tackle open-ended, real-world problems that demand creativity, such as scientific exploration and artistic creation.

In conclusion, this work not only provides critical insights into the limitations of existing LLMs but also highlights innovative strategies to overcome these challenges, paving the way for the development of truly creative AI systems.
