Reinforcement Learning on Pre-Training Data (RLPT)

Updated 25 September 2025
  • RLPT is a training paradigm that leverages reinforcement learning objectives on unlabeled data to autonomously improve reasoning and generalization in language models.
  • It employs Autoregressive Segment Reasoning (ASR) and Middle Segment Reasoning (MSR) objectives to predict and validate text continuations using intrinsic, self-supervised rewards.
  • Empirical results demonstrate that RLPT yields scalable performance gains over supervised baselines across diverse benchmarks and reasoning tasks.

Reinforcement Learning on Pre-Training Data (RLPT) refers to the direct application of reinforcement learning (RL) objectives on unlabeled pre-training data, enabling generalizable reasoning and performance improvements in large-scale neural networks without the need for human-annotated feedback. This paradigm is motivated by the limitations in supervised scaling as the availability and quality of labeled text plateau, and it proposes to exploit the vast quantities of unsupervised pre-training data for scalable RL training. RLPT establishes a unified framework where models autonomously explore and optimize over the space of pre-training trajectories, deriving the reward signal directly from the intrinsic structure of the data—most commonly by rewarding accurate next-segment predictions conditioned on prior context. The resulting models exhibit enhanced reasoning ability, advanced generalization, and favorable scaling behaviors, providing a competitive foundation for further reinforcement learning with external objectives (Li et al., 23 Sep 2025).

1. Concept and Motivation

The RLPT paradigm is designed to address the diminishing returns of scaling LLMs purely via supervised next-token prediction, due to the finite growth rate of high-quality, human-annotated data. Rather than relying exclusively on RLHF (reinforcement learning from human feedback) or RLVR (reinforcement learning with verifiable rewards), both of which require constructed or curated reward signals, RLPT leverages the structure of the pre-training corpus to synthesize its own self-supervised reward. Specifically, RLPT frames learning as a next-segment reasoning task: for a given segment of context from a pre-training corpus, the policy aims to generate the most accurate continuation(s) and is rewarded according to a programmatic criterion (e.g., prefix matching with the ground truth). This approach facilitates the autonomous exploration of reasoning strategies across the full breadth of pre-training data, with verifiable rewards and without external annotation bottlenecks (Li et al., 23 Sep 2025).

2. Technical Formulation and Training Procedure

In the canonical RLPT setup, a raw text sample $t$ is divided into sequential segments $[s_1, \ldots, s_n]$. At each step $i$, the model is trained to generate the next segment $s_i$ given the preceding context $s_{<i}$. Two variants are implemented:

  • Autoregressive Segment Reasoning (ASR): predict $s_i$ from $s_{<i}$ (standard next-segment prediction).
  • Middle Segment Reasoning (MSR): predict $s_i$ from both the preceding context $s_{<i}$ and the following segment $s_{i+1}$, i.e. infer a missing middle segment from its surrounding context (see the prompt-construction sketch below).
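The following minimal sketch shows how such prompts might be assembled from a raw document. The sentence-level segmentation rule, the prompt dictionaries, and the function names are illustrative assumptions, not the paper's exact data pipeline.

```python
import re

def split_into_segments(text: str) -> list[str]:
    # Illustrative segmentation: split on sentence-ending punctuation.
    # The paper's actual segmentation policy may differ.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def make_asr_example(segments: list[str], i: int) -> dict:
    # Autoregressive Segment Reasoning (ASR): predict segment i from the preceding segments.
    return {"context": " ".join(segments[:i]), "target": segments[i]}

def make_msr_example(segments: list[str], i: int) -> dict:
    # Middle Segment Reasoning (MSR): predict segment i from the preceding context
    # plus the immediately following segment, i.e. fill in the missing middle.
    return {"context": " ".join(segments[:i]),
            "next_segment": segments[i + 1],
            "target": segments[i]}

doc = ("RLPT trains directly on unlabeled pre-training text. "
       "It rewards the policy for accurate next-segment predictions. "
       "No human-annotated feedback is required.")
segments = split_into_segments(doc)
print(make_asr_example(segments, 1))
print(make_msr_example(segments, 1))
```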

For each prediction, a generative reward model assigns a binary reward based on whether the predicted segment $\hat{s}_i$ is a valid prefix of the ground-truth segment $s_i$ (using byte-sequence or token-boundary prefix matching):

r(o, s_i) = \begin{cases} 1 & \text{if } G_\text{rm}(\hat{s}_i, s_i) = 1 \\ 0 & \text{otherwise} \end{cases}

where $o$ is the model output and $G_\text{rm}$ is the reward function. The overall RLPT objective for model parameters $\theta$ combines both reasoning objectives:

J_{\text{RLPT}}(\theta) = \mathbb{E}_{\text{ASR}}[\ldots] + \lambda \cdot \mathbb{E}_{\text{MSR}}[\ldots]

with $\lambda$ balancing the two objectives. Training is performed with on-policy gradient methods (such as GRPO), using a mini-batch of samples and multiple rollouts per prompt, maximizing expected reward over possible continuations (Li et al., 23 Sep 2025).
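To make the reward and the combined objective concrete, the sketch below pairs a simple prefix-matching reward with a Monte Carlo estimate of the two expectation terms. The literal string prefix check standing in for the generative reward model $G_\text{rm}$, the function names, and the toy policy are illustrative assumptions rather than the paper's implementation; in practice the objective is maximized with an on-policy method such as GRPO rather than evaluated directly.

```python
def prefix_match_reward(predicted: str, ground_truth: str) -> float:
    # Stand-in for the generative reward model G_rm: reward 1 if the predicted
    # segment is a non-empty prefix of the ground-truth segment, else 0.
    pred = predicted.strip()
    return 1.0 if pred and ground_truth.startswith(pred) else 0.0

def estimate_rlpt_objective(policy_generate, asr_batch, msr_batch,
                            lam: float = 1.0, rollouts: int = 4) -> float:
    # Monte Carlo estimate of J_RLPT(theta) = E_ASR[r] + lam * E_MSR[r]:
    # average the binary reward over several rollouts per prompt for each variant.
    def expected_reward(batch, use_next_segment: bool) -> float:
        total = 0.0
        for ex in batch:
            for _ in range(rollouts):
                pred = policy_generate(ex["context"],
                                       ex.get("next_segment") if use_next_segment else None)
                total += prefix_match_reward(pred, ex["target"])
        return total / max(1, len(batch) * rollouts)

    return expected_reward(asr_batch, False) + lam * expected_reward(msr_batch, True)

# Toy deterministic "policy" used only to exercise the functions above.
def toy_policy(context, next_segment=None):
    return "It rewards the policy"

asr_batch = [{"context": "RLPT trains directly on unlabeled pre-training text.",
              "target": "It rewards the policy for accurate next-segment predictions."}]
msr_batch = [{"context": "RLPT trains directly on unlabeled pre-training text.",
              "next_segment": "No human-annotated feedback is required.",
              "target": "It rewards the policy for accurate next-segment predictions."}]
print(estimate_rlpt_objective(toy_policy, asr_batch, msr_batch, lam=0.5))  # -> 1.5
```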

3. Distinction from RLHF, RLVR, and Other Paradigms

A fundamental distinction of RLPT is its reward source—derived directly from natural textual structure—compared to RLHF (which depends on human preference annotation) or RLVR (which requires explicit, reference-based correctness). RLPT rewards the policy for accurate segment prediction, obviating the need for external annotation or evaluation and making it suitable for scaling across the entire pre-training data distribution. This facilitates autonomous and scalable training, encouraging models to generalize reasoning abilities across a broader contextual landscape. The RLPT objective also differs in its explicit use of reasoning traces and “rollouts,” moving beyond pure supervised sequence modeling toward a more flexible, exploratory policy learning paradigm (Li et al., 23 Sep 2025).

4. Empirical Results and Scaling Behavior

Large-scale experiments with RLPT, particularly on models such as Qwen3-4B-Base, demonstrate substantial and consistent improvements across general-domain and mathematical reasoning benchmarks. Representative gains include:

Model          Dataset          Supervised Baseline   RLPT Score   Absolute Gain
Qwen3-4B-Base  MMLU             77.8                  80.8         +3.0
Qwen3-4B-Base  MMLU-Pro         59.7                  64.8         +5.1
Qwen3-4B-Base  GPQA-Diamond     31.3                  39.4         +8.1
Qwen3-4B-Base  KOR-Bench        50.7                  56.7         +6.0
Qwen3-4B-Base  AIME24 (math)    -                     -            +6.6 (Pass@1)
Qwen3-4B-Base  AIME25 (math)    -                     -            +5.3 (Pass@1)

Scaling curves show a power-law relation between performance and training compute: additional training resources yield continued performance improvements, a crucial property for data- and compute-intensive LLM training. These effects are reproduced across other model scales (Qwen3-8B-Base, Llama-3.2-3B-Base), underscoring RLPT's robustness and generality (Li et al., 23 Sep 2025).
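As an illustration of the kind of scaling analysis involved, the snippet below fits a saturating power law of benchmark score against training compute. The functional form, the parameter initialization, and the (compute, score) points are placeholder assumptions, not values or the exact fit reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(c, a, b, alpha):
    # score(C) = a - b * C**(-alpha): performance approaches the asymptote a
    # as training compute C grows, with diminishing returns governed by alpha.
    return a - b * np.power(c, -alpha)

# Hypothetical (compute, score) points in arbitrary compute units -- placeholders only.
compute = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
score = np.array([55.0, 58.5, 61.0, 62.8, 64.0])

params, _ = curve_fit(saturating_power_law, compute, score, p0=[70.0, 15.0, 0.3])
a, b, alpha = params
print(f"fitted asymptote={a:.1f}, scale={b:.1f}, exponent={alpha:.2f}")
```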

5. Generalization and Reasoning Capability

The self-supervised next-segment reasoning structure of RLPT facilitates the emergence of latent reasoning skills. By encountering and being rewarded for correct—but not necessarily “memorized”—continuations, the model is incentivized to discover and utilize general, compositional reasoning strategies, improving its capacity for in-context understanding. Inclusion of both ASR and MSR objectives diversifies the set of reasoning paths explored during training, enabling the model to handle varied context lengths and types of reasoning queries. Empirically, this manifests as greater robustness to domain and task variation, as measured on diverse benchmarks, and as improved performance in follow-on tasks such as RLVR (reinforcement learning with verifiable rewards) (Li et al., 23 Sep 2025).

6. Connections, Limitations, and Future Directions

RLPT complements and extends self-supervised scaling by providing an RL-based mechanism that scales naturally with unlabeled data. The framework is also extensible: segmentation units need not be sentences—they could be atomic reasoning steps or subproblems inferred by the model, potentially yielding further performance gains. Refinements in the reward function (e.g., more nuanced prefix matching or alternative correctness criteria) may improve training stability and outputs. Potential future work includes combining RLPT with test-time scaling approaches (e.g., chain-of-thought prompting), domain-adaptive objectives, or hybrid reward functions. Open challenges include optimal design of segmentations, reward noise mitigation at scale, and rigorous evaluations of downstream generalization, especially in out-of-distribution tasks and under long-range context (Li et al., 23 Sep 2025).

7. Broader Impact and Research Implications

RLPT introduces a training-time scaling paradigm that enables reinforcement learning over pre-training data without human-annotated feedback. This fosters more autonomous, scalable, and generalizable LLMs and provides a solid foundation for continued RL optimization (e.g., RLVR). By exploiting the structure of unlabeled corpora, RLPT has the potential to extend the data-efficient frontier of LLMs beyond existing supervised and RLHF regimes. Its ability to autonomously encourage rich reasoning trajectories and robust generalization will be central as models are deployed in increasingly challenging reasoning, mathematical, and scientific settings (Li et al., 23 Sep 2025).
