Stratified Advantage Normalization (SAN)

Updated 8 October 2025
  • Stratified Advantage Normalization (SAN) is a technique that partitions trajectories into homogeneous groups, enabling unbiased, locally normalized advantage estimation.
  • By computing statistics within each stratum, SAN ensures that policy gradient updates avoid cross-stratum bias and yield stable, scale-consistent learning signals.
  • Empirical evaluations show that SAN enhances training reward and convergence stability, underscoring its practical impact in heterogeneous RL environments.

Stratified Advantage Normalization (SAN) is a technique developed to address the statistical and optimization challenges arising from structural heterogeneity in reinforcement learning (RL), particularly in settings such as LLM search agents where agent-generated trajectories vary dramatically in their structure, reward distributions, and operational complexity. SAN ensures that credit assignment and normalization for policy gradient updates are performed within homogeneous strata of trajectories, thereby eliminating systematic bias that arises when using global baselines over fundamentally incomparable samples. The method has been mathematically analyzed and empirically validated as central to the Stratified GRPO algorithm, establishing stratification as a principled solution for RL in structurally heterogeneous environments (Zhu et al., 7 Oct 2025).

1. Motivation and Definition

Stratified Advantage Normalization is designed to solve the problem of cross-stratum bias—deterministic offsets resulting from direct comparison of heterogeneous trajectories in policy optimization. Standard advantage normalization methods, which compute baseline and scaling statistics globally across all trajectories, inadvertently perform "apples-to-oranges" comparisons when the population of trajectories is structurally diverse (e.g., differing in search count, branching factor, or action outcomes).

SAN partitions a batch of trajectories $\mathcal{B}$ into disjoint strata $\{\mathcal{B}_k\}$, each defined by a shared structural property (e.g., the same number of search engine calls). Within each stratum, advantages are normalized according to the local empirical mean and standard deviation. This yields:

$$A_{\text{SAN}}(\tau) = \frac{R(\tau) - \hat{\mu}_k(x)}{\hat{\sigma}_k(x) + \epsilon}$$

where $A_{\text{SAN}}(\tau)$ is the normalized advantage for trajectory $\tau$ in stratum $k$, $R(\tau)$ is its reward, $\hat{\mu}_k(x)$ and $\hat{\sigma}_k(x)$ are the mean and standard deviation for group $k$ (possibly further conditioned on external variables such as the prompt $x$), and $\epsilon$ is a small constant for numerical stability.
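
As a concrete illustration, a minimal Python sketch of this computation is given below; the `stratum_key` and `reward_fn` callables and the batch layout are illustrative assumptions rather than the paper's reference implementation.

```python
from collections import defaultdict
from statistics import mean, pstdev

EPS = 1e-8  # epsilon: small constant for numerical stability

def san_advantages(trajectories, stratum_key, reward_fn):
    """Return a list of SAN advantages, aligned with the input list `trajectories`.

    stratum_key(tau) -> discrete structural property defining the stratum k
                        (e.g., number of search-engine calls in tau)
    reward_fn(tau)   -> scalar reward R(tau)
    """
    # Assign each trajectory to its stratum.
    strata = defaultdict(list)
    for idx, tau in enumerate(trajectories):
        strata[stratum_key(tau)].append((idx, reward_fn(tau)))

    # Compute local statistics and normalize within each stratum.
    advantages = [0.0] * len(trajectories)
    for k, members in strata.items():
        rewards = [r for _, r in members]
        mu_k = mean(rewards)
        sigma_k = pstdev(rewards)  # population std.; 0.0 for singleton strata
        for idx, r in members:
            advantages[idx] = (r - mu_k) / (sigma_k + EPS)
    return advantages
```

For example, `san_advantages(batch, stratum_key=lambda t: t["num_searches"], reward_fn=lambda t: t["reward"])` would group a batch of dict-style trajectory records by search count before normalizing; the field names here are hypothetical.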

2. Mechanism and Statistical Properties

The SAN mechanism involves three key steps: (a) stratum assignment of each trajectory based on a discrete structural property, (b) computation of local stratum statistics, and (c) normalization of each sample's advantage within its stratum.

For trajectory $\tau \in \mathcal{B}_k$ (where $k$ is the stratum index), the SAN-normalized advantage is directly centered and scaled within its group.

Stratum Index ($k$) | Empirical Mean ($\hat{\mu}_k(x)$) | Empirical Std. Dev. ($\hat{\sigma}_k(x)$)
1 | mean of $R(\tau)$ in $\mathcal{B}_1$ | std of $R(\tau)$ in $\mathcal{B}_1$
2 | mean of $R(\tau)$ in $\mathcal{B}_2$ | std of $R(\tau)$ in $\mathcal{B}_2$
... | ... | ...

Mathematically, SAN guarantees:

$$\mathbb{E}[A_{\text{SAN}} \mid k] = 0 \qquad \text{and} \qquad \mathrm{Var}[A_{\text{SAN}} \mid k] = 1$$

for any stratum $k$, ensuring unbiasedness and unit variance locally. These conditional properties are not assured by global normalization, which yields:

$$\mathbb{E}[A_{\text{GN}} \mid k] = \frac{\mu_k - \mu}{\sigma}, \qquad \mathrm{Var}[A_{\text{GN}} \mid k] = \frac{\sigma_k^2}{\sigma^2}$$

where $(\mu, \sigma)$ are the global mean and standard deviation, and $(\mu_k, \sigma_k)$ are the local statistics for stratum $k$.
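
A small numerical check makes the contrast concrete. The two-stratum toy batch below is invented purely for illustration; the snippet verifies that SAN advantages have zero mean and unit variance within each stratum, while globally normalized advantages retain the stratum-dependent offset and scale given above.

```python
from statistics import mean, pstdev, pvariance

# Hypothetical rewards for two strata with very different scales (illustrative only).
rewards = {1: [0.1, 0.2, 0.3], 2: [0.7, 0.9, 1.1]}
all_r = [r for rs in rewards.values() for r in rs]
mu, sigma = mean(all_r), pstdev(all_r)            # global statistics

for k, rs in rewards.items():
    mu_k, sigma_k = mean(rs), pstdev(rs)          # local (stratum) statistics
    a_san = [(r - mu_k) / sigma_k for r in rs]    # stratified normalization
    a_gn = [(r - mu) / sigma for r in rs]         # global normalization
    print(f"stratum {k}: SAN mean/var = {mean(a_san):.2f}/{pvariance(a_san):.2f}, "
          f"GN mean/var = {mean(a_gn):.2f}/{pvariance(a_gn):.2f}")
```

Running this prints SAN mean/variance of 0/1 for both strata, while the global normalization assigns stratum 1 a negative mean advantage and stratum 2 a positive one, which is exactly the $(\mu_k - \mu)/\sigma$ offset.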

3. Elimination of Cross-Stratum Bias

A key contribution of SAN is the provable elimination of deterministic stratum offsets in advantage calculation. Considering the global advantage estimator $A_G(\tau) = R(\tau) - R_{\text{global}}$, it can be decomposed as:

$$A_G(\tau) = A_{\text{SAN}}(\tau) + \left(\hat{\mu}_k(x) - R_{\text{global}}\right)$$

where $(\hat{\mu}_k(x) - R_{\text{global}})$ is the cross-stratum bias introduced by the global baseline comparison. By using $\hat{\mu}_k(x)$ for centering, SAN ensures that trajectory rewards are only compared within homogeneous groups, removing systematic bias and yielding what the paper terms a "pure and scale-stable learning signal" (Zhu et al., 7 Oct 2025).
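
For a concrete sense of the offset, suppose (numbers invented for illustration) that stratum $k$ has mean reward $\hat{\mu}_k(x) = 0.8$ while the global baseline is $R_{\text{global}} = 0.5$:

$$A_G(\tau) = \underbrace{\bigl(R(\tau) - 0.8\bigr)}_{\text{within-stratum credit}} + \underbrace{(0.8 - 0.5)}_{\text{cross-stratum bias} \,=\, +0.3}$$

Every trajectory in this stratum receives the same $+0.3$ offset under the global baseline, regardless of how it compares to its peers; SAN removes exactly this term.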

4. Comparison with Standard Policy Gradient Normalization

Standard global policy gradient normalization (e.g., REINFORCE with baseline, normalized advantage actor-critic) treats all trajectories identically, computing normalization statistics across the entire batch. This leads to credit assignment distortions when the statistics pool over heterogeneously structured trajectories. In settings with large variations—typically observed in LLM search, tool invocation environments, or non-trivial exploration spaces—such methods increase variance and introduce training instability.

In contrast, SAN's per-stratum normalization ensures that each policy update is conditionally unbiased and uniformly scaled within each homogeneous group. The authors further show that global unbiasedness and unit variance are preserved when aggregating across all strata, matching the guarantees of standard normalization but avoiding its drawbacks under structural heterogeneity.

SAN has also been extended to include linear blending with the global estimator when strata are sparsely populated, further stabilizing updates under finite-sample constraints (Zhu et al., 7 Oct 2025).
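
The exact blending schedule is not reproduced here, so the sketch below should be read as one plausible instantiation: a weight `lam` that trusts the stratum-local estimate more as the stratum grows, interpolating between the SAN and globally normalized advantages. The saturation point `n_ref` is an assumed hyperparameter, not a value from the paper.

```python
def blended_advantage(r, mu_k, sigma_k, mu_g, sigma_g, n_k, n_ref=8, eps=1e-8):
    """Linearly blend the stratum-local (SAN) and global normalized advantages.

    r              : reward R(tau) of the trajectory
    mu_k, sigma_k  : empirical mean / std of its stratum
    mu_g, sigma_g  : global empirical mean / std over the batch
    n_k            : number of trajectories in the stratum
    The weighting rule (local weight grows with n_k, saturating at n_ref)
    is an assumed heuristic, not the schedule from the paper.
    """
    a_local = (r - mu_k) / (sigma_k + eps)    # SAN estimate
    a_global = (r - mu_g) / (sigma_g + eps)   # global estimate
    lam = min(1.0, n_k / n_ref)               # more weight on local stats for larger strata
    return lam * a_local + (1.0 - lam) * a_global
```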

5. Empirical Evaluation and Effect on RL Dynamics

Comprehensive experiments have been conducted on seven question answering benchmarks encompassing both single-hop and multi-hop search-enhanced QA tasks. The results demonstrate that Stratified GRPO, powered by SAN, consistently and substantially outperforms standard GRPO in training reward, stability, and policy effectiveness.

  • Average training reward improves by up to 11.3 points over the baseline.
  • Multi-hop benchmarks exhibit up to 14.5 point gains in relative reward.
  • Training curves reveal smoother convergence, higher reward, and robust search policy learning (e.g., agents perform multi-hop search when previous approaches stagnate at one-hop).

Observed effects on learning dynamics include improved exploration, more effective credit assignment, and greater adaptation to complex task structures.

6. Practical Implications and Deployment Considerations

The adoption of SAN provides several practical benefits:

  • More stable policy optimization, especially in scenarios with significant trajectory structure variance.
  • Enhanced ability for search-augmented LLM agents to learn complex, multi-step reasoning strategies where performance would previously be stunted by biased, noisy learning signals.
  • Reduction in the need for aggressive tuning or regularization to counteract instability, as SAN provides intrinsically better credit assignment.

For practitioners, the method is straightforward to implement, requiring only partitioning of trajectories into meaningful strata and standard computation of empirical statistics. Optimization routines remain otherwise unchanged, and SAN is compatible with existing actor-critic policy-gradient frameworks.
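
As an illustration of that compatibility, the minimal PyTorch-style sketch below plugs SAN advantages into a REINFORCE-style surrogate loss; it deliberately omits the ratio clipping, KL control, and other components of the full Stratified GRPO objective, and the tensor shapes and helper names are assumptions for the example.

```python
import torch

def san_policy_loss(seq_logprobs: torch.Tensor, san_adv: torch.Tensor) -> torch.Tensor:
    """REINFORCE-style surrogate loss weighted by stratum-normalized advantages.

    seq_logprobs : (batch,) summed token log-probabilities of each sampled
                   trajectory under the current policy (requires grad)
    san_adv      : (batch,) SAN advantages, treated as constants
    """
    return -(san_adv.detach() * seq_logprobs).mean()

# Hypothetical usage inside a training step (reusing the earlier sketch's names):
# adv = torch.tensor(san_advantages(batch, stratum_key, reward_fn))
# loss = san_policy_loss(seq_logprobs, adv)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```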

To mitigate finite-sample variance when some strata are small, the paper advocates blending SAN with the global estimator, serving as a robustness measure.

7. Extensions and Future Research Directions

The paper identifies several promising avenues for further exploration:

  • Integrating SAN-style stratified normalization within actor-critic architectures such as PPO, particularly where value function approximation may suffer from analogous bias.
  • Developing dynamic, data-driven stratum assignment mechanisms to automatically partition trajectories during training based on evolving structural properties.
  • Refining blending strategies for robust estimation in low-data regimes.
  • Generalizing SAN principles to other RL domains involving tool use, retrieval, planning, or reasoning with LLM agents, potentially unifying the stratified approach across various forms of agent-environment heterogeneity.

A plausible implication is that stratification may further benefit multi-agent RL and hierarchical decision processes, where trajectory variance impedes the effectiveness of global statistics-based normalization.


In summary, Stratified Advantage Normalization (SAN) constitutes a rigorous solution to the problem of cross-stratum bias in RL for LLM-based agents and similar structurally heterogeneous environments. By partitioning trajectories and confining normalization to homogeneous groups, SAN ensures unbiased, stable, and effective policy optimization. This method is empirically validated as critical for advanced search-augmented agents and is extensible to a broad range of RL scenarios where trajectory diversity is intrinsic to the problem setting (Zhu et al., 7 Oct 2025).
