
Seed-Coder Architecture Overview

Updated 23 January 2026
  • Seed-Coder Architecture is a large-scale language modeling framework that combines autoregressive pretraining with block diffusion continual pretraining to model code efficiently.
  • It shifts from next-token cross-entropy to block-wise masked diffusion learning, enabling parallel generation and robust intra-block bidirectional reasoning.
  • The architecture employs a tailored warmup schedule and curriculum learning, yielding improved performance on code benchmarks and straightforward adaptation of pretrained AR weights.

The Seed-Coder architecture is a large-scale language modeling framework designed for high-throughput, diffusion-based continual pretraining and supervised fine-tuning, particularly in code modeling and generation. It provides a unified pipeline for transitioning from autoregressive (AR) next-token modeling to block-wise masked diffusion learning, allowing for parallel generation and robust intra-block bidirectional reasoning. This architecture is reused in the Stable-DiffCoder model, which augments Seed-Coder with diffusion continual pretraining (CPT), optimized warmup, and a block-wise clipped noise schedule to enable stable and efficient adaptation from AR to block-diffusion objectives (Fan et al., 22 Jan 2026).

1. Autoregressive Pretraining and the Seed-Coder Baseline

Seed-Coder initiates model training according to the AR paradigm, employing a standard next-token cross-entropy objective over mixed-domain data, including extensive code datasets. The AR phase ends with a pre-annealing checkpoint, which encodes sequential dependencies and foundational token representations. This checkpoint serves as a high-quality initialization for subsequent block diffusion continual pretraining (CPT), facilitating knowledge preservation and efficient reuse (Fan et al., 22 Jan 2026).

The AR modeling phase is critical for aligning the model’s weight space to high-data regimes and consistent context modeling prior to diffusion adaptation, yielding stable gradients suitable for resuming training on code-specific data.

2. Block Diffusion Continual Pretraining (CPT) Formulation

In CPT, the Seed-Coder architecture transitions from AR next-token objectives to block-wise masked diffusion learning. The block diffusion process operates on sequences $x_{1:N}^0$, introducing partial corruption confined to a contiguous block $\mathcal{B}$ of size $B$ at each update.

Mathematical Process:

  • Forward Corruption: For each data batch, a global noise level $t \sim \mathrm{Uniform}[0,1]$ is sampled. The per-block mask rate is computed via a clipped schedule:

$$u_{\mathrm{blk}}(t) = \min\bigl(1, \max\bigl(u(t), 1/B\bigr)\bigr)$$

where $u(t)$ is typically linear (e.g., $u(t) = 1 - t$). Within $\mathcal{B}$, each position $i$ is masked independently with probability $u_{\mathrm{blk}}(t)$. If no token is masked, one is force-masked.

  • Reverse (Denoising) and Loss: The block diffusion DLM is optimized by minimizing the weighted cross-entropy over masked positions:

$$\mathcal{L}_{\mathrm{DLLM}}(\theta) = -\,\mathbb{E}_{x^0 \sim p_{\mathrm{data}},\ t \sim U[0,1],\ x^t \sim q(x^t \mid x^0)} \left[ w(t) \sum_{i=1}^{N} \mathbb{1}[x_i^t = \mathrm{MASK}] \log p_\theta(x_i^0 \mid x^t) \right]$$

where $w(t) = 1/u_{\mathrm{blk}}(t)$.

Clipping the mask rate guarantees nontrivial supervision per block (at least one masked token per block in expectation) and bounds $w(t) \leq B$, preventing degenerate learning dynamics.
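The corruption step above can be sketched in plain Python (a minimal illustration; `corrupt_block`, `clipped_mask_rate`, and the string `MASK` sentinel are hypothetical names, not from the paper):

```python
import random

MASK = "<MASK>"  # sentinel; real implementations use a mask token id

def clipped_mask_rate(t, block_size, u=lambda t: 1.0 - t):
    """Block-wise clipped schedule: u_blk(t) = min(1, max(u(t), 1/B))."""
    return min(1.0, max(u(t), 1.0 / block_size))

def corrupt_block(tokens, block_start, block_size, t, rng=random):
    """Mask positions of one contiguous block with probability u_blk(t).

    If sampling masks nothing, one position is force-masked so every
    block contributes at least one supervised target. Returns the
    corrupted sequence, the masked indices, and the loss weight
    w(t) = 1 / u_blk(t), which the clipping bounds above by B."""
    u_blk = clipped_mask_rate(t, block_size)
    corrupted = list(tokens)
    masked = [i for i in range(block_start, block_start + block_size)
              if rng.random() < u_blk]
    if not masked:  # guarantee nontrivial supervision
        masked = [rng.randrange(block_start, block_start + block_size)]
    for i in masked:
        corrupted[i] = MASK
    return corrupted, masked, 1.0 / u_blk
```

Note how the clipping shows up in both guarantees: with $B = 4$ and $t$ near 1, the mask rate never falls below $1/4$, so the weight never exceeds 4.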

3. Warmup and Block-Wise Noise Schedules

The Seed-Coder architecture implements a tailored warmup schedule at the start of CPT to mitigate the large initial losses and gradient spikes arising from the abrupt shift in objective and attention mask (causal $\rightarrow$ bidirectional), as well as from increased corruption difficulty.

Warmup Procedure:

  • Over $S_w$ steps (typically thousands), cap $u_{\max}(0) = u_{\mathrm{init}}$ (e.g., $10^{-3}$), then increase $u_{\max}(s)$ linearly to 1. At step $s$:

$$u_{\max}(s) = u_{\mathrm{init}} + (1 - u_{\mathrm{init}}) \frac{s}{S_w}$$

  • Sample $t \sim \mathrm{Uniform}(0, u_{\max}(s))$ and apply $u_{\mathrm{blk}}(t)$; the weighting $w(t)$ is dropped from the loss.

During warmup, the model optimizes:

$$\mathcal{L}_{\mathrm{warmup}}(\theta) = -\,\mathbb{E}_{x^0,\ t \sim U[0, u_{\max}],\ x^t} \left[ \sum_{i=1}^{N} \mathbb{1}[x_i^t = \mathrm{MASK}] \log p_\theta(x_i^0 \mid x^t) \right]$$

This approach smooths the transition from AR to DLLM, stabilizing gradient norms and yielding a characteristic $\sqrt{\cdot}$-shaped loss curve.
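The warmup cap can be sketched as follows (hypothetical helper names; the paper specifies only the formula):

```python
import random

def u_max_warmup(step, warmup_steps, u_init=1e-3):
    """Linear cap on the sampled noise level during warmup:
    u_max(s) = u_init + (1 - u_init) * s / S_w, held at 1 afterwards."""
    if step >= warmup_steps:
        return 1.0
    return u_init + (1.0 - u_init) * step / warmup_steps

def sample_noise_level(step, warmup_steps, rng=random):
    """Draw t ~ Uniform(0, u_max(s)); during warmup the loss weight
    w(t) is dropped (treated as 1)."""
    return rng.uniform(0.0, u_max_warmup(step, warmup_steps))
```

Early in warmup, almost all sampled noise levels are tiny, so the model sees near-AR corruption and the objective shift stays gradual.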

4. Curriculum, Block Size, and Attention Masking

Central to Seed-Coder’s diffusive adaptation is the scheduling of block size and the context-causal attention mask. CPT proceeds from AR (block size $B = 1$) and incrementally increases $B$ following a growth curriculum, balancing context alignment and augmentation signal. Empirical ablations demonstrate the strongest knowledge compression under small-block CPT (e.g., $B = 4$), while large blocks dilute context informativeness (Fan et al., 22 Jan 2026).

The block-causal attention mechanism imposes strict causality on committed context tokens, allowing bidirectional attention only within each active block, during both training and inference. This preserves left-to-right generation order while adding intra-block refinement capabilities.
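The attention pattern reduces to a simple predicate on block indices (an illustrative sketch, not the paper's implementation):

```python
def block_causal_mask(seq_len, block_size):
    """allowed[i][j] is True iff query position i may attend to key position j:
    j must lie in the same block as i (intra-block bidirectional) or in an
    earlier block (strict causality over committed context)."""
    return [[(j // block_size) <= (i // block_size) for j in range(seq_len)]
            for i in range(seq_len)]
```

With `block_size=1` this degenerates to the standard causal mask, which is one reason AR pretraining, CPT, and inference can share identical masking logic.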

Block Diffusion CPT Table

| Stage | Objective | Attention Structure |
| --- | --- | --- |
| AR Pretraining | Next-token cross-entropy | Strict causal |
| Block Diffusion CPT | Weighted masked denoising (block-wise) | Context-causal + intra-block bidirectional |
| Fine-Tuning | Next-token cross-entropy (instructions/tasks) | Standard causal |

5. Integration with Supervised Fine-Tuning and Downstream Impact

After block diffusion CPT, the resulting Seed-Coder weights $\theta_{\mathrm{CPT}}$ serve as the initialization for supervised fine-tuning (SFT). The SFT phase reuses the original Seed-Coder instruction and task datasets, packing examples and augmenting outputs for variable-length prediction. SFT employs a standard next-token objective, consolidating code knowledge acquired during CPT.

The CPT procedure introduces strong multi-token block prediction and stochastic augmentation, conferring any-order modeling capabilities that benefit infilling, editing, and structured code reasoning tasks. Empirical benchmarks demonstrate consistent improvements on HumanEval(+), MBPP(+), CRUXEval, MultiPL-E, and code-editing tasks, with performance surpassing autoregressive baselines of matching size and various larger DLMs (Fan et al., 22 Jan 2026).

6. Ablations, Practical Recommendations, and Model Efficiency

Studies on Seed-Coder within Stable-DiffCoder show that:

  • Block Size: Small blocks ($B = 1, 2$) maintain high AR-like context and efficient knowledge transfer; large blocks ($B = 32$) reduce context informativeness and slow learning.
  • Curriculum: An AR phase followed by small-block CPT yields the best trade-off between efficiency and modeling quality.
  • Noise Schedule: Linear schedules clipped per block stabilize supervision.
  • Warmup: A short warmup phase smooths the AR $\rightarrow$ DLLM transition and matches AR CPT stability.

For practitioners, the Seed-Coder CPT pipeline is compute-efficient (amortized KV-cache construction, maximal token utilization), maintains strict train-inference consistency (identical attention-mask logic at all stages), and allows seamless reuse of AR weights (no architecture changes beyond the diffusion/output heads and masking logic).
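The inference pattern behind the KV-cache amortization can be sketched as follows (a hypothetical sampler; the committed prefix is what a cache would amortize, built once per block rather than once per token):

```python
MASK = "<MASK>"  # sentinel mask token for illustration

def generate_blockwise(denoise_fn, prompt, num_blocks, block_size, steps_per_block):
    """Block-wise sampler sketch: each new block starts fully masked and is
    refined in parallel by denoise_fn (which sees the committed prefix
    causally and the active block bidirectionally), then committed,
    preserving left-to-right order at the block level."""
    seq = list(prompt)
    for _ in range(num_blocks):
        block = [MASK] * block_size
        for _ in range(steps_per_block):
            block = denoise_fn(seq, block)  # fill some/all masked positions
        seq.extend(block)  # commit the block; its KV entries are now reusable
    return seq
```

A toy `denoise_fn` that fills every mask in one step appends `num_blocks * block_size` tokens to the prompt, illustrating how parallel intra-block refinement coexists with sequential block commitment.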

A plausible implication is that the Seed-Coder architecture, coupled with block diffusion CPT and principled scheduling, offers a robust and scalable framework for long-context modeling and non-sequential generation in code LLMs, producing models that are competitive and sometimes superior to both AR-trained and diffusion-from-scratch counterparts (Tian et al., 7 Dec 2025, Fan et al., 22 Jan 2026).
