
SeedLM Overview: Compression & Multimodal Modeling

Updated 19 December 2025
  • SeedLM is a dual-paradigm framework that compresses LLM weights using pseudo-random seeds and discretizes images into causal semantic tokens.
  • It employs a data-free, blockwise weight reconstruction method with a lightweight LFSR and quantized coefficients, achieving 3–4 bits per weight with minimal accuracy loss.
  • The approach unifies vision-language autoregression under a single Transformer model, enabling efficient on-chip inference and scalable modality-agnostic deployment.

SeedLM designates two distinct paradigms related to LLMs: (1) a post-training weight compression technique that encodes model weights as seeds for pseudo-random generators, and (2) a vision-language approach in which image data are discretized into causal semantic tokens, allowing unified text-image modeling under a Transformer architecture. The shared principle across both is the use of discrete seeds (code indices or pseudo-random generator initializations), either for weight reconstruction or multimodal content representation, enabling efficient, scalable, and modality-agnostic LLM deployment (Shafipour et al., 2024, Ge et al., 2023).

1. Weight Compression via Seeds and Pseudo-Random Generators

The SeedLM compression algorithm enables encoding LLM weights with minimal accuracy loss using only a tiny seed and quantized coefficients per block. For each block of weights, the process is as follows (Shafipour et al., 2024):

  • Partition each weight matrix $W$ into blocks $w \in \mathbb{R}^C$.
  • For each block, select a seed $s \in \{1, \dots, 2^K - 1\}$ for a $K$-bit Linear Feedback Shift Register (LFSR).
  • The LFSR, initialized with $s$, produces a deterministic integer matrix $V(s) \in \mathbb{N}^{C \times P}$, which is normalized to $U(s)$ so that entries lie in $[-1, 1]$.
  • Reconstruct each weight block as $\hat{w} = U(s)\,t$, where $t \in \mathbb{R}^P$ is a small quantized coefficient vector.
  • Store only $s$ (the seed), a shared exponent $e$ (4 bits), and $P$ 4-bit two's-complement coefficients $q_1, \dots, q_P$ per block.

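As a concrete sketch of the generation step above, the following Python reproduces a $K$-bit Fibonacci LFSR and the normalized basis matrix $U(s)$. The tap polynomial, the bit-packing of LFSR output into $K$-bit integers, and the example seed are illustrative assumptions, not choices fixed by the paper.

```python
import numpy as np

def lfsr_bits(seed: int, nbits: int, K: int = 16, taps=(16, 15, 13, 4)):
    """Fibonacci LFSR over K flip-flops. The tap set is an assumption
    (a common maximal-length 16-bit polynomial); yields nbits output bits."""
    state = seed & ((1 << K) - 1)
    assert state != 0, "the all-zero state is excluded"
    out = []
    for _ in range(nbits):
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        out.append(state & 1)
        state = (state >> 1) | (fb << (K - 1))
    return out

def basis_matrix(seed: int, C: int, P: int, K: int = 16) -> np.ndarray:
    """Draw C*P K-bit integers from the LFSR and normalize to U(s) in [-1, 1]."""
    bits = lfsr_bits(seed, C * P * K, K)
    vals = [
        int("".join(map(str, bits[i * K:(i + 1) * K])), 2)
        for i in range(C * P)
    ]
    V = np.array(vals, dtype=np.float64).reshape(C, P)
    # map {0, ..., 2^K - 1} linearly onto [-1, 1]
    return 2.0 * V / (2 ** K - 1) - 1.0

U = basis_matrix(seed=40503, C=8, P=3)   # hypothetical seed value
t = np.array([0.5, -0.25, 0.125])        # toy coefficient vector
w_hat = U @ t                            # reconstructed block, shape (8,)
```

Because the LFSR is deterministic, re-running `basis_matrix` with the same seed regenerates $U(s)$ exactly, which is what lets inference discard the basis after use.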
Block selection, seed search, and coefficient quantization are performed offline. For $K = 16$, exhaustive search is practical since there are only $2^{16} - 1$ possible seeds per block. Each block's optimal $(\hat{s}, \hat{t})$ minimizes $\|w - U(s)t\|_2^2$, with coefficients obtained via the Moore–Penrose pseudoinverse and then quantized.

The method achieves 3–4 bits per weight: for $M = 4$ bits, use $C = 8$, $P = 3$, $K = 16$; for $M = 3$, use $C = 12$, $P = 4$, $K = 16$. Importantly, the strategy is data-free: no calibration data or activation statistics are required, in contrast to techniques such as AWQ, GPTQ, or OmniQuant (Shafipour et al., 2024).
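
The offline search can be sketched end to end. This toy Python version searches only a small slice of the seed space (the full method exhausts all $2^{16}-1$ seeds), solves for $t$ with the pseudoinverse, and applies 4-bit two's-complement coefficients under a shared power-of-two exponent. A seeded NumPy RNG stands in for the LFSR, and the exponent-selection rule is an assumption.

```python
import numpy as np

C, P, K = 8, 3, 16          # block size, basis vectors, seed bits (4-bit config)
SEEDS = range(1, 2001)      # demo: search a slice of the 2^K - 1 seeds

def basis(s: int) -> np.ndarray:
    # Stand-in pseudo-random generator: a seeded RNG plays the role of the
    # K-bit LFSR; entries are normalized to [-1, 1] as in the paper.
    rng = np.random.default_rng(s)
    return rng.uniform(-1.0, 1.0, size=(C, P))

def quantize4(t: np.ndarray):
    """4-bit two's-complement coefficients, one shared power-of-two exponent."""
    e = int(np.ceil(np.log2(np.max(np.abs(t)) + 1e-12))) - 3  # q in [-8, 7]
    q = np.clip(np.round(t / 2.0 ** e), -8, 7).astype(int)
    return q, e

def compress_block(w: np.ndarray):
    best = None
    for s in SEEDS:
        U = basis(s)
        t = np.linalg.pinv(U) @ w              # least-squares coefficients
        q, e = quantize4(t)
        w_hat = U @ (q * 2.0 ** e)
        err = float(np.sum((w - w_hat) ** 2))
        if best is None or err < best[0]:
            best = (err, s, q, e)
    return best

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=C)             # toy weight block
err, s, q, e = compress_block(w)               # store only (s, q, e)
```

The stored payload per block is exactly the seed, the exponent, and the $P$ quantized coefficients, matching the bit accounting in the text.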

2. On-Chip Inference and Memory-Bound Acceleration

During inference, SeedLM reconstructs each weight block on the fly via the lightweight LFSR (requiring just $K$ flip-flops and XOR gates), streaming out $P$ basis vectors, scaling them by $q_j \cdot 2^e$, and summing. A 4-bit compressed model fits four times as many weights in a DRAM burst (128 vs. 32 per 64 B), dramatically reducing high-latency weight fetches from external memory. Idle DSP cycles, typically left unused because 16-bit matmuls are memory-bound, are instead used for basis generation and accumulation, adding minimal overhead.
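
The burst arithmetic above is easy to verify directly:

```python
# How many weights one 64-byte DRAM burst delivers at each storage width,
# illustrating the 4x reduction in weight-fetch traffic.
BURST_BYTES = 64

def weights_per_burst(bits_per_weight: int) -> int:
    return (BURST_BYTES * 8) // bits_per_weight

fp16 = weights_per_burst(16)   # 32 weights per burst
w4 = weights_per_burst(4)      # 128 weights per burst
fetch_reduction = w4 // fp16   # 4x fewer bursts for the same weight count
```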

On hardware such as an FPGA (AMD Virtex-7), SeedLM's 4-bit model achieves nearly $4\times$ speedup in measured matrix–matrix multiplication throughput over an FP16 baseline (e.g., a $2048 \times 2048$ matmul takes $136{,}559$ cycles in FP16 vs. $34{,}331$ cycles with SeedLM 4-bit, $\approx 4\times$), with resource usage well below capacity constraints (Shafipour et al., 2024).

3. Empirical Evaluation and Comparative Performance

SeedLM is evaluated across Llama 2 (7B, 13B, 70B) and Llama 3 (8B, 70B) on zero-shot tasks (ARC-Easy, ARC-Challenge, HellaSwag, WinoGrande, BoolQ) and perplexity benchmarks (WikiText-2, sequence length $2048$):

  • 4-bit SeedLM retains $97$–$99\%$ of FP16 accuracy (e.g., Llama 3 70B average: FP16 $79.51$, SeedLM $78.06$).
  • Competing 4-bit approaches (AWQ, OmniQuant, QuIP#) lose $4$–$10$ points or cannot run (out of memory) on large models.
  • 3-bit SeedLM outperforms or matches calibration-based 3-bit techniques (e.g., Llama 2 70B: SeedLM $73.83$, AWQ $73.91$, OmniQuant $59.72$).
  • Perplexity: Llama 3 70B FP16 $2.9$, SeedLM 4-bit $3.8$, AWQ $4.7$, OmniQuant $\infty$ (OOM).
  • Resource cost for on-chip LFSR and conversion logic is modest compared to overall hardware utilization (Shafipour et al., 2024).

These results confirm that SeedLM enables drastic model compression with minimal accuracy degradation and no need for calibration data.

4. Multimodal Seed Tokenization and the Vision-Language Paradigm

The SEED tokenizer, introduced in the context of SEED-LLaMA, discretizes images into a sequence of 1D causal, high-level semantic tokens ("SEED tokens"), which can be incorporated into LLMs' token streams identically to text. The tokenizer is VQ-based and optimized both for semantic alignment (contrastive InfoNCE loss with paired text) and accurate image reconstruction. 1D causal dependency is enforced using a Causal Q-Former, ensuring compatibility with left-to-right autoregressive modeling. The final distribution factorizes as

$$P(z_1, z_2, \dots, z_N) = \prod_{i=1}^{N} P(z_i \mid z_{<i})$$

SEED tokens are allocated new vocabulary entries (e.g., $K = 8192$ tokens) appended to LLaMA's vocabulary. Pretraining and instruction tuning use interleaved text and SEED tokens ($u_i$), optimizing the standard next-token loss. The resulting models (e.g., SEED-LLaMA-8B/14B) attain high image captioning scores and compositional vision-language abilities on multiple benchmarks (Ge et al., 2023).
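
The vocabulary expansion and chain-rule factorization can be illustrated with a toy stand-in model. The vocabulary sizes below are illustrative, and uniform logits replace a trained Transformer; the point is only that text and SEED tokens share one id space and one next-token objective.

```python
import numpy as np

TEXT_VOCAB = 32000          # base text vocabulary size (illustrative)
NUM_SEED = 8192             # SEED codebook entries appended after the text ids
VOCAB = TEXT_VOCAB + NUM_SEED

def seed_token(code: int) -> int:
    """Map a SEED codebook index to its slot in the expanded vocabulary."""
    return TEXT_VOCAB + code

# Interleaved stream: text tokens and SEED tokens in one id space.
stream = [17, 942, seed_token(5), seed_token(4090), 3]

def sequence_logprob(tokens, logits_fn):
    """Chain-rule factorization: log P(z) = sum_i log P(z_i | z_<i)."""
    total = 0.0
    for i, tok in enumerate(tokens):
        logits = logits_fn(tokens[:i])                   # condition on prefix
        logp = logits - np.log(np.sum(np.exp(logits)))   # log-softmax
        total += logp[tok]
    return total

# Toy stand-in model: uniform logits over the expanded vocabulary.
uniform = lambda prefix: np.zeros(VOCAB)
lp = sequence_logprob(stream, uniform)   # = 5 * log(1 / VOCAB)
```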

5. Unified Modality-Agnostic Language Modeling

SEED-LLaMA demonstrates that, by design, language modeling can be extended to unified vision-language autoregression under a single next-token prediction objective—without architectural modifications other than vocabulary expansion. This paradigm enables a single model to process natural language and images as interchangeable atomic units, yielding emergent capabilities such as multi-turn in-context multimodal generation, style transfer, image blending, and multimodal compositionality.

Implication: The SEED-LLaMA approach foreshadows a generalized SeedLM framework, where LLMs achieve modality-agnostic representation and reasoning by treating all inputs and outputs as discrete seeds, embodying both model parameters and content streams (Ge et al., 2023).

6. Advantages, Flexibility, and Hardware Suitability

SeedLM and SEED-LLaMA approaches share several advantages:

  • Data-free compression (SeedLM): No requirement for calibration/validation sets or activation statistics, enabling fully deterministic, offline weight encoding.
  • Generalizability: Compressed models and multimodal capabilities persist across model sizes (7B–70B), tasks (zero-shot, language modeling), and input modalities.
  • Hardware-friendliness: LFSRs for blockwise pseudo-random generation are natively supported in silicon; quantized coefficient computation requires only shifts and small arithmetic.
  • Flexible bit allocation: The block configuration $(C, P, K)$ can be selected to meet any target bit budget $M$, according to $M = (K + 4 + 4P)/C$.
  • Scalability: On-chip compute vs. DRAM bandwidth trade-off is explicit and tunable, allowing inference speedup to approach theoretical maxima on large matrix products (Shafipour et al., 2024).
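
The bit-budget formula above can be used to enumerate valid block configurations. This sketch recovers, among others, the $(C, P)$ pairs cited earlier for $K = 16$; the search ranges are arbitrary.

```python
# Enumerate block configurations (C, P) that hit a target bit budget M exactly,
# using M = (K + 4 + 4P) / C with K = 16 seed bits, a 4-bit shared exponent,
# and 4-bit coefficients.
K = 16

def configs(M, c_range=range(1, 33), p_range=range(1, 9)):
    return [(C, P) for C in c_range for P in p_range
            if (K + 4 + 4 * P) == M * C]

four_bit = configs(4)   # includes (8, 3), the configuration cited in the text
three_bit = configs(3)  # includes (12, 4)
```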

By encoding both parameters and multimodal content as discrete seeds, these approaches enable highly compressed, efficient, and versatile LLM deployments—including low-latency, high-throughput settings such as FPGA- or ASIC-based inference.


Key References:

  • Shafipour et al., 2024. "SeedLM: Compressing LLM Weights into Seeds of Pseudo-Random Generators."
  • Ge et al., 2023. "Making LLaMA SEE and Draw with SEED Tokenizer."
