BitNet b1.58: Ternary Quantization for LLMs

Updated 7 January 2026
  • BitNet b1.58 is a quantization-aware training approach that discretizes each weight to {-1, 0, +1}, representing ~1.58 bits per parameter.
  • It employs scaling techniques, shadow weights, and a straight-through estimator to achieve near full-precision performance while reducing memory and energy consumption.
  • Empirical results demonstrate state-of-the-art efficiency and accuracy on LLM benchmarks, enabling hardware-tailored inference and effective deployment.

BitNet b1.58 is a quantization-aware training methodology and model architecture for LLMs and other neural networks, in which every weight is discretized to one of three values: $\{-1, 0, +1\}$. Each parameter thus carries at most $\log_2 3 \approx 1.58$ bits of information, defining a new Pareto frontier in efficiency for both memory and computation while preserving accuracy on par with full-precision (16/32-bit) models for sufficiently large model sizes. BitNet b1.58 models, including variants up to 2 billion parameters, have demonstrated state-of-the-art tradeoffs in perplexity, downstream benchmark performance, inference speed, and energy usage, and the quantization regime has become a substrate for hardware-tailored inference and training pipelines, distillation methods, and fault-tolerant designs.

1. Quantization Principle and Mathematical Formalism

BitNet b1.58 employs a strict ternary quantization of weights in linear layers. Given a real-valued weight matrix $W \in \mathbb{R}^{n \times m}$, a scaling parameter $\gamma$ is computed, typically as the mean of the absolute values of $W$ (alternatively, the median). Quantization proceeds as:

$$
\gamma = \frac{1}{nm} \sum_{i,j} |W_{ij}|, \qquad
W^{\text{(tern)}}_{ij} = Q(W_{ij}; \gamma) =
\begin{cases}
+1, & W_{ij} > +\gamma \\
0,  & |W_{ij}| \leq \gamma \\
-1, & W_{ij} < -\gamma
\end{cases}
$$

This discretization restricts each weight to one of three values, corresponding to at most $\log_2 3 \approx 1.585$ bits of information per parameter (Ma et al., 2024, Nielsen et al., 2024, Nielsen et al., 2024). The quantization mapping is performed during every forward pass in quantization-aware training (QAT), with "shadow" 16-bit weights maintained for gradient updates.
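
A minimal PyTorch-style sketch of this mapping is shown below; the function name and return convention are illustrative, not taken from the reference implementation:

```python
import torch

def ternary_quantize(w: torch.Tensor):
    """Map a real-valued weight tensor to {-1, 0, +1} using the
    mean-absolute-value threshold gamma described above.
    A sketch for illustration, not the reference kernel."""
    gamma = w.abs().mean()                 # scaling / threshold parameter
    w_tern = torch.zeros_like(w)           # weights with |w| <= gamma stay 0
    w_tern[w > gamma] = 1.0                # strong positive weights -> +1
    w_tern[w < -gamma] = -1.0              # strong negative weights -> -1
    return w_tern, gamma
```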

Activations are similarly quantized, typically to 8 bits per token via absolute-max scaling, enabling the forward computation to remain within integer domains for both weights and activations (Ma et al., 2024, Nielsen et al., 2024, Ma et al., 16 Apr 2025).

A straight-through estimator (STE) passes gradients through the quantizer in backpropagation, enabling the model to be trained end-to-end in low precision and to leverage the implicit regularization effect from ternary rounding (Nielsen et al., 2024).
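
Putting these pieces together, a hedged sketch of a BitLinear-style layer might look as follows; the class name, the per-token int8 absmax scaling, and the `detach()`-based STE are assumptions for illustration rather than the published implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinearSketch(nn.Linear):
    """Illustrative BitLinear-style layer: ternary weights, int8-style
    activations, and a straight-through estimator (STE) for gradients.
    A sketch of the ideas described above, not the published code."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Weight quantization: the full-precision "shadow" weights live in
        # self.weight; the forward pass sees their ternary projection.
        gamma = self.weight.abs().mean()
        w_tern = torch.zeros_like(self.weight)
        w_tern[self.weight > gamma] = 1.0
        w_tern[self.weight < -gamma] = -1.0
        # STE: forward uses w_tern, backward treats the quantizer as identity.
        w_q = self.weight + (w_tern - self.weight).detach()

        # Activation quantization: per-token absmax scaling to an int8 grid.
        scale = 127.0 / x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-5)
        x_q = x + ((x * scale).round().clamp(-128, 127) / scale - x).detach()

        return F.linear(x_q, w_q, self.bias)
```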

2. Model Architecture, Training Workflow, and Variants

BitNet b1.58 has been instantiated in a variety of architectures:

  • Decoder-only Transformers: Large-scale LLMs (e.g., BitNet b1.58 2B4T with 2B parameters), following LLaMA-like backbones (Ma et al., 16 Apr 2025).
  • Encoder-only and Encoder-decoder Transformers: Including BERT- and T5-like models (Nielsen et al., 2024).
  • Feedforward models and GNNs: MLPs and graph convolutional nets have been quantized using the same ternary regime (Nielsen et al., 2024).
  • Small models: Median-based scaling improves quantization robustness for models in the 100K–48M parameter range (Nielsen et al., 2024).

The canonical quantization-aware training pipeline is:

  1. Each linear or projection layer is replaced by a "BitLinear" module, which computes a scale $w_{\text{scale}}$ (the mean or median of the absolute weights) and quantizes each parameter.
  2. Activations are quantized per forward pass (absmax scaling to int8).
  3. Standard backward pass with STE.
  4. Shadow weights (16-bit or bf16) are retained throughout training, but only quantized weights are kept for inference.
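
A compact sketch of this workflow, reusing the hypothetical `ternary_quantize` and `BitLinearSketch` helpers from Section 1, is given below; the module-swapping and export helpers are illustrative assumptions, not part of any released pipeline:

```python
import torch
import torch.nn as nn

def swap_linears(model: nn.Module) -> nn.Module:
    """Step 1: replace every nn.Linear with a BitLinearSketch module,
    carrying over the existing 16-bit 'shadow' weights."""
    for name, child in list(model.named_children()):
        if isinstance(child, nn.Linear) and not isinstance(child, BitLinearSketch):
            bit = BitLinearSketch(child.in_features, child.out_features,
                                  bias=child.bias is not None)
            bit.load_state_dict(child.state_dict())
            setattr(model, name, bit)
        else:
            swap_linears(child)
    return model

# Steps 2-3 are an ordinary training loop: the forward pass quantizes weights
# and activations, and the STE routes gradients to the shadow weights, which
# the optimizer updates in 16-bit precision.

def export_ternary(model: nn.Module) -> dict:
    """Step 4: keep only the ternary weights (plus their scales) for inference."""
    packed = {}
    for name, module in model.named_modules():
        if isinstance(module, BitLinearSketch):
            w_tern, gamma = ternary_quantize(module.weight.data)
            packed[name] = (w_tern.to(torch.int8), float(gamma))
    return packed
```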

Training from scratch with 1.58 bits is possible and yields competitive results, but starting with several epochs of 16-bit pre-training before transitioning to ternary QAT ("16-to-1.58" strategy) further enhances downstream accuracy, nearly matching full-precision in most cases with only a 2–3 point drop in aggregate performance across standard language understanding and reasoning tasks (Nielsen et al., 17 Feb 2025).

3. Empirical Performance and Scaling Laws

Extensive benchmarking shows that BitNet b1.58 LLMs match or outperform full-precision models as size increases beyond 3B parameters (Ma et al., 2024, Ma et al., 16 Apr 2025). Key results include:

  • At 2B parameters and 4T tokens, BitNet b1.58 2B4T performs within 1–2 points of state-of-the-art full-precision LLMs on MMLU, GSM8K, HumanEval+, and other benchmarks, while reducing the memory footprint from ~2 GB to 0.4 GB and at least halving latency and energy per token (Ma et al., 16 Apr 2025).
  • On encoder-decoder and encoder-only transformers, b1.58-median outperformed or matched 16-bit models at matched parameter count; for some models, no extra width is needed, while for others, doubling the hidden size is sufficient (Nielsen et al., 2024, Nielsen et al., 2024).
  • Regularization: Ternary QAT imparts an implicit regularization effect, reducing overfitting in LMs and yielding delayed validation loss minima compared to full-precision baselines (Nielsen et al., 2024).
  • Small models require careful tuning (median scaling, higher width, hyperparameter grid), but state-of-the-art or better accuracy is achievable (Nielsen et al., 2024).

The scaling law for loss $L(N, T)$ as a function of parameter count $N$ and token count $T$ is preserved under 1.58-bit quantization; the same exponents $(\alpha, \beta)$ apply as in full-precision training, as long as $N \gtrsim 3\,\text{B}$ (Ma et al., 2024).
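
For concreteness, scaling laws of this kind are commonly written in a Chinchilla-style parametric form; the template below is an assumed generic form consistent with the statement above, not a fit reported in the cited work:

$$
L(N, T) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{T^{\beta}},
$$

with fitted constants $E$, $A$, $B$; the claim is that the exponents $(\alpha, \beta)$ obtained under 1.58-bit QAT match those of full-precision training once $N \gtrsim 3$B.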

Table: Representative Performance (BitNet b1.58 2B4T, 2B params)

| Metric | BitNet b1.58 2B4T | LLaMA 3.2 1B | Qwen 2.5 1.5B |
|---|---|---|---|
| Memory (GB) | 0.4 | 2.0 | 2.6 |
| Latency (ms) | 29 | 48 | 65 |
| Energy (J) | 0.028 | 0.258 | 0.347 |
| MMLU (%) | 53.17 | 45.58 | 60.25 |
| GSM8K (%) | 58.38 | 38.21 | 56.79 |
| HumanEval+ (%) | 38.40 | 31.10 | 50.60 |
| Average | 54.19 | 44.90 | 55.23 |

(Ma et al., 16 Apr 2025)

4. Inference, Deployment, and Hardware Co-design

BitNet b1.58 unlocks efficient inference on edge and server CPUs and serves as a foundation for hardware-specific runtimes:

  • Bitnet.cpp provides optimized ternary (1.58b) inference kernels for x86 and ARM CPUs (Wang et al., 2024, Wang et al., 17 Feb 2025). Three kernels are available:
    • I2_S: 2-bit unpack, lossless, multiplies and accumulates in FP32.
    • TL1/TL2: Element-wise lookup tables for blocks of weights (g=2,3), packing at 2.00 and 1.67 bits per weight respectively, using precomputed scaling and SIMD instructions (AVX2/NEON).
  • Speedups: Up to 6.25× (I2_S) vs. fp16 and 2.32× (TL2) vs. generic low-bit baselines on common CPUs. All kernels match full-precision outputs within 0.01 PPL and 0.1% classification accuracy (Wang et al., 17 Feb 2025).
  • Energy: 55–82% reduction in Joules per token compared to fp16 on a wide range of architectures (Wang et al., 2024).
  • Model weights are distributed in gguf format, with native runtimes (bitnet.cpp) and PyTorch extension code published for reproducibility (Ma et al., 16 Apr 2025).
  • Because the ternary quantization can be implemented using two bits per parameter (with one value unused), packing strategies and alignment with SIMD widths require careful attention for maximal memory efficiency (Wang et al., 17 Feb 2025).

The ternary regime enables the use of custom accelerators that forgo floating-point multipliers, instead relying on integer addition, sign multiplication, and direct support for skipping zero weights, vastly reducing both DRAM–SRAM bandwidth and arithmetic complexity (Ma et al., 2024, Ma et al., 16 Apr 2025).
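
As a rough illustration of both points above (two-bit packing with one unused code, and multiplier-free accumulation that skips zero weights), the NumPy sketch below packs ternary weights four to a byte and evaluates a dot product using only additions and subtractions. It is a conceptual sketch, not the bitnet.cpp I2_S or TL kernels:

```python
import numpy as np

def pack_ternary(w_tern: np.ndarray) -> np.ndarray:
    """Pack a 1-D ternary weight vector {-1, 0, +1} at 2 bits each
    (four weights per byte). Encoding: -1 -> 0b00, 0 -> 0b01, +1 -> 0b10;
    the fourth code 0b11 is unused, as noted above."""
    codes = (w_tern.astype(np.int8) + 1).astype(np.uint8)           # {-1,0,1} -> {0,1,2}
    codes = np.pad(codes, (0, -len(codes) % 4), constant_values=1)  # pad with zero-weights
    c = codes.reshape(-1, 4)
    packed = c[:, 0] | (c[:, 1] << 2) | (c[:, 2] << 4) | (c[:, 3] << 6)
    return packed.astype(np.uint8)

def unpack_ternary(packed: np.ndarray, n: int) -> np.ndarray:
    """Recover the first n ternary weights from the packed byte stream."""
    shifts = np.array([0, 2, 4, 6], dtype=np.uint8)
    codes = ((packed[:, None] >> shifts) & 0b11).reshape(-1)[:n]
    return codes.astype(np.int8) - 1

def ternary_dot(packed: np.ndarray, x: np.ndarray) -> float:
    """Dot product with ternary weights using only additions and
    subtractions; zero weights are skipped entirely."""
    w = unpack_ternary(packed, len(x))
    return float(x[w == 1].sum() - x[w == -1].sum())
```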

5. Extensions, Distillation, and Fault Tolerance

BitNet b1.58 has catalyzed multiple algorithmic and practical innovations:

  • Distillation (BitDistill): A pipeline that distills off-the-shelf full-precision LLMs (e.g., Qwen) to 1.58 bits for downstream tasks. BitDistill uses SubLN (sub-layer normalization), multi-head attention distillation, and continual pre-training on a general corpus, recovering full-precision accuracy while yielding up to $10\times$ memory reduction and $2.65\times$ faster CPU inference (Wu et al., 15 Oct 2025).
  • Direct Quantized Training (DQT): Elimination of "shadow" high-precision weights via stochastic rounding (a sketch of the rounding operator follows this list). While pure ternary DQT incurs a performance gap, 8-bit DQT comes within roughly 5% relative degradation of BitNet b1.58 accuracy with drastically reduced training memory, and shows minimal deployment accuracy loss under ternary casting (Zhao et al., 2024).
  • Fault Tolerance (ReTern): BitNet b1.58 deployed on TCiM (ternary compute-in-memory) hardware is vulnerable to stuck-at faults. The ReTern protocol combines zero-fix (bit-cell redundancy to encode zeros in alternate states) and fault-aware sign transformations (column-wise sign flips to maximize masking of faults), achieving a 35% reduction in fault-induced perplexity increases at <3% energy and <1% area cost (Malhotra et al., 1 Jun 2025).
  • Neuromorphic Applications: The Word2Spike framework uses BitNet b1.58 to quantize word embeddings for spiking neuromorphic hardware, preserving 97% semantic similarity and enabling fully spike-based associative memory models (Kalra et al., 9 Sep 2025).
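
Stochastic rounding, the core primitive in the DQT bullet above, can be sketched as follows; this is a generic illustration of the operator, not the DQT training recipe itself:

```python
import torch

def stochastic_round(x: torch.Tensor) -> torch.Tensor:
    """Round each entry up or down at random with probability given by its
    fractional part, so that E[stochastic_round(x)] == x. A generic operator,
    shown only to illustrate the idea behind shadow-weight-free training."""
    floor = torch.floor(x)
    frac = x - floor
    return floor + (torch.rand_like(x) < frac).to(x.dtype)
```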

6. Limitations, Trade-offs, and Future Work

  1. Expressivity: Although BitNet b1.58 supports most LLM and transformer workloads at scale, very small and capacity-limited networks may experience reduced performance unless width or network size is increased (Nielsen et al., 2024, Nielsen et al., 2024).
  2. Training Overhead: QAT requires computing layerwise scaling per batch and retains shadow weights; DQT approaches ameliorate this but may underperform in pure ternary (Zhao et al., 2024).
  3. Regularization: Ternary quantization zeroes small weights, acting as an implicit regularizer, which can both stabilize and slow training convergence (Nielsen et al., 2024).
  4. Layerwise Sensitivity: For small vision models and SLMs, the choice of median vs. mean scaling and the tuning of learning rate and weight decay are critical for stability (Nielsen et al., 2024).
  5. Hardware Realization: Achieving theoretical compression ratios and speedup hinges on innovations in storage (sub-2 bit packing) and kernel design (SIMD alignment, LUT construction) (Wang et al., 17 Feb 2025).
  6. Rate-Optimal Lower Bounds: One-bit algorithm-unrolling schemes can, by exploiting structural sparsity, reduce the required bits per link below the 1.58-bit limit, suggesting that further compression is possible in structured architectures, but not yet for arbitrary LLMs (2502.01908).

Open directions include mixed/blockwise/fractional precision regimes, real-time fault adaptation, joint hardware–algorithm co-design, and deeper integration with attention mechanisms and retrieval-augmented models (Ma et al., 2024, Wu et al., 15 Oct 2025, Malhotra et al., 1 Jun 2025).

7. Summary Table: Key Quantization and Deployment Strategies

| Variant | Memory / Weight | Training Regime | Inference Kernel | Notable Results | Ref |
|---|---|---|---|---|---|
| BitNet b1.58 | ~1.58 bits | QAT + STE | custom, bitnet.cpp | SOTA LLMs/Transformers, 3–4× efficiency | (Ma et al., 2024, Ma et al., 16 Apr 2025) |
| BitNet 16→1.58 | ~1.58 bits | 16-bit pretrain + QAT | same | Nearly matches full precision, >90% efficiency | (Nielsen et al., 17 Feb 2025) |
| BitDistill | ~1.58 bits | Distill + SubLN | bitnet.cpp | <0.2 pt loss, 10× memory saving | (Wu et al., 15 Oct 2025) |
| BitNet DQT (8-bit) | 8 bits | No shadow weights; stochastic rounding | int8 kernels | 5% loss vs. BitNet b1.58, 4× memory saving | (Zhao et al., 2024) |
| ReTern on TCiM | ~1.58 bits | QAT + offline mask | n/a (hardware) | 35% lower fault-induced PPL rise, <3% power | (Malhotra et al., 1 Jun 2025) |

BitNet b1.58 thus establishes the foundational methodology for high-accuracy, resource-efficient, and hardware-amenable ternary LLMs and neural networks, setting a quantization and systems paradigm that is rapidly shaping both algorithmic and hardware landscapes.
