
Sparse ReGLU-Based FFNs

Updated 18 March 2026
  • The paper introduces a novel sparse ReGLU-FFN layer that combines ReLU gating with dynamic neuron selection to cut compute by up to 7× while retaining competitive accuracy.
  • It employs an adaptive thresholding mechanism (CETT) to activate only significant neurons, achieving around 88% sparsity with less than 1% accuracy drop.
  • Hardware-aware optimizations such as sliding-window caching and block-sparse computation significantly enhance memory efficiency and inference speed.

Sparse ReGLU-based Feed-Forward Networks (FFNs) designate a class of FFN layers for large-scale neural LLMs in which the ReLU-Gated Linear Unit (ReGLU) activation is combined with explicit dynamic sparsity for computational and memory efficiency. This approach leverages the high activation sparsity inherent in ReGLU—where many neuron outputs are zero or near-zero—and introduces runtime selection of active neurons per token via thresholding mechanisms. As such, sparse ReGLU-FFNs often feature an order-of-magnitude reduction in compute and memory requirements while maintaining accuracy competitive with dense baselines, particularly in the context of LLMs (Zhang et al., 2024).

1. ReGLU Activation: Structure and Properties

In the ReGLU activation, the standard gated linear unit (GLU) paradigm is instantiated using the rectified linear unit (ReLU) as the gating nonlinearity. Given an input $x \in \mathbb{R}^{d_\mathrm{model}}$ and learned weights $W_1, W_2 \in \mathbb{R}^{d_\mathrm{ff} \times d_\mathrm{model}}$, the ReGLU FFN computes, for each hidden neuron $i$:

  • Gating: $a_i(x) = \mathrm{ReLU}(w_{2,i}^T x) = \max(0, w_{2,i}^T x)$
  • Value: $n_i(x) = a_i(x) \cdot (w_{1,i}^T x)$

The FFN output is then:

$\mathrm{FFN_{ReGLU}}(x) = W_\mathrm{out}\,[n_1(x);\, n_2(x);\, \dots;\, n_{d_\mathrm{ff}}(x)] \in \mathbb{R}^{d_\mathrm{model}}$

Key attributes:

  • The piecewise-linear gate outputs exact zeros over large regions of input space, enabling efficient sparsity.
  • The multiplicative gating produces an “information highway” effect, allowing expressive adaptation per-token.
  • Empirically delivers pretraining loss comparable to SwiGLU and ReLU², with higher intrinsic activation sparsity (Zhang et al., 2024).
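The per-neuron computation above can be sketched directly in NumPy. This is a minimal dense reference implementation; the function name and the toy dimensions are illustrative, not from the paper:

```python
import numpy as np

def reglu_ffn(x, W1, W2, W_out):
    """Dense ReGLU FFN: ReLU-gate W2·x, multiply by the value projection W1·x,
    then project back to the model dimension."""
    g = W2 @ x                   # gating pre-activations, one per hidden neuron
    u = W1 @ x                   # value projections
    n = np.maximum(g, 0.0) * u   # n_i(x) = ReLU(w_{2,i}^T x) · (w_{1,i}^T x)
    return W_out @ n             # output in R^{d_model}

# Toy sizes for illustration.
rng = np.random.default_rng(0)
d_model, d_ff = 8, 32
x = rng.standard_normal(d_model)
W1 = rng.standard_normal((d_ff, d_model))
W2 = rng.standard_normal((d_ff, d_model))
W_out = rng.standard_normal((d_model, d_ff))
y = reglu_ffn(x, W1, W2, W_out)
```

Even with random weights, roughly half of the gates `g` land below zero, so a large fraction of the `n_i(x)` are exactly zero — the intrinsic sparsity the rest of the article exploits.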

2. Sparse-Activation Framework in ReGLU FFNs

Sparse ReGLU-FFN models depart from classical sparsity notions predicated solely on ReLU outputs being exactly zero. Instead, neuron "inactivity" is determined by the output magnitude $m_i(x) = \lVert n_i(x) \rVert_2$ relative to an adaptively chosen threshold $\tau_\ell$ for layer $\ell$. A neuron is considered skipped for a given token if $m_i(x) < \tau_\ell$.

To select $\tau_\ell$, the cumulative error of tail truncation (CETT) criterion is used. For a given threshold $\tau$,

$\mathrm{CETT}(\tau) = \dfrac{\lVert \mathrm{FFN}(x) - \mathrm{FFN_{trunc}}(x; \tau) \rVert_2}{\lVert \mathrm{FFN}(x) \rVert_2}$

where $\mathrm{FFN_{trunc}}$ omits all neurons $i$ with $m_i(x) < \tau$. Empirical results show that CETT $\lesssim 0.2$ induces minimal (circa 1%) accuracy loss at up to $\sim 88\%$ average sparsity for ReGLU (Zhang et al., 2024).
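A minimal sketch of computing CETT and calibrating a threshold against a target follows. The `pick_threshold` helper and its linear scan over candidate magnitudes are illustrative conveniences, not the paper's calibration procedure:

```python
import numpy as np

def cett(x, W1, W2, W_out, tau):
    """CETT(tau): relative output error from dropping neurons with |n_i(x)| < tau."""
    n = np.maximum(W2 @ x, 0.0) * (W1 @ x)           # per-neuron outputs n_i(x)
    full = W_out @ n
    trunc = W_out @ np.where(np.abs(n) >= tau, n, 0.0)
    return np.linalg.norm(full - trunc) / np.linalg.norm(full)

def pick_threshold(x, W1, W2, W_out, target=0.2):
    """Largest candidate tau whose CETT stays at or below the target.
    CETT is nondecreasing in tau, so a scan over sorted neuron output
    magnitudes suffices (a binary search would also work)."""
    mags = np.sort(np.abs(np.maximum(W2 @ x, 0.0) * (W1 @ x)))
    best = 0.0
    for tau in mags:
        if cett(x, W1, W2, W_out, tau) <= target:
            best = tau
        else:
            break
    return best

# Toy calibration example.
rng = np.random.default_rng(1)
x = rng.standard_normal(8)
W1 = rng.standard_normal((32, 8))
W2 = rng.standard_normal((32, 8))
W_out = rng.standard_normal((8, 32))
tau = pick_threshold(x, W1, W2, W_out)
```

In practice the threshold would be calibrated per layer over a batch of tokens rather than a single input, then frozen for inference.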

3. Implementation: Dynamic Sparse Inference Pipeline

Sparse ReGLU-FFN inference proceeds as follows:

  1. Gating computation: $g = W_2^\ell x$
  2. Mask determination: $\text{mask} = (g > 0) \wedge (|g| \geq \tau_\ell)$, yielding the active neuron indices.
  3. Block gather: the rows/columns corresponding to active indices are retrieved from $W_1$, $W_2$, and $W_\mathrm{out}$.
  4. Value projection: $u_\text{sub} = W_1^\ell[\text{active}, :]\, x$
  5. Gated outputs: $n_\text{sub} = \mathrm{ReLU}(g_\text{sub}) \odot u_\text{sub}$
  6. Output accumulation: dense or blockwise matrix multiplication with the $W_\mathrm{out}$ columns for active neurons.

Batch processing and windowed reuse of active indices across tokens further improve cache efficiency and hardware throughput. Block sizes are aligned to the accelerator’s architectural granularity, e.g., warp size (Zhang et al., 2024).
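The six pipeline steps can be sketched in NumPy as a single-token version. Note the mask here thresholds the gate magnitude, following step 2, rather than the full output magnitude $m_i(x)$ of section 2; the function name and shapes are illustrative:

```python
import numpy as np

def sparse_reglu_ffn(x, W1, W2, W_out, tau):
    """Single-token sparse ReGLU FFN following pipeline steps 1-6."""
    g = W2 @ x                                             # 1. gating computation
    active = np.flatnonzero((g > 0) & (np.abs(g) >= tau))  # 2. mask determination
    if active.size == 0:
        return np.zeros(W_out.shape[0])
    u_sub = W1[active, :] @ x                              # 3-4. gather + value projection
    n_sub = np.maximum(g[active], 0.0) * u_sub             # 5. gated outputs
    return W_out[:, active] @ n_sub                        # 6. output accumulation

# Sanity check against the dense computation.
rng = np.random.default_rng(2)
x = rng.standard_normal(8)
W1 = rng.standard_normal((32, 8))
W2 = rng.standard_normal((32, 8))
W_out = rng.standard_normal((8, 32))
dense = W_out @ (np.maximum(W2 @ x, 0.0) * (W1 @ x))
exact = sparse_reglu_ffn(x, W1, W2, W_out, tau=0.0)  # tau = 0 recovers the dense output
```

With $\tau = 0$ the gather keeps exactly the neurons with positive gates, so the result matches the dense FFN bit-for-bit; raising $\tau$ trades a controlled approximation error for fewer gathered rows.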

4. Sparsity–Accuracy Trade-offs and Empirical Metrics

Empirical investigation across several activation functions shows that, at CETT $= 0.2$:

  • ReGLU achieves $\sim 88\%$ sparsity with only a 0.9% performance drop, exceeding the sparsity of SwiGLU (75%, 0.8% drop) and ReLU (82%, 0.7% drop).
  • FLOP reduction is directly proportional to the sparsity level. For ReGLU, up to $7\times$ reduction in hidden-layer computation is observed, and a 90% reduction in I/O (weight movement) can be achieved by combining reuse and block co-activation locality strategies.
  • End-to-end single-token inference speedup reaches $5$–$6\times$ on modern accelerators at batch size 1 (Zhang et al., 2024).

A representative summary table of activation characteristics is as follows:

Activation   Dense Perf.   Sparsity at CETT=0.2   End-to-End Accuracy Drop
SwiGLU       100%          75%                    0.8%
ReLU         99.3%         82%                    0.7%
ReGLU        99.1%         88%                    0.9%
ReLU²        99.4%         92%                    0.6%

All listed metrics are for 1.3B-parameter models (Zhang et al., 2024).

5. Hardware Optimization and Memory Considerations

Sparse ReGLU-FFNs benefit from both algorithmic and hardware-aware optimizations:

  • Parameter reuse: sliding-window caching across tokens achieves a reuse ratio of 0.38 in ReGLU layers, higher than SwiGLU's 0.25. For window size $K = 5$, reuse is more than double that at $K = 1$.
  • Co-activation block layout: arranging neurons with high joint-activation probability in contiguous storage lets blockwise loads minimize random-access overhead, yielding up to a $92\%$ reduction in weight movement.
  • Block size alignment: the optimal neuron-block size matches the accelerator's thread-unit granularity (e.g., warp size), maximizing tensor-core utilization (Zhang et al., 2024).

The index-gather and block-sparse computations are tuned so that their overhead remains less than the compute saved.
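The sliding-window reuse idea can be made concrete with a small metric: for each token, the fraction of its active neurons whose weights were already loaded for the previous $K$ tokens. This is an illustrative formulation of the reuse ratio, not necessarily the paper's exact definition:

```python
import numpy as np

def window_reuse_ratio(active_sets, K):
    """Average fraction of each token's active neurons already present in the
    union of the previous K tokens' active sets."""
    ratios = []
    for t in range(1, len(active_sets)):
        window = set().union(*active_sets[max(0, t - K):t])  # weights already cached
        cur = active_sets[t]
        if cur:
            ratios.append(len(cur & window) / len(cur))
    return float(np.mean(ratios)) if ratios else 0.0

# Synthetic per-token active index sets over 64 hidden neurons.
rng = np.random.default_rng(3)
active_sets = [set(rng.choice(64, size=12, replace=False).tolist()) for _ in range(20)]
r1 = window_reuse_ratio(active_sets, K=1)
r5 = window_reuse_ratio(active_sets, K=5)
```

Because a wider window's cached set is a superset of a narrower one's, the reuse ratio is nondecreasing in $K$, which is the effect the text describes for $K = 5$ versus $K = 1$.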

6. Production Guidelines and Best Practices

Deploying sparse ReGLU-FFNs involves the following practices:

  1. Per-layer threshold selection using CETT capped at about 0.2.
  2. Predictor-based pruning: a small MLP predicts likely-inactive neurons, improving efficiency by a further $50$–$60\%$ while maintaining $>85\%$ recall. This reduces unnecessary compute at inference with negligible error increase.
  3. Windowed index reuse and block-locality enforcement to further minimize memory traffic and improve hardware throughput.
  4. Avoid overtuning: lowering CETT well below 0.1 yields diminishing efficiency returns, while overly aggressive thresholds sharply reduce accuracy.
  5. Verify recall: ensure the predictor's false-negative rate stays below $5\%$ to keep the accuracy penalty minimal ($<0.5\%$) (Zhang et al., 2024).
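Guideline 5 reduces to a simple mask comparison between the true active set and the predictor's output. A minimal sketch of that check, with synthetic masks in place of a real predictor:

```python
import numpy as np

def predictor_false_negative_rate(true_active, predicted_active):
    """Fraction of truly active neurons the predictor wrongly marks inactive.
    Recall is 1 minus this rate; the guideline asks for a rate below 5%."""
    true_active = np.asarray(true_active, dtype=bool)
    predicted_active = np.asarray(predicted_active, dtype=bool)
    misses = np.logical_and(true_active, ~predicted_active).sum()
    total = true_active.sum()
    return misses / total if total else 0.0

# Synthetic masks: 100 neurons, 20 truly active, predictor misses one of them.
true_mask = np.zeros(100, dtype=bool)
true_mask[:20] = True
pred_mask = true_mask.copy()
pred_mask[0] = False  # one false negative
fnr = predictor_false_negative_rate(true_mask, pred_mask)
```

Here the rate is 1/20 = 5%, exactly at the guideline's boundary; in deployment this statistic would be aggregated over a held-out token set per layer.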

7. Comparative Context and Extensions

ReGLU-based sparse FFNs represent a point along the spectrum of activation sparsity, with SwiGLU-based MoC FFNs (Wu et al., 12 Nov 2025) serving as a notable alternative. While MoC leverages the intrinsic sparsity of SwiGLU via top-$K$ gating per token and achieves a $3\times$–$4\times$ reduction in FFN activation memory and a $1.3\times$–$1.5\times$ inference speedup, ReGLU-based methods yield even higher effective sparsity (up to $88\%$) but require threshold-based pruning and prediction infrastructure. Both approaches are compatible with standard optimizers (AdamW with cosine decay), mixed precision, and gradient checkpointing, making them amenable to production-scale LLM pretraining and inference (Wu et al., 12 Nov 2025; Zhang et al., 2024).
