
SignReLU Activation Function

Updated 5 February 2026
  • SignReLU is a neural activation function defined with a linear region for positives and a saturating rational branch for negatives, enabling efficient ratio approximations.
  • Its design preserves monotonicity and stability, providing smooth transitions and bounded negative outputs to improve training and robustness.
  • Empirical results show that SignReLU outperforms traditional activations in tasks like noisy regression, image classification, and generative modeling.

The SignReLU activation function is a rational-type neural activation that combines a linear identity mapping for positive inputs with a saturating rational branch for negative inputs. Formally, for a scalar parameter $\alpha > 0$, the function is defined as

$$\mathrm{SignReLU}(x;\alpha) = \begin{cases} x, & x > 0 \\ \alpha\,x/(1+|x|), & x \le 0 \end{cases}$$

This piecewise structure enables both efficient representation of univariate rational functions and stable handling of ratio-type targets, making the activation especially well-suited for networks tasked with approximations involving division, such as those encountered in conditional generative modeling and diffusion-based generative models (Sun et al., 29 Jan 2026, Li et al., 2022).
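As a concrete reference for the definition above, here is a minimal NumPy sketch (the function name and vectorized style are illustrative, not from the cited papers):

```python
import numpy as np

def signrelu(x, alpha=1.0):
    """SignReLU: identity for x > 0, saturating rational alpha*x/(1+|x|) for x <= 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * x / (1.0 + np.abs(x)))

print(signrelu(2.0))     # positive inputs pass through: 2.0
print(signrelu(-1.0))    # alpha = 1: -1/(1+1) = -0.5
print(signrelu(-1e6))    # large negatives saturate toward -alpha
```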

1. Mathematical Properties and Structure

SignReLU exhibits a two-region structure:

  • For $x>0$, the function is linear: $\sigma(x)=x$.
  • For $x\le 0$, the function is rational and saturating: $\sigma(x)=\alpha\,x/(1+|x|)\in(-\alpha,0]$, which flattens toward $-\alpha$ as $x\to-\infty$.

At $x=0$, both one-sided limits equal $0$, providing global continuity. The derivative is

$$\sigma'(x) = \begin{cases} 1, & x > 0 \\ \alpha/(1+|x|)^2, & x < 0 \end{cases}$$

At $x=0$, $\sigma'_+(0)=1$ and $\sigma'_-(0)=\alpha$; thus, for $\alpha=1$, SignReLU is $C^1$ everywhere, although not analytic at the origin.

The negative branch, being a simple rational function, allows for efficient implementation of approximate division or product gates while avoiding the piecewise-linear complexity of standard ReLU chained approximations to similar operations (Li et al., 2022).

Monotonicity is preserved ($\sigma'(x)>0$ for all $x$), and negative values are squashed into a bounded interval: as $x\to-\infty$, $\sigma(x)\to-\alpha$. This property suppresses large negative activations, a feature that may contribute to training stability and robustness.
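The derivative formula and the $C^1$ join at the origin can be checked numerically with central finite differences; a hedged sketch (helper names are illustrative):

```python
import numpy as np

def signrelu(x, alpha=1.0):
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * x / (1.0 + np.abs(x)))

def signrelu_grad(x, alpha=1.0):
    """Closed-form derivative: 1 for x > 0, alpha/(1+|x|)^2 for x <= 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, 1.0, alpha / (1.0 + np.abs(x)) ** 2)

# Central finite differences agree with the closed form away from the origin,
# and the gradient is strictly positive (monotonicity).
h = 1e-6
for x0 in (-3.0, -0.5, 0.7, 2.0):
    fd = (signrelu(x0 + h) - signrelu(x0 - h)) / (2.0 * h)
    assert abs(float(fd) - float(signrelu_grad(x0))) < 1e-4
    assert float(signrelu_grad(x0)) > 0.0

# For alpha = 1 the one-sided slopes both approach 1 at x = 0 (C^1 join).
assert abs(float(signrelu_grad(-1e-9)) - 1.0) < 1e-6
```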

2. Expressivity and Approximation Capabilities

SignReLU networks are demonstrated to possess superior approximation capabilities relative to both classic ReLU and general rational-activation networks (Li et al., 2022). Several key constructive results underpin this claim:

  • Exact Division and Product Gates: A depth-$6$, width-$9$ SignReLU subnetwork can compute $(x,y)\mapsto y/x$ exactly for $x$ on any compact interval $[c,C]$, and a similar construction computes $(x,y)\mapsto xy$ with smaller depth and width (Li et al., 2022).
  • Universal Approximation for Ratios: For $f_1, f_2$ drawn from integral-kernel smooth classes with $f_2$ bounded away from zero, there exists a SignReLU network of depth $7$ and width $O(n+9)$ achieving

$$\left\|\tfrac{f_1}{f_2}-\phi\right\|_{L^p([-1,1]^d)} \precsim M^{-1}\,n^{-(1/2 + 3/(2d))}$$

for network parameter norm bound $M$ and tunable $n$, matching optimal linear rates in $n$ (Sun et al., 29 Jan 2026).

Further approximation results include optimal rates (with no logarithmic penalty) in uniform and $L^p$ norms for Sobolev and Korobov function classes, and efficient approximation of rank-one tensor models and piecewise-smooth functions (Li et al., 2022). These efficiencies stem from direct implementation of rational nonlinearities, in contrast to the $\Omega(\log(\epsilon^{-1}))$ growth in network size that ReLU requires even to approximate quadratic or division functions.
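The rational branch's affinity for division can be illustrated with a one-neuron identity (a toy observation, not the depth-$6$ gate constructed in (Li et al., 2022)): for $u \ge 0$, $\sigma(-u;\alpha) = -\alpha u/(1+u)$, hence $1 + \sigma(-u;\alpha)/\alpha = 1/(1+u)$, so a single SignReLU neuron plus affine maps evaluates this reciprocal exactly:

```python
import numpy as np

def signrelu(x, alpha=1.0):
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * x / (1.0 + np.abs(x)))

def reciprocal_1p(u, alpha=1.0):
    """1/(1+u) for u >= 0 from one SignReLU neuron plus affine maps:
    sigma(-u; alpha) = -alpha*u/(1+u), so 1 + sigma(-u)/alpha = 1/(1+u)."""
    u = np.asarray(u, dtype=float)
    return 1.0 + signrelu(-u, alpha) / alpha

u = np.array([0.0, 1.0, 3.0, 99.0])
assert np.allclose(reciprocal_1p(u), 1.0 / (1.0 + u))
```

By contrast, a ReLU network can only approximate such a rational map piecewise-linearly, with size growing as the target accuracy tightens.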

3. Implementation in Neural Architectures

SignReLU can be integrated into standard feedforward, fully connected, and ResNet-type architectures without modification to layer sizes, parameter initializations, or optimization methodologies. Replacing ReLU units with SignReLU incurs only minor computational overhead, as the negative branch requires only a division and a multiplication per neuron (compared with the exponential in ELU or the logarithm in Softplus) (Li et al., 2022). In practical frameworks, this cost is comparable to other smooth nonlinear activations.
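A hedged sketch of this drop-in usage in a small fully connected forward pass (layer sizes, initialization scale, and names are arbitrary choices for illustration):

```python
import numpy as np

def signrelu(x, alpha=1.0):
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * x / (1.0 + np.abs(x)))

rng = np.random.default_rng(0)

# Two-layer MLP: only the activation changes; sizes and inits stay as-is.
W1, b1 = rng.standard_normal((16, 8)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((1, 16)) * 0.1, np.zeros(1)

def forward(x, act=signrelu):
    h = act(W1 @ x + b1)          # hidden layer with the chosen activation
    return W2 @ h + b2            # linear output layer

x = rng.standard_normal(8)
y_sign = forward(x)                                      # SignReLU network
y_relu = forward(x, act=lambda z: np.maximum(z, 0.0))    # ReLU baseline
print(y_sign.shape, y_relu.shape)
```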

In architectures requiring rational or ratio-type functional representations, such as models for denoising diffusion probabilistic models (DDPMs), SignReLU's rational regime is leveraged prominently. For example, the optimal reverse-transition mean at time $t$ takes the functional form of a conditional kernel ratio, for which a stack of one linear-approximation layer followed by a division-gate subnetwork (depth $7$, width $O(n+9d)$) can be constructed (Sun et al., 29 Jan 2026).

Parameter norm regularization is sometimes employed to control the rational branch’s tail behavior, mitigating blowups in low-density regions.

4. Empirical Performance and Numerical Experiments

Experiments show that SignReLU achieves competitive or superior performance relative to ReLU, Leaky ReLU, and ELU across several settings (Li et al., 2022):

  • Noisy regression (high-dimensional, $d = 50$, $100$, $1000$): SignReLU and ELU yield lower mean-squared error and variance than ReLU or Leaky ReLU.
  • Image classification (MNIST, CIFAR-10): Test accuracy for SignReLU is the highest among the four compared activations (MNIST: $97.92\%$, CIFAR-10: $76.57\%$).
  • Spherical image denoising: Using a U-Net style convolutional framelet network, SignReLU matches or slightly exceeds ReLU and ELU in terms of PSNR at moderate noise levels.

The practical implication is that SignReLU's expressivity translates into improved accuracy, noise robustness, and training stability in scenarios where the model must recover or approximate non-linear rational structure.

5. Comparison to Other Activation Functions

The following table summarizes key analytic and implementation properties observed across several activations (Li et al., 2022).

| Activation | Negative branch | Bounded tail | Gradient ($x<0$) |
|---|---|---|---|
| ReLU | $0$ | Yes | $0$ |
| Leaky ReLU | $\lambda x$ | No | $\lambda$ |
| ELU | $\alpha(e^x-1)$ | Yes | $\alpha e^x$ |
| SignReLU | $\alpha\,x/(1+\lvert x\rvert)$ | Yes | $\alpha/(1+\lvert x\rvert)^2$ |

Unlike ReLU, which is non-differentiable at zero and maps all negatives to zero, or Leaky ReLU, which is unbounded below, SignReLU maintains strict monotonicity, $C^1$ smoothness (for $\alpha=1$), and negative outputs saturated in $(-\alpha,0)$. Compared with ELU, SignReLU uses only elementary rational operations, making it computationally economical in modern hardware and software environments.
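The tail behaviors in the table can be checked directly; a small comparison sketch (with $\lambda = 0.01$ and $\alpha = 1$ as illustrative parameter choices):

```python
import numpy as np

x = np.array([-10.0, -1.0, 0.0, 1.0])

relu  = np.maximum(x, 0.0)
leaky = np.where(x > 0, x, 0.01 * x)                       # lambda = 0.01
elu   = np.where(x > 0, x, np.exp(np.minimum(x, 0.0)) - 1.0)  # alpha = 1
signr = np.where(x > 0, x, x / (1.0 + np.abs(x)))             # alpha = 1

# At x = -10: ReLU gives 0, Leaky ReLU keeps growing (here -0.1),
# while ELU and SignReLU both saturate near -1.
for name, vals in [("relu", relu), ("leaky", leaky), ("elu", elu), ("signrelu", signr)]:
    print(name, vals)
```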

6. Theoretical Limitations and Open Questions

All optimal approximation guarantees for SignReLU are derived in the “continuum” setting and do not assert convergence of practical training algorithms, such as SGD, to the optimal regime. Whether empirical risk minimization or SGD can efficiently recover the function classes constructed in the mathematical existence proofs remains an open theoretical question (Li et al., 2022).

SignReLU is $C^1$ everywhere but not analytic at $x=0$; potential consequences for learning or approximation of highly smooth functions, and its applicability to PDE solvers, warrant further investigation. Hybrid architectures blending SignReLU with other activations may provide routes to even greater expressivity and optimization benefits.

7. Applications in Diffusion Models and Structured Generative Learning

In generative modeling, notably DDPMs, the reverse process requires estimating a conditional expectation that is naturally a ratio of integrals—precisely the setting where SignReLU excels. The model

$$f_\rho(z_t) = \frac{\int x\,q_0(x)\,q(z_t\mid x)\,dx}{\int q_0(x)\,q(z_t\mid x)\,dx}$$

is efficiently approximated by a depth-$7$, width-$O(n+9d)$ SignReLU network, in which a first layer estimates the required kernel integrals and the division gate is implemented via a fixed (depth-$6$, width-$9$) subnetwork (Sun et al., 29 Jan 2026). The resulting network attains near-optimal $L^p$ approximation rates and enables a decomposition of excess KL risk into explicit estimation and approximation error components.
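The ratio-of-integrals target can be made concrete with a toy numerical check under an assumed setup (my choice, not from the cited paper): prior $q_0 = N(0,1)$ and Gaussian kernel $q(z\mid x) = N(z; x, s^2)$, for which the ratio has the closed form $z/(1+s^2)$:

```python
import numpy as np

# Assumed toy setup: q0 = N(0,1), q(z|x) = N(z; x, s2).
# The exact ratio (posterior mean) is then z / (1 + s2).
s2 = 0.5
xs = np.linspace(-10.0, 10.0, 20001)
dx = xs[1] - xs[0]
q0 = np.exp(-xs**2 / 2.0) / np.sqrt(2.0 * np.pi)

def f_ratio(z):
    kern = np.exp(-(z - xs) ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    num = np.sum(xs * q0 * kern) * dx   # numerator integral (Riemann sum)
    den = np.sum(q0 * kern) * dx        # denominator integral
    return num / den

for z in (-1.0, 0.3, 2.0):
    assert abs(f_ratio(z) - z / (1.0 + s2)) < 1e-4
```

This is the scalar map a SignReLU division-gate subnetwork is built to evaluate: one stage approximates the two integrals, the other performs the division.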

Regularization of parameter norms and architectural constraints are employed to preserve stability in tail regions where denominator functions are small but bounded away from zero. Standard backpropagation with Adam optimization is used, and the architectures do not require modification relative to those designed for ReLU, other than the choice of activation.

These results indicate that the two-piece structure of SignReLU is particularly effective for deep learning tasks involving ratio-type functional targets, enabling both empirical and theoretical advances in sample and computational efficiency.
