
Tangma: A Tanh-Guided Activation Function with Learnable Parameters

Published 2 Jul 2025 in cs.NE, cs.LG, and cs.CV | arXiv:2507.10560v1

Abstract: Activation functions are key to effective backpropagation and expressiveness in deep neural networks. This work introduces Tangma, a new activation function that combines the smooth shape of the hyperbolic tangent with two learnable parameters: $α$, which shifts the curve's inflection point to adjust neuron activation, and $γ$, which adds linearity to preserve weak gradients and improve training stability. Tangma was evaluated on MNIST and CIFAR-10 using custom networks composed of convolutional and linear layers, and compared against ReLU, Swish, and GELU. On MNIST, Tangma achieved the highest validation accuracy of 99.09% and the lowest validation loss, demonstrating faster and more stable convergence than the baselines. On CIFAR-10, Tangma reached a top validation accuracy of 78.15%, outperforming all other activation functions while maintaining a competitive training loss. Tangma also showed improved training efficiency, with lower average epoch runtimes compared to Swish and GELU. These results suggest that Tangma performs well on standard vision tasks and enables reliable, efficient training. Its learnable design gives more control over activation behavior, which may benefit larger models in tasks such as image recognition or language modeling.

Summary

  • The paper's primary contribution is Tangma, a learnable, tanh-guided activation function that leverages parameters α and γ to modulate nonlinearity and preserve gradients.
  • It demonstrates superior accuracy and convergence on MNIST and CIFAR-10, achieving 99.09% and 78.15% validation accuracies respectively, outperforming ReLU, Swish, and GELU.
  • The method offers architectural flexibility with minimal computational overhead, suggesting broad applicability to complex vision tasks and advanced deep network designs.

Tangma: A Tanh-Guided Activation with Learnable Parameters

Mathematical Characterization of the Tangma Activation

Tangma is formulated as $\operatorname{Tangma}(x) = x \cdot \tanh(x + \alpha) + \gamma x$, where $\alpha$ and $\gamma$ are learnable parameters integrated into every neuron. $\alpha$ acts as a horizontal shift for the nonlinear regime via an inflection-point adjustment, while $\gamma$ incorporates a direct linear skip connection, structurally reminiscent of parameterized activation approaches in the literature (arXiv:2507.10560). This dual-parameter structure not only ensures smooth, non-saturating gradients but also provides architectural flexibility to adapt to varying input statistics across tasks and layers.
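As a concrete reference, the formula can be sketched in a few lines of Python; the default $\alpha$ and $\gamma$ values here are illustrative assumptions, not values from the paper:

```python
import math

def tangma(x: float, alpha: float = 0.1, gamma: float = 0.5) -> float:
    """Tangma(x) = x * tanh(x + alpha) + gamma * x.

    alpha shifts the nonlinear transition horizontally; gamma adds a
    linear path that keeps weak gradients alive. Defaults are
    illustrative only.
    """
    return x * math.tanh(x + alpha) + gamma * x

# With alpha = gamma = 0 the function reduces to x * tanh(x)
print(tangma(2.0, alpha=0.0, gamma=0.0))  # equals 2 * tanh(2)
```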

The derivative is $\tanh(x+\alpha) + x\,\operatorname{sech}^2(x+\alpha) + \gamma$, which remains nonzero across the domain as long as $\gamma \neq 0$. For small $x$, the output is approximated by $x(\tanh(\alpha) + \gamma)$, giving near-linear behavior at the origin. In the asymptotic regimes ($x \to \pm\infty$), the response approaches the linear asymptotes $(\gamma \pm 1)x$ as $\tanh$ saturates, guaranteeing persistent gradients and mitigating vanishing/exploding-gradient pathologies.

Figure 1: Tangma exhibits smooth activation with continuous derivatives, stabilizing gradient flow across the input domain.
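The stated derivative and asymptotic slopes can be checked numerically with central finite differences; this is a standalone sketch with arbitrarily chosen values $\alpha = 0.3$, $\gamma = 0.2$ (not values from the paper):

```python
import math

ALPHA, GAMMA = 0.3, 0.2  # arbitrary illustrative values

def tangma(x):
    return x * math.tanh(x + ALPHA) + GAMMA * x

def tangma_grad(x):
    # Analytic derivative: tanh(x + a) + x * sech^2(x + a) + g
    sech2 = 1.0 / math.cosh(x + ALPHA) ** 2
    return math.tanh(x + ALPHA) + x * sech2 + GAMMA

# Central finite differences agree with the analytic form
h = 1e-6
for x in (-3.0, -0.5, 0.0, 1.7):
    fd = (tangma(x + h) - tangma(x - h)) / (2 * h)
    assert abs(fd - tangma_grad(x)) < 1e-5

# Far from the origin the slope approaches (gamma +/- 1)
assert abs(tangma_grad(40.0) - (GAMMA + 1)) < 1e-9
assert abs(tangma_grad(-40.0) - (GAMMA - 1)) < 1e-9
```

The two tail checks confirm the $(\gamma \pm 1)x$ asymptotes: with $\gamma \neq 0$ neither tail's slope collapses to zero.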

The effect of each parameter is visualized: $\gamma$ modulates output slope and gradient preservation, while $\alpha$ controls the center and threshold of the nonlinear transition.

Figure 2: Varying $\gamma$ shows monotonic slope changes, corresponding to gradient preservation in both activation tails.


Figure 3: Modulating $\alpha$ horizontally shifts the nonlinearity, relocating the transition region for neuron responsiveness.

Architectural Integration and Experimental Paradigm

Tangma was benchmarked against ReLU, Swish, and GELU activations in two canonical vision tasks: MNIST digit classification and CIFAR-10 object recognition. Both tasks utilize moderate-depth CNNs structured to isolate the variance attributable to the activation layer. In all models, $\alpha$ and $\gamma$ are instantiated per layer as learnable tensors and optimized via backpropagation alongside the model weights.
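A minimal PyTorch sketch of such a layer, with $\alpha$ and $\gamma$ registered as learnable scalars, is shown below; the initialization values are assumptions for illustration, as the paper's exact initialization is not reproduced here:

```python
import torch
import torch.nn as nn

class Tangma(nn.Module):
    """Tangma activation with per-layer learnable alpha and gamma.

    Both scalars are registered as nn.Parameter, so they are updated
    by backpropagation together with the network weights. Initial
    values below are illustrative assumptions.
    """
    def __init__(self, alpha_init: float = 0.0, gamma_init: float = 0.1):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha_init))
        self.gamma = nn.Parameter(torch.tensor(gamma_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.tanh(x + self.alpha) + self.gamma * x

# Drop-in replacement for ReLU/Swish/GELU inside a small conv block
block = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), Tangma(), nn.MaxPool2d(2))
```

Because `alpha` and `gamma` are ordinary parameters, any optimizer that receives `model.parameters()` will train them with no further changes.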

MNIST data (28$\times$28 grayscale) employs a two-stage convolutional-MLP pipeline with aggressive dropout regularization, while CIFAR-10 leverages deeper convolutional stacks and normalization due to increased image complexity.

Figure 4: The MNIST pipeline employs dual convolutional layers followed by dense projection and classification.


Figure 5: The CIFAR-10 architecture integrates three convolutional layers before dense categorization.

Empirical Analysis: MNIST and CIFAR-10

MNIST Results

Tangma achieves a final validation accuracy of 99.09%, outperforming ReLU (98.96%), Swish (98.91%), and GELU (98.94%). It consistently exhibits the lowest validation and training losses. Convergence is rapid and stable; Tangma achieves over 98.6% accuracy within three epochs, and both accuracy and loss curves demonstrate superior smoothness with minimal volatility throughout 10 epochs.

Figure 6: Tangma yields the lowest loss and highest accuracy curves on MNIST, coupled with competitive computational efficiency.

Computationally, Tangma (3.45s/epoch) is slightly slower than ReLU (2.82s) but on par with Swish and GELU; this trade-off yields favorable accuracy-to-runtime characteristics, especially for tasks where precision matters more than raw throughput.

CIFAR-10 Results

On CIFAR-10, Tangma attains the top validation accuracy (78.15%) after 10 epochs, outperforming Swish (77.59%), GELU (77.99%), and ReLU (77.42%). Training loss falls to 0.2270, matching or surpassing the alternatives. The validation loss plateaus slightly higher than ReLU's but remains within a competitive margin. Tangma is also more computationally efficient (8.97s/epoch) than Swish (11.2s) and GELU (11.3s), and marginally faster than ReLU (9.4s).

Figure 7: On CIFAR-10, Tangma achieves the top accuracy and fastest convergence with the lowest computational cost among nonlinear activation methods.

Parameter Dynamics

Tracking $\alpha$ and $\gamma$ through training reveals rapid increases during the initial epochs, followed by a plateau as convergence is reached. On MNIST, both parameters remain modest in value but rise steadily as more structured patterns are learned, stabilizing once the network has captured the digit structure. On CIFAR-10, $\alpha$ and $\gamma$ increase faster and to higher values, reflecting the dataset's greater heterogeneity and feature diversity.

Figure 8: On MNIST, $\gamma$ growth supports low-intensity signal preservation; $\alpha$ drifts upward for more robust saturation control.


Figure 9: On CIFAR-10, both parameters rise rapidly, reflecting greater need for adaptivity in complex visual environments.
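Parameter trajectories of this kind can be observed in miniature with a self-contained toy regression: log $\alpha$ and $\gamma$ after every optimizer step and inspect their drift. The model, data, and hyperparameters below are illustrative assumptions, not the paper's setup:

```python
import torch
import torch.nn as nn

class Tangma(nn.Module):
    """Minimal Tangma layer with learnable alpha/gamma (illustrative inits)."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.0))
        self.gamma = nn.Parameter(torch.tensor(0.1))
    def forward(self, x):
        return x * torch.tanh(x + self.alpha) + self.gamma * x

torch.manual_seed(0)
act = Tangma()
model = nn.Sequential(nn.Linear(4, 16), act, nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
x, y = torch.randn(128, 4), torch.randn(128, 1)

history = []  # (alpha, gamma) recorded after each step
for _ in range(30):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    history.append((act.alpha.item(), act.gamma.item()))
```

Plotting `history` over epochs on a real task would produce curves analogous to Figures 8 and 9.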

Theoretical and Practical Implications

The principal contribution of Tangma is the introduction of a tunable activation mechanism that directly integrates nonlinear inflection control ($\alpha$) and gradient-preserving linearity ($\gamma$). Theoretically, this resolves historical limitations of piecewise activations (dying ReLUs, vanishing gradients in GELU/Swish under saturation) while endowing the model with the capacity to adapt activation regimes to data-dependent statistics. Practically, it translates into better optimization landscapes, faster convergence, increased sensitivity to subtle features, and stable gradients, all of which are critical for scalable vision architectures.

Tangma's learnable design means it can modulate the balance between feature selectivity and generalization per-layer, supporting efficient transfer to deeper or more complex models, including residual and transformer-based networks. Its computational profile is likewise favorable, yielding accuracy improvements without substantial runtime overhead.

Conclusion

Tangma, as a tanh-guided, parameterized activation, establishes empirically robust gains over canonical nonlinearities on image recognition tasks. The inclusion of per-neuron or per-layer $\alpha$ and $\gamma$ ensures both flexibility and stability in gradient propagation, supporting efficient training and better generalization. Its demonstrated numerical and computational advantages advocate for further evaluation in deep and wide networks, large-scale vision, and sequence modeling problems. Given the generality of its formulation, future directions should include integration into architectures with layer-wise parameter sharing, transformer attention blocks, and differential learning rates for activation parameters. The evidence to date substantiates the relevance of Tangma as a strong, general-purpose activation function for advanced neural architectures.
