
Periodic Activation Functions Induce Stationarity (2110.13572v2)

Published 26 Oct 2021 in cs.LG and stat.ML

Abstract: Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. We seek to build models that `know what they do not know' by introducing inductive biases in the function space. We show that periodic activation functions in Bayesian neural networks establish a connection between the prior on the network weights and translation-invariant, stationary Gaussian process priors. Furthermore, we show that this link goes beyond sinusoidal (Fourier) activations by also covering triangular wave and periodic ReLU activation functions. In a series of experiments, we show that periodic activation functions obtain comparable performance for in-domain data and capture sensitivity to perturbed inputs in deep neural networks for out-of-domain detection.

Citations (19)

Summary

  • The paper proposes that using periodic activation functions induces stationarity in Bayesian Neural Networks (BNNs), linking them to desirable Gaussian Process priors for improved out-of-distribution robustness.
  • Theoretical contributions include showing that different periodic activation functions relate the network's spectral density to Matérn covariance structures, and that a Student-t prior on the weights maps to a Matérn covariance in function space.
  • Empirical results show that periodic activation functions maintain in-domain performance while markedly reducing overconfidence on out-of-distribution tasks such as rotated MNIST and OOD image classification, suggesting practical value for robust AI systems.

Analysis of Periodic Activation Functions Inducing Stationarity in Bayesian Neural Networks

This paper explores a novel approach to embedding inductive biases in Bayesian Neural Networks (BNNs) through periodic activation functions. The authors establish that such activations connect the prior on the network's weights to stationary Gaussian process (GP) priors, which are translation-invariant and revert to the mean outside the observed data. This trait is pivotal for robustness to out-of-distribution (OOD) samples, as it prevents the overconfident predictions common in non-stationary neural models.

Theoretical Contributions

  1. Inductive Bias and Stationarity: The authors show that periodic activation functions induce stationary behavior in BNNs, mimicking stationary GPs whose prior covariance depends only on the difference between inputs. Stationarity ensures that, away from the training data, the model's uncertainty reverts to the prior rather than collapsing, a critical factor in mitigating overconfidence on unobserved perturbations of the inputs.
  2. Spectral Density Correspondence: Through harmonic analysis, the paper demonstrates that a range of periodic activation functions tie the spectral density of the induced function-space prior to a Matérn covariance structure. The activations considered go beyond the traditional Fourier (sinusoidal) basis to include triangular wave and periodic ReLU functions.
  3. Prior Distribution Mapping: A noteworthy contribution is the derivation showing that a Student-t distribution on the network weights corresponds to a Matérn covariance in function space. This mapping admits a broader class of kernel approximations within BNNs, extending known GP connections.
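The weight-prior-to-kernel link in contributions 2 and 3 can be illustrated with the classical random-feature view: a single hidden layer with a cosine activation, whose input frequencies are drawn from a kernel's spectral density, approximates that kernel. The sketch below is plain NumPy and not the authors' code; it recovers the stationary RBF kernel, and per the paper's result, drawing the frequencies from a Student-t distribution instead would target a Matérn-family covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, ell = 2, 2000, 1.0  # input dim, number of hidden units, lengthscale

# Frequencies drawn from the RBF kernel's spectral density (a Gaussian);
# a Student-t draw here would instead target a Matérn-family kernel.
W = rng.normal(0.0, 1.0 / ell, size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def features(x):
    # One hidden layer with a periodic (cosine) activation
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = features(x) @ features(y)                    # Monte Carlo kernel estimate
exact = np.exp(-np.sum((x - y) ** 2) / (2 * ell**2))  # stationary RBF kernel
```

The inner product of the hidden-layer features concentrates around the translation-invariant kernel value as the width `D` grows, which is the finite-width mechanism behind the stationarity result.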

Empirical Validation

Empirical evaluations verify that periodic activation functions match baseline in-domain performance while substantially curbing overconfidence in OOD scenarios. Experiments cover regression and classification on UCI datasets, rotated MNIST, and an OOD image classification task using CIFAR-10 and SVHN. These results underline the efficacy of globally stationary models in retaining uncertainty, a desirable trait for real-world deployment.
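The OOD behavior can be illustrated with Bayesian linear regression on periodic random features: because the induced prior is stationary, the predictive variance reverts toward the prior level far from the training inputs instead of collapsing. This is a minimal hypothetical sketch, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
D, ell, noise_var, prior_var = 500, 0.5, 0.01, 1.0

# Periodic random features approximating a stationary (RBF) prior
W = rng.normal(0.0, 1.0 / ell, size=(D, 1))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def phi(X):
    # X has shape (n, 1); returns an (n, D) feature matrix
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Training inputs confined to [-2, 2]; targets from a toy function
X = rng.uniform(-2.0, 2.0, size=(40, 1))
y = np.sin(3.0 * X).ravel() + np.sqrt(noise_var) * rng.normal(size=40)

# Bayesian linear regression posterior over the feature weights
Phi = phi(X)
Sigma = np.linalg.inv(Phi.T @ Phi / noise_var + np.eye(D) / prior_var)
mu = Sigma @ Phi.T @ y / noise_var  # posterior mean (unused below)

def predictive_var(Xs):
    F = phi(Xs)
    return np.einsum('ij,jk,ik->i', F, Sigma, F)  # epistemic variance only

in_var = predictive_var(np.array([[0.0]]))[0]    # inside the data range
ood_var = predictive_var(np.array([[10.0]]))[0]  # far outside the data
# ood_var reverts toward the prior variance, while in_var stays small
```

A non-stationary model in the same role would often extrapolate with unwarranted confidence at the distant input; the stationary feature model instead reports near-prior uncertainty there, which is the property the rotated-MNIST and CIFAR-10/SVHN experiments probe at scale.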

Practical and Theoretical Implications

The induction of stationarity through periodic activation functions suggests a path to reducing neural networks' brittleness under data shift. The findings open avenues for more robust AI systems, with immediate implications for domains that demand reliable predictions, such as medicine and autonomous systems. Moreover, periodic activations could serve as a bridge between deterministic deep learning and the probabilistic community's demands for interpretability and uncertainty management.

Future Developments

The extension of periodic activation functions in deep architectures paves the way for more complex structured priors, contributing to deep learning models that maintain robustness and accuracy in diverse applications. Future works could explore the dynamic interplay between these periodic functions and advanced neural architectures, such as transformers or convolutional layers, to expand the utility of this approach further.

Conclusion

This paper presents a compelling advancement in embedding stationarity within neural networks via periodic activation functions. The explicit linkage of neural mechanisms to spectral properties of GPs expands the theoretical and practical landscape of Bayesian neural networks. As such, this research contributes to a vital ongoing discourse in machine learning about developing models that can wisely navigate the uncertain terrains of real-world applications. This investigation is a notable stride toward producing AI systems that both 'know what they do not know' and exhibit caution in unfamiliar environments.
