
Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks (2312.08550v3)

Published 13 Dec 2023 in cs.LG, cs.AI, and eess.SP

Abstract: In this work, we formally prove that, under certain conditions, if a neural network is invariant to a finite group then its weights recover the Fourier transform on that group. This provides a mathematical explanation for the emergence of Fourier features -- a ubiquitous phenomenon in both biological and artificial learning systems. The results hold even for non-commutative groups, in which case the Fourier transform encodes all the irreducible unitary group representations. Our findings have consequences for the problem of symmetry discovery. Specifically, we demonstrate that the algebraic structure of an unknown group can be recovered from the weights of a network that is at least approximately invariant within certain bounds. Overall, this work contributes to a foundation for an algebraic learning theory of invariant neural network representations.

Citations (9)

Summary

  • The paper establishes that invariant networks recover the Fourier transform of finite groups, up to a linear transformation.
  • It rigorously extends algebraic learning theory to both commutative and non-commutative groups using irreducible unitary representations.
  • The findings enable symmetry discovery in network design, enhancing learning efficiency and model interpretability.

Essay: Harmonics of Learning and Invariant Networks

The paper "Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks" presents a theoretical exploration into the intersection of invariant neural networks and harmonic analysis. The central thesis is that networks invariant to finite groups will inherently recover the Fourier transform on those groups, suggesting a fundamental algebraic underpinning to observed phenomena in neural networks.

Theoretical Contributions

Under the framework of algebraic learning theory, the authors establish rigorous results connecting the invariance properties of neural networks with the emergence of Fourier features. Notably, these findings are not restricted to commutative groups but extend to non-commutative groups, where irreducible unitary representations come into play. The emergence of Fourier features in the weights of such invariant networks serves as both a theoretical curiosity and a practical insight into symmetry detection in learning systems.
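To make the non-commutative case concrete, the snippet below checks Schur orthogonality for the character table of S3, the smallest non-abelian group. The character values are standard facts about S3, not taken from the paper; this is an illustration of the irreducible-representation machinery the results rely on.

```python
import numpy as np

# Character table of S3 (standard group-theory facts, not from the paper):
# columns = conjugacy classes {e}, {transpositions}, {3-cycles},
# with class sizes 1, 3, 2; rows = the three irreducible representations.
class_sizes = np.array([1, 3, 2])
chars = np.array([
    [1,  1,  1],   # trivial representation
    [1, -1,  1],   # sign representation
    [2,  0, -1],   # 2-dimensional standard representation
])

# Schur orthogonality: (1/|G|) * sum_g chi_i(g) * conj(chi_j(g)) = delta_ij
G = class_sizes.sum()  # |S3| = 6
gram = (chars * class_sizes) @ chars.conj().T / G
print(np.allclose(gram, np.eye(3)))  # True
```

The orthogonality of irreducible characters is what makes the group Fourier transform on a non-commutative group invertible, mirroring the role of orthogonal complex exponentials in the commutative case.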

The paper presents a series of mathematical results through theorems and lemmas detailing these phenomena. The key result states that if a parametric function is invariant to the actions of a finite group, then its weights recover the Fourier transform on that group, up to a linear transformation. This conclusion is particularly significant for designing neural networks that leverage symmetry to enhance learning efficiency and interpretability.
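For the commutative case this claim can be illustrated directly: on the cyclic group Z_n the group Fourier transform is the ordinary DFT, and any linear map that commutes with cyclic shifts is circulant and is diagonalized by the DFT. A minimal NumPy sketch (an illustration of the classical fact, not the paper's construction):

```python
import numpy as np

# A shift-invariant (equivariant) linear map on Z_n is circulant, and the
# DFT diagonalizes it -- so, up to a (here diagonal) linear transformation,
# shift-invariant weights are Fourier modes.
n = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(n)

# Circulant matrix: W[i, j] = c[(i - j) mod n], which commutes with shifts.
W = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

F = np.fft.fft(np.eye(n)) / np.sqrt(n)  # unitary DFT matrix
D = F @ W @ np.linalg.inv(F)            # should be diagonal

off_diag = D - np.diag(np.diag(D))
print(np.allclose(off_diag, 0))  # True
```

The diagonal of D is exactly the DFT of the filter c, so the Fourier basis appears in the eigenstructure of the invariant weights.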

Practical Implications

A salient implication of this research is its contribution to symmetry discovery. By examining the structure of weights in invariant networks, it is possible to ascertain the algebraic structure of an unknown group—a long-standing challenge in machine learning. This has practical repercussions in areas like geometric deep learning, where the discovery of underlying symmetries can lead to more efficient models that generalize better and resist overfitting.
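As a simplified illustration of this idea (not the paper's recovery procedure), one can probe a weight matrix for symmetries by testing which permutations commute with it; for circulant weights the cyclic shifts pass and a generic transposition fails, revealing the underlying Z_n structure:

```python
import numpy as np

# Hypothetical symmetry probe: check which permutation matrices P
# satisfy P @ W == W @ P for given weights W.
n = 6
rng = np.random.default_rng(1)
c = rng.standard_normal(n)
W = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def perm_matrix(p):
    """Permutation matrix P with P[i, p[i]] = 1."""
    P = np.zeros((len(p), len(p)))
    P[np.arange(len(p)), p] = 1.0
    return P

shift = perm_matrix(np.roll(np.arange(n), 1))     # cyclic shift
swap = perm_matrix(np.array([1, 0, 2, 3, 4, 5]))  # transposition (0 1)

print(np.allclose(shift @ W, W @ shift))  # True: shift is a symmetry
print(np.allclose(swap @ W, W @ swap))    # False: a generic swap is not
```

A brute-force scan over permutations is only feasible for tiny groups; the point of the paper's bounds is that the algebraic structure can instead be read off from the weights themselves.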

For machine learning practitioners, this paper lays a groundwork for utilizing invariant network architectures where symmetries in data are either known or need discovery. It provides formal justifications for incorporating harmonic analysis into the learning process, thus offering a theoretical basis for the design of more robust neural architectures.
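One standard way to obtain such an invariant architecture, sketched here as a generic construction rather than the paper's method, is group averaging (the Reynolds operator): average an arbitrary network over all group actions on the input.

```python
import numpy as np

# Minimal sketch: make an arbitrary MLP invariant to cyclic shifts of the
# input by averaging it over the group Z_n (the Reynolds operator).
rng = np.random.default_rng(2)
n, h = 8, 16
W1 = rng.standard_normal((h, n))
w2 = rng.standard_normal(h)

def mlp(x):
    return w2 @ np.tanh(W1 @ x)

def invariant(x):
    # Average the network over all cyclic shifts of the input.
    return np.mean([mlp(np.roll(x, g)) for g in range(len(x))])

x = rng.standard_normal(n)
print(np.isclose(invariant(x), invariant(np.roll(x, 3))))  # True
```

The paper's results concern what the trained weights of such invariant functions must look like, namely that they encode the group's Fourier transform up to a linear transformation.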

Future Directions and Speculations

Looking forward, the results of this research could catalyze further exploration in multiple avenues. Expanding the framework to encompass continuous groups or wavelet-based features could provide a more complete landscape for real-world data that is locally invariant. Additionally, exploring the implications in biological neural systems could expose parallels that deepen our understanding of learning mechanisms in the brain.

Complex-valued networks, as discussed within the paper, raise intriguing prospects given their relationship to Fourier transforms. Embracing this complexity could shift standard practices in network architecture, prompting a reassessment of how non-trivial mathematical properties are leveraged in artificial intelligence.

Conclusion

This paper develops a concrete mathematical linkage between group invariance and the emergence of harmonic features in learning systems. By drawing parallels from algebraic structures, the authors argue convincingly for the universality of Fourier features, offering a profound insight into the nature of symmetry and structure in both artificial and biological networks. The implications are notable, suggesting pathways toward more principled methods for harnessing data symmetries in machine learning models, with potential expansions challenging current notions of network design and interpretation.
