
Expressivity of the Linearizer architecture

Characterize the precise expressivity of the Linearizer architecture f(x) = g_y^{-1}(A g_x(x)), where g_x and g_y are invertible neural networks and A is a linear operator; specifically, determine the exact class of input–output mappings that are representable by such Linearizers under the induced vector space operations, and provide necessary and sufficient conditions that delineate these representable mappings.


Background

The paper introduces the Linearizer framework, which represents a mapping f(x) as g_y^{-1}(A g_x(x)) with invertible neural networks g_x and g_y that induce vector space operations under which f becomes linear. This construction enables direct use of linear algebra (e.g., SVD, pseudoinverse) on nonlinear mappings.
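The construction can be illustrated numerically. In the sketch below the invertible maps are simple elementwise bijections on R standing in for the paper's invertible neural networks, and A is a random matrix; all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Invertible "networks" stand-ins: elementwise bijections on R (illustrative).
g_x, g_x_inv = np.sinh, np.arcsinh
g_y, g_y_inv = lambda v: v ** 3, np.cbrt

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))        # the linear core

def f(x):
    """Linearizer: f(x) = g_y^{-1}(A g_x(x))."""
    return g_y_inv(A @ g_x(x))

# Vector space operations induced by pulling back through g_x and g_y.
def oplus_x(x1, x2):                   # "addition" on the input space
    return g_x_inv(g_x(x1) + g_x(x2))

def oplus_y(y1, y2):                   # "addition" on the output space
    return g_y_inv(g_y(y1) + g_y(y2))

x1 = 0.5 * rng.standard_normal(3)
x2 = 0.5 * rng.standard_normal(3)

# f is linear with respect to the induced operations.
lhs = f(oplus_x(x1, x2))
rhs = oplus_y(f(x1), f(x2))
assert np.allclose(lhs, rhs)

# Linear algebra then transfers to the nonlinear map, e.g. a pseudoinverse:
def f_pinv(y):
    return g_x_inv(np.linalg.pinv(A) @ g_y(y))

# Since this A is full rank, f_pinv recovers the input exactly.
assert np.allclose(f_pinv(f(x1)), x1)
```

The open question above asks exactly which mappings f admit such a factorization once g_x, g_y, and A are allowed to vary.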

While the authors demonstrate applications and analyze some structural constraints (e.g., implications for null spaces and spectral properties), they acknowledge that a full theoretical characterization of the functions representable by Linearizers has not yet been established. Understanding this expressivity is essential for delineating the capabilities and limitations of Linearizers across tasks and domains.

References

Finally, the precise expressivity of the Linearizer remains an open theoretical question.

Who Said Neural Networks Aren't Linear? (2510.08570 - Berman et al., 9 Oct 2025) in Section 5 (Limitations)