Identifiability of Deep Polynomial Neural Networks (2506.17093v1)

Published 20 Jun 2025 in cs.LG, cs.AI, math.AG, and stat.ML

Abstract: Polynomial Neural Networks (PNNs) possess a rich algebraic and geometric structure. However, their identifiability -- a key property for ensuring interpretability -- remains poorly understood. In this work, we present a comprehensive analysis of the identifiability of deep PNNs, including architectures with and without bias terms. Our results reveal an intricate interplay between activation degrees and layer widths in achieving identifiability. As special cases, we show that architectures with non-increasing layer widths are generically identifiable under mild conditions, while encoder-decoder networks are identifiable when the decoder widths do not grow too rapidly. Our proofs are constructive and center on a connection between deep PNNs and low-rank tensor decompositions, and Kruskal-type uniqueness theorems. This yields both generic conditions determined by the architecture, and effective conditions that depend on the network's parameters. We also settle an open conjecture on the expected dimension of PNN's neurovarieties, and provide new bounds on the activation degrees required for it to reach its maximum.

Summary

  • The paper establishes a localization theorem showing deep polynomial neural network identifiability is equivalent to that of their 2-layer subnetworks.
  • Identifiability conditions are derived explicitly based on layer widths and activation degrees, leveraging connections to partially symmetric tensor decomposition theory.
  • These findings offer principled guidance for designing interpretable PNNs and suggest tensor-based algorithms for parameter recovery under verifiable conditions.

Identifiability of Deep Polynomial Neural Networks: An Expert Overview

This paper presents a comprehensive theoretical analysis of the identifiability of deep polynomial neural networks (PNNs), covering architectures both with and without bias terms. The authors establish a rigorous connection between the algebraic structure of PNNs, tensor decomposition theory, and the conditions under which the parameters of such networks can be uniquely recovered (up to inherent symmetries). The results have significant implications for interpretability, model selection, and the theoretical understanding of neural network expressivity.

Theoretical Contributions

The central result is a localization theorem: a deep PNN is (globally) identifiable if and only if every 2-layer subnetwork formed by any pair of consecutive layers is itself identifiable. This equivalence is nontrivial and provides a reduction of the identifiability problem for deep architectures to the well-studied case of shallow (2-layer) networks. The proof leverages the equivalence between PNNs and partially symmetric tensor decompositions, specifically using Kruskal-type uniqueness theorems.
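The objects involved can be made concrete with a small sketch (illustrative, not taken from the paper): a bias-free PNN alternates linear maps with coordinate-wise power activations, so the whole network computes a tuple of homogeneous polynomials whose total degree is the product of the layer degrees.

```python
import numpy as np

def pnn_forward(x, weights, degrees):
    """Forward pass of a bias-free deep polynomial neural network.

    Each layer applies a linear map followed by the coordinate-wise
    power activation t -> t**r, so the network computes a tuple of
    homogeneous polynomials of total degree prod(degrees).
    """
    h = x
    for W, r in zip(weights, degrees):
        h = (W @ h) ** r
    return h

# Hypothetical example: widths 4 -> 3 -> 2 with activation degrees (2, 3).
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((3, 4)), rng.standard_normal((2, 3))
x = rng.standard_normal(4)
y = pnn_forward(x, [W1, W2], [2, 3])
```

Homogeneity is easy to sanity-check: scaling the input by 2 scales the output by 2 to the power 2 × 3 = 6.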

Key technical contributions include:

  • Generic Identifiability Conditions: The paper provides explicit, constructive conditions on layer widths and activation degrees that guarantee identifiability. For instance, pyramidal architectures (with non-increasing layer widths) are generically identifiable for quadratic or higher activation degrees. Encoder-decoder (bottleneck) architectures are identifiable provided the decoder widths do not increase too rapidly.
  • Linear Activation Thresholds: The minimal activation degree required for identifiability is shown to be linear in the layer widths, improving upon previous quadratic bounds. This settles and generalizes prior conjectures regarding the expressivity and identifiability of PNNs.
  • Bias Terms via Homogenization: The identifiability analysis is extended to PNNs with bias terms using a homogenization procedure, mapping the problem to an equivalent homogeneous PNN with augmented input dimension.
  • Neurovariety Dimension: The results settle an open conjecture on the expected dimension of the neurovariety associated with a PNN, providing new bounds on the activation degrees required for the neurovariety to reach its maximal dimension.
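The homogenization step for bias terms can be illustrated at the level of a single layer (a sketch using the standard augmented-input convention; the paper's full construction additionally tracks degrees across layers): appending a constant coordinate 1 to the input absorbs the bias vector into an extra weight column.

```python
import numpy as np

def affine_layer(x, W, b, r):
    """Layer with bias: coordinate-wise power of an affine map."""
    return (W @ x + b) ** r

def homogenized_layer(x, W, b, r):
    """Equivalent bias-free layer on the augmented input (x, 1):
    the bias becomes one more column of the weight matrix."""
    x_aug = np.append(x, 1.0)           # input dimension d -> d + 1
    W_aug = np.hstack([W, b[:, None]])  # [W | b]
    return (W_aug @ x_aug) ** r

rng = np.random.default_rng(1)
W, b = rng.standard_normal((3, 4)), rng.standard_normal(3)
x = rng.standard_normal(4)
```

The two layers agree exactly, which is what lets identifiability results for homogeneous PNNs transfer to networks with biases.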

Numerical and Structural Implications

The paper's claims are supported by constructive proofs, allowing for the explicit verification of identifiability for a given set of network parameters. The conditions are not only generic but also effective: for a specific PNN, one can check the Kruskal ranks of the relevant matrices to certify uniqueness. This is particularly relevant for practitioners interested in model interpretability and parameter recovery.
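Such effective checks rest on the Kruskal rank: the largest k such that *every* set of k columns of a matrix is linearly independent. For the small weight matrices of a given PNN it can be computed by brute force; a minimal sketch (not the paper's code):

```python
import numpy as np
from itertools import combinations

def kruskal_rank(A, tol=1e-10):
    """Kruskal rank of A: the largest k such that EVERY subset of k
    columns is linearly independent.  Brute force over column subsets,
    which is fine for the small matrices arising as PNN weight layers."""
    n = A.shape[1]
    for k in range(n, 0, -1):
        if all(np.linalg.matrix_rank(A[:, list(cols)], tol=tol) == k
               for cols in combinations(range(n), k)):
            return k
    return 0
```

For example, a matrix with a repeated column has Kruskal rank 1 even when its ordinary rank is larger, which is exactly the kind of degeneracy that Kruskal-type uniqueness theorems rule out.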

Explicit quantitative conditions include:

  • For pyramidal architectures, identifiability holds for all activation degrees $r_\ell \geq 2$.
  • For general architectures, identifiability is guaranteed if $r_\ell \geq 2 d_\ell - 2$ for each layer $\ell$, where $d_\ell$ is the width of the $\ell$-th layer.
  • Encoder-decoder networks are identifiable if the decoder widths satisfy $d_\ell \leq 2 d_{\ell-1} - 2$ for each layer after the bottleneck.
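These conditions translate directly into a mechanical check on a proposed architecture. The helper names below, and the convention pairing degree $r_\ell$ with width $d_\ell$, are illustrative choices rather than the paper's notation:

```python
def pyramidal_identifiable(widths, degrees):
    """Pyramidal sufficient condition: non-increasing layer widths
    and every activation degree at least 2."""
    non_increasing = all(a >= b for a, b in zip(widths, widths[1:]))
    return non_increasing and all(r >= 2 for r in degrees)

def general_identifiable(widths, degrees):
    """General sufficient condition: r_l >= 2*d_l - 2 for each layer.
    Here widths = [d_0, d_1, ..., d_L] and degrees = [r_1, ..., r_L];
    the pairing of r_l with d_l is one illustrative convention."""
    return all(r >= 2 * d - 2 for d, r in zip(widths[1:], degrees))
```

For instance, widths `[5, 4, 3, 2]` with degrees `[2, 3, 2]` pass the pyramidal check, while a width profile that grows mid-network must instead satisfy the stronger degree bound.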

Bold Claims

The paper makes the bold claim that identifiability of deep PNNs is completely characterized by the identifiability of their 2-layer subnetworks, a property not previously established for other classes of deep neural networks. This result is in contrast to prior work, which only established local identifiability or required much stronger (often impractical) conditions on activation degrees and layer widths.

Practical and Theoretical Implications

Practical implications:

  • Interpretability and Model Selection: The results provide a principled basis for designing PNN architectures that are interpretable, i.e., whose parameters can be uniquely recovered from input-output data (up to scaling and permutation symmetries).
  • Parameter Recovery Algorithms: The constructive nature of the proofs suggests that tensor decomposition algorithms can be directly applied to recover PNN parameters, with identifiability guarantees under the stated conditions.
  • Network Compression and Pruning: The minimality results imply that overparameterized PNNs can be pruned to unique, irreducible representations, aiding in model compression and architecture search.
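The symmetries modulo which recovery is possible can be seen directly in a 2-layer sketch (illustrative, not from the paper): permuting neurons, and rescaling each neuron's input weights by $\lambda_i$ while rescaling its output weights by $\lambda_i^{-r}$, leaves the network function unchanged.

```python
import numpy as np

def two_layer_pnn(x, W1, W2, r):
    """2-layer PNN: linear map, power activation, linear readout."""
    return W2 @ ((W1 @ x) ** r)

rng = np.random.default_rng(2)
W1, W2 = rng.standard_normal((3, 4)), rng.standard_normal((2, 3))
x = rng.standard_normal(4)
r = 2

# Inherent symmetries: permute the hidden neurons and rescale each one
# (row i of W1 by lam[i], matching column of W2 by lam[i]**(-r)).
perm = np.array([2, 0, 1])
lam = np.array([0.5, 2.0, 3.0])
W1_sym = (lam[:, None] * W1)[perm]
W2_sym = (W2 / lam ** r)[:, perm]
```

The transformed parameters differ from the originals yet define the same polynomial map, so "unique recovery" can only ever mean unique up to exactly these transformations.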

Theoretical implications:

  • Expressivity and Neurovarieties: The link between identifiability and the dimension of neurovarieties provides a new perspective on the expressivity of PNNs, connecting algebraic geometry with neural network theory.
  • Extension to Other Architectures: The localization principle may inspire similar results for other classes of neural networks, such as those with non-polynomial activations or more complex connectivity patterns.
  • Tensor Methods in Deep Learning: The work reinforces the utility of tensor decomposition theory in understanding and analyzing deep learning models, particularly in the context of parameter identifiability and uniqueness.

Future Directions

Potential avenues for further research include:

  • Algorithmic Development: Designing efficient algorithms for parameter recovery in deep PNNs, leveraging the established identifiability conditions and tensor decomposition techniques.
  • Extension to Non-Polynomial Activations: Investigating whether similar localization results hold for networks with other activation functions, such as ReLU or sigmoidal activations.
  • Empirical Validation: Applying the theoretical results to real-world datasets and tasks, particularly in domains where interpretability and parameter recovery are critical (e.g., scientific machine learning, causal inference).
  • Generalization to Other Network Topologies: Exploring identifiability in architectures with skip connections, convolutional layers, or attention mechanisms, using the tensor-based framework developed in this work.

Summary Table: Identifiability Conditions

| Architecture Type | Layer Widths Condition | Activation Degree Condition | Identifiability Guarantee |
| --- | --- | --- | --- |
| Pyramidal (non-increasing) | $d_0 \geq d_1 \geq \cdots \geq d_L$ | $r_\ell \geq 2$ | Generic identifiability |
| Encoder-Decoder (bottleneck) | $d_\ell \leq 2 d_{\ell-1} - 2$ after bottleneck | $r_\ell \geq 2$ | Generic identifiability |
| General | Any | $r_\ell \geq 2 d_\ell - 2$ | Generic identifiability |

Conclusion

This work provides a rigorous and constructive characterization of the identifiability of deep polynomial neural networks, reducing the problem to the analysis of 2-layer subnetworks and establishing explicit, verifiable conditions on network architecture and activation degrees. The results have direct implications for the design, analysis, and interpretability of PNNs, and open new directions for the application of algebraic and tensor methods in deep learning theory.