Strict positive definiteness of the Fisher information for general deep networks

Establish explicit sufficient conditions under which the population Fisher information matrix I(θ0) = E[∇θ fθ0(X) ∇θ fθ0(X)ᵀ] of multilayer (multi-hidden-layer) feedforward neural networks is strictly positive definite, extending the irreducibility-based criterion known for single-hidden-layer networks to general deep architectures.

Background

The paper contrasts identifiability properties of classical parametric models with those of neural networks. Local identifiability is linked to invertibility (strict positive definiteness) of the population Fisher information matrix I(θ0) = E[∇θ fθ0(X) ∇θ fθ0(X)ᵀ]. For one-hidden-layer networks, Fukumizu (1996) provided a sufficient condition (based on irreducibility) ensuring strict positive definiteness of the Fisher information for any positive continuous input density.
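To make the one-hidden-layer criterion concrete (a standard formulation of Fukumizu's setting, not quoted from the paper): for fθ(x) = Σⱼ aⱼ σ(wⱼᵀ x + bⱼ) with θ = (aⱼ, wⱼ, bⱼ), j = 1, …, k, the gradient coordinates are ∂f/∂aⱼ = σ(wⱼᵀ x + bⱼ), ∂f/∂wⱼ = aⱼ σ′(wⱼᵀ x + bⱼ) x, and ∂f/∂bⱼ = aⱼ σ′(wⱼᵀ x + bⱼ). Strict positive definiteness of I(θ0) is exactly the absence of a nonzero linear combination of these functions of x that vanishes on the support of the input distribution, and irreducibility (roughly: every aⱼ ≠ 0, every wⱼ ≠ 0, and no two units with (wⱼ, bⱼ) = ±(wᵢ, bᵢ)) rules such dependences out for odd sigmoidal activations such as tanh.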

In the Discussion, the authors note that while such a condition exists for single-hidden-layer networks, the corresponding question for general deep networks with multiple hidden layers remains unresolved, motivating the need to determine conditions guaranteeing invertibility of the population Fisher information in deep architectures.
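As a rough numerical illustration of the single-hidden-layer case (a sketch only: the tanh activation, the tiny network sizes, the standard Gaussian input density, and the helper name fisher are assumptions made here, not taken from the paper), one can Monte Carlo estimate I(θ) and inspect its smallest eigenvalue. Duplicating a hidden unit violates irreducibility and produces an exactly singular matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher(a, W, b, n=100_000):
    """Monte Carlo estimate of I(theta) = E[grad_theta f(X) grad_theta f(X)^T]
    for f_theta(x) = sum_j a_j * tanh(w_j . x + b_j), with X ~ N(0, I_d)."""
    k, d = W.shape
    X = rng.standard_normal((n, d))
    Z = X @ W.T + b                               # pre-activations, (n, k)
    S = np.tanh(Z)                                # hidden activations
    dS = 1.0 - S**2                               # tanh'(Z)
    G_a = S                                       # df/da_j
    G_W = (a * dS)[:, :, None] * X[:, None, :]    # df/dw_j = a_j tanh'(z_j) x, (n, k, d)
    G_b = a * dS                                  # df/db_j
    G = np.concatenate([G_a, G_W.reshape(n, -1), G_b], axis=1)  # (n, p)
    return (G.T @ G) / n                          # (p, p) estimate of I(theta)

d, k = 2, 3
a, W, b = rng.standard_normal(k), rng.standard_normal((k, d)), rng.standard_normal(k)

# Irreducible network (distinct, nonzero units): smallest eigenvalue stays away from 0.
print(np.linalg.eigvalsh(fisher(a, W, b)).min())

# Reducible network: copying unit 0 into unit 1 forces df/da_0 = df/da_1 for every
# input x, an exact linear dependence among gradient coordinates.
W2, b2 = W.copy(), b.copy()
W2[1], b2[1] = W2[0], b2[0]
print(np.linalg.eigvalsh(fisher(a, W2, b2)).min())  # ~ 0 up to floating point
```

For a generic (irreducible) draw the smallest eigenvalue stays bounded away from zero, consistent with Fukumizu's criterion; the duplicated unit yields an exact null direction of I(θ), precisely the kind of degeneracy whose multi-hidden-layer analogue the open question asks to characterize.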

References

A 1996 result of Fukumizu gives a sufficient condition, in terms of 'irreducibility' of the network, for the Fisher information matrix of a one-hidden-layer neural network (with respect to any positive continuous input density) to be strictly positive definite. To our knowledge, the answer to this question is not known for general deep networks, although there has been much related work.

Non-identifiability distinguishes Neural Networks among Parametric Models (arXiv:2504.18017, Chatterjee et al., 25 Apr 2025), in Discussion (Section 4)