Characterize the dependency structure among linear pieces of ReLU network outputs

Determine the specific structure of the dependencies among the linear pieces of the functions computed by feedforward ReLU neural networks across activation regions in input space, by deriving explicit relations that constrain the outputs across regions as a consequence of the network's shared parameters.

Background

The paper studies how the outputs of ReLU networks vary across activation regions in input and parameter space. Although ReLU networks can produce functions with many linear regions, the linear pieces are not independent: they are all computed from the same shared parameters, which creates nontrivial dependencies among them.
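This coupling is easy to see in one dimension. As a minimal sketch (a hypothetical toy network with hand-picked weights, not an example from the paper): for a one-hidden-layer ReLU network on the line, the slope of each linear piece is the sum of the shared products v_i * w_i over the neurons active in that region, so crossing a breakpoint changes the slope by exactly one such term.

```python
import numpy as np

# Hypothetical toy network: 1 input, 3 hidden ReLU units, 1 linear output
# (weights chosen by hand for illustration; not from the paper).
w = np.array([1.0, -2.0, 0.5])   # hidden-layer weights
b = np.array([0.5, 1.0, -1.0])   # hidden-layer biases
v = np.array([2.0, -1.0, 3.0])   # output weights

def f(x):
    return v @ np.maximum(w * x + b, 0.0)

# Each hidden unit switches at its breakpoint t_i = -b_i / w_i,
# partitioning the input line into 4 activation regions.
order = np.argsort(-b / w)       # neurons in breakpoint order
t = (-b / w)[order]              # sorted breakpoints

# One probe point inside each region, then the slope of f there.
probes = np.concatenate(([t[0] - 1.0], (t[:-1] + t[1:]) / 2, [t[-1] + 1.0]))
eps = 1e-6
slopes = np.array([(f(x + eps) - f(x - eps)) / (2 * eps) for x in probes])

# On each region the slope is a subset sum of the shared products
# v_i * w_i, so adjacent pieces are coupled: crossing the k-th
# breakpoint changes the slope by sign(w_i) * v_i * w_i for the
# single neuron i that switches there.
for k, i in enumerate(order):
    jump = slopes[k + 1] - slopes[k]
    assert np.isclose(jump, np.sign(w[i]) * v[i] * w[i])

print(slopes)  # four slopes, all subset sums of {v_i * w_i} = {2, 2, 1.5}
```

With these weights the slopes are (2, 4, 2, 3.5); any one slope plus the three products v_i * w_i determines all the others, which is the kind of dependency the question asks to characterize in general.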

Understanding these dependencies has direct implications for generalization and for implicit descriptions of the function spaces representable by ReLU networks. The authors develop algebraic tools (varieties and determinantal constraints) to begin describing such relations.
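One simple instance of a determinantal constraint (again a hypothetical toy example with hand-picked weights, not a construction from the paper): for a network with several inputs and outputs, the Jacobian jump across a single activation boundary is a rank-one outer product of shared weight vectors, so all of its 2x2 minors vanish.

```python
import numpy as np

# Hypothetical toy network: 2 inputs, 3 hidden ReLU units, 2 outputs
# (hand-picked weights; illustrative only, not from the paper).
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # hidden weights
b = np.array([0.0, -1.0, -3.0])                     # hidden biases
V = np.array([[1.0, 2.0, -1.0], [3.0, -1.0, 4.0]])  # output weights

def jacobian(x):
    # On each activation region the network is linear with Jacobian
    # V @ diag(active) @ W, where `active` is the region's ReLU pattern.
    active = (W @ x + b > 0).astype(float)
    return V @ np.diag(active) @ W

# Two points on opposite sides of neuron 0's boundary (x[0] = 0),
# chosen so the other neurons' activations do not change.
J_plus = jacobian(np.array([0.1, 0.5]))
J_minus = jacobian(np.array([-0.1, 0.5]))

# Crossing the boundary changes the Jacobian by the rank-one matrix
# V[:, 0] (outer) W[0, :], so its determinant vanishes: a determinantal
# constraint tying the two adjacent linear pieces together.
jump = J_plus - J_minus
assert np.isclose(np.linalg.det(jump), 0.0)
assert np.allclose(jump, np.outer(V[:, 0], W[0, :]))
```

The rank-one jump holds for any weights, since only one neuron's contribution V[:, i] * W[i, :] is switched on or off at the boundary; the open problem is to describe the full system of such relations across all regions.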

References

Despite the intuitive appeal of this picture, characterizing the specific structure of these dependencies has remained an open problem.

Alexandr et al., "Constraining the outputs of ReLU neural networks," arXiv:2508.03867, 5 Aug 2025 — Introduction (Section 1).