Conjecture on the effective search space of linear regions in ReLU networks

Determine whether, when optimizing a linear objective over a bounded input domain of a trained feedforward ReLU network, the effective search space, namely the set of linear regions actually encountered, is smaller and simpler than worst-case bounds suggest, which would explain the observed scalability of lightweight LP-based local search methods.
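
A plausible formalization of the underlying problem (notation assumed here rather than quoted from the paper): given a trained L-layer ReLU network N with fixed weights, one maximizes a linear functional of its output over a box-constrained input domain,

    \max_{x \in [\ell, u]^{n_0}} \; c^{\top} N(x),
    \qquad
    N(x) = W^{(L)} \sigma\!\big( \cdots \, \sigma(W^{(1)} x + b^{(1)}) \cdots \big) + b^{(L)},
    \qquad
    \sigma(t) = \max(t, 0).

Since N is piecewise linear, the domain decomposes into polyhedral linear regions on each of which the objective is affine; the conjecture concerns how many of these regions a search procedure must actually visit.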

Background

The paper studies optimization over trained feedforward ReLU neural networks by exploiting their piecewise-linear structure. Although worst-case analyses show that the number of linear regions can grow rapidly with network size, subsequent results reveal architectural tradeoffs, and empirical evidence suggests that trained networks realize far fewer regions in practice.
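
To make the notion of a linear region concrete, the following sketch (all weights and dimensions are illustrative, not taken from the paper) identifies the region containing an input by its ReLU activation pattern: two inputs lie in the same region exactly when every hidden unit is on or off identically for both. Random sampling on a small network typically hits far fewer distinct patterns than the worst-case count allows.

    import numpy as np

    # Illustrative weights for a small feedforward ReLU network
    # (hypothetical, not from the paper): two hidden layers of width 8
    # acting on a 2-dimensional input.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

    def activation_pattern(x):
        """On/off pattern of every ReLU unit at input x; two inputs share
        a linear region iff their patterns coincide, so the pattern acts
        as a region identifier."""
        z1 = W1 @ x + b1
        z2 = W2 @ np.maximum(z1, 0.0) + b2
        return tuple(z1 > 0) + tuple(z2 > 0)

    # Count the distinct regions hit by uniform samples over the box.
    samples = rng.uniform(-1.0, 1.0, size=(10_000, 2))
    regions = {activation_pattern(x) for x in samples}
    print(f"distinct regions hit by 10k samples: {len(regions)}")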

Building on findings that gradients change little across adjacent linear regions, the authors hypothesize that practical instances present a reduced and more regular search landscape. This motivates their Relax-and-Walk (RW) algorithm, which navigates linear regions via LP relaxations and local search. Establishing the conjectured structural property would provide theoretical justification for the empirical scalability of such LP-based methods.
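
As a rough illustration of the local-search component, the sketch below (a plain projected-gradient walk over assumed toy weights; it is not the paper's exact Relax-and-Walk procedure, which seeds the walk from LP-relaxation solutions) exploits the fact that the input gradient is constant inside each linear region, so repeated gradient steps carry the iterate across adjacent regions until no improvement remains.

    import numpy as np

    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)  # toy weights
    w2 = rng.normal(size=8)

    def f(x):
        """Objective: a linear functional of a one-hidden-layer ReLU net."""
        return w2 @ np.maximum(W1 @ x + b1, 0.0)

    def grad_f(x):
        """Input gradient; constant within each linear region (a.e.)."""
        active = (W1 @ x + b1) > 0
        return (w2 * active) @ W1

    def local_walk(x, lo=-1.0, hi=1.0, step=0.05, max_iters=200):
        """Walk across adjacent linear regions by following the gradient,
        projecting back onto the box [lo, hi] after each step."""
        best = f(x)
        for _ in range(max_iters):
            x_new = np.clip(x + step * grad_f(x), lo, hi)
            if f(x_new) <= best + 1e-9:
                break  # no improving neighbor: stop at a local optimum
            x, best = x_new, f(x_new)
        return x, best

    x_star, val = local_walk(rng.uniform(-1.0, 1.0, size=2))
    print(f"local optimum value: {val:.4f}")

Because the objective is affine on each region, each accepted step either improves within the current region or crosses into an adjacent one, which is why the number of regions actually visited, rather than the total number of regions, is the relevant complexity measure for the conjecture.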

References

Hence, we may conjecture that the search space is actually smaller and simpler than expected, and thus that a leaner algorithm may produce good results faster.

Optimization Over Trained Neural Networks: Taking a Relaxing Walk (arXiv:2401.03451, Tong et al., 7 Jan 2024), Section 1, Introduction.