Tightening convex relaxations of trained neural networks: a unified approach for convex and S-shaped activations (2410.23362v1)
Abstract: The non-convex nature of trained neural networks has created significant obstacles to their incorporation into optimization models. Given the wide array of applications of such embeddings, the optimization and deep learning communities have dedicated significant effort to the convexification of trained neural networks. Many approaches to date have obtained convex relaxations for each non-linear activation in isolation, which limits the tightness of the resulting relaxations. Anderson et al. (2020) strengthened these relaxations and provided a framework to obtain the convex hull of the graph of a piecewise linear convex activation composed with an affine function; this effectively convexifies activations such as the ReLU together with the affine transformation that precedes it. In this article, we contribute to this line of work by developing a recursive formula that yields a tight convexification for the composition of an activation with an affine function for a wide class of activation functions, namely, convex or "S-shaped". Our approach can be used to efficiently compute separating hyperplanes or determine that none exists in various settings, including non-polyhedral cases. We provide computational experiments to test the empirical benefits of these convex approximations.
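To make the separating-hyperplane idea concrete, below is a minimal sketch of the piecewise linear ReLU case treated by Anderson et al. (2020), which the abstract cites as the starting point; it is not the paper's recursive formula for general convex or S-shaped activations. It encodes the family of cuts y <= sum_{i in I} w_i (x_i - L̆_i (1 - z)) + (b + sum_{i not in I} w_i Ŭ_i) z for a single neuron y = max(0, w·x + b) over a box [L, U], together with the coordinate-wise, linear-time choice of the subset I* giving the most violated cut. Function names and the test point are illustrative assumptions.

```python
# Illustrative sketch of Anderson et al. (2020)-style cuts for one ReLU neuron,
# NOT the recursive formula of this paper. All names/values are assumptions.
import numpy as np

def breakpoints(w, L, U):
    """Per-coordinate bounds minimizing / maximizing each term w_i * x_i."""
    lo = np.where(w >= 0, L, U)   # L-breve in Anderson et al.'s notation
    hi = np.where(w >= 0, U, L)   # U-breve
    return lo, hi

def cut_rhs(I_mask, w, b, x, z, L, U):
    """Right-hand side of the cut indexed by subset I (boolean mask):
       y <= sum_{i in I} w_i (x_i - lo_i (1 - z)) + (b + sum_{i not in I} w_i hi_i) z
    """
    lo, hi = breakpoints(w, L, U)
    in_I = np.sum(w[I_mask] * (x[I_mask] - lo[I_mask] * (1.0 - z)))
    out_I = (b + np.sum(w[~I_mask] * hi[~I_mask])) * z
    return in_I + out_I

def separate(w, b, x_hat, y_hat, z_hat, L, U):
    """Coordinate-wise, put i in I* whenever its 'in I' contribution is smaller;
       this minimizes the right-hand side at (x_hat, z_hat).
       Returns (violated?, I*, rhs of the most violated cut)."""
    lo, hi = breakpoints(w, L, U)
    term_in = w * (x_hat - lo * (1.0 - z_hat))   # contribution if i in I
    term_out = w * hi * z_hat                    # contribution if i not in I
    I_star = term_in < term_out
    rhs = cut_rhs(I_star, w, b, x_hat, z_hat, L, U)
    return y_hat > rhs + 1e-9, I_star, rhs

# Illustrative usage on a 2-input neuron (numbers are made up):
w, b = np.array([1.0, -2.0]), 0.5
L, U = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
x_hat, y_hat, z_hat = np.array([0.8, -0.6]), 2.6, 0.7
violated, I_star, rhs = separate(w, b, x_hat, y_hat, z_hat, L, U)
print(violated, I_star, rhs)   # here the I* = {} cut, y <= 2.45, is violated by y_hat = 2.6
```

With z relaxed to [0, 1], these exponentially many cuts (plus y >= 0 and y >= w·x + b) describe a relaxation tighter than the usual per-activation big-M one, and the routine above shows why separation over the family is cheap; the paper's contribution is extending this kind of tight, separable convexification beyond piecewise linear convex activations to general convex and S-shaped ones.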