Tighter McCormick Relaxations through Subgradient Propagation (1710.09188v1)
Abstract: Tight convex and concave relaxations are of high importance in the field of deterministic global optimization. We present a heuristic to tighten relaxations obtained by the McCormick technique. We use the McCormick subgradient propagation (Mitsos et al., SIAM J. Optim., 2009) to construct simple affine under- and overestimators of each factor of the original factorable function. Then, we minimize and maximize these affine relaxations in order to obtain possibly improved range bounds for every factor resulting in possibly tighter final McCormick relaxations. We discuss the heuristic and its limitations, in particular the lack of guarantee for improvement. Subsequently, we provide numerical results for benchmark cases found in the COCONUT library and case studies presented in previous works and discuss computational efficiency. We see that the presented heuristic provides a significant improvement in tightness and decrease in computational time in many cases.
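The core idea of the heuristic can be sketched in a few lines. The sketch below is a hypothetical illustration (not the authors' implementation): given a convex relaxation value and a subgradient of a factor at a reference point, the corresponding affine underestimator is valid on the whole interval, so minimizing it over the interval (which for an affine function means checking the endpoints) yields a candidate lower bound; the overestimator is handled symmetrically. All function and variable names here are assumptions for illustration.

```python
def affine_lower_bound(cv, sv, xref, lo, hi):
    """Minimum over [lo, hi] of the affine underestimator cv + sv*(x - xref).

    cv is the convex relaxation value at xref, sv a subgradient there.
    An affine function attains its minimum at an interval endpoint.
    """
    return min(cv + sv * (lo - xref), cv + sv * (hi - xref))

def affine_upper_bound(cc, sc, xref, lo, hi):
    """Maximum over [lo, hi] of the affine overestimator cc + sc*(x - xref)."""
    return max(cc + sc * (lo - xref), cc + sc * (hi - xref))

def tightened_range(lo, hi, cv, cc, sv, sc, xref):
    """Intersect the factor's natural interval [lo, hi] with the bounds
    obtained from the affine relaxations; the range can only shrink,
    but there is no guarantee it actually does (hence 'heuristic')."""
    new_lo = max(lo, affine_lower_bound(cv, sv, xref, lo, hi))
    new_hi = min(hi, affine_upper_bound(cc, sc, xref, lo, hi))
    return new_lo, new_hi

# Illustrative numbers only: factor interval [0, 10], reference point 5,
# convex relaxation value 3 with subgradient 0.5, concave value 7 with
# subgradient 0.5. The affine bounds tighten the range to [0.5, 9.5].
print(tightened_range(0.0, 10.0, cv=3.0, cc=7.0, sv=0.5, sc=0.5, xref=5.0))
```

In the heuristic, such tightened ranges would feed back into the McCormick factor-by-factor propagation, which is where the possibly tighter final relaxations come from.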