Journal of Global Optimization | 2019

Tighter McCormick relaxations through subgradient propagation


Abstract


Tight convex and concave relaxations are of high importance in deterministic global optimization. We present a method to tighten relaxations obtained by the McCormick technique. We use McCormick subgradient propagation (Mitsos et al. in SIAM J Optim 20(2):573–601, 2009) to construct simple affine under- and overestimators of each factor of the original factorable function. We then minimize and maximize these affine relaxations to obtain possibly improved range bounds for every factor, resulting in possibly tighter final McCormick relaxations. We discuss the method and its limitations, in particular the lack of a guarantee of improvement. Subsequently, we provide numerical results for benchmark cases from the MINLPLib2 library and for case studies presented in previous works where the McCormick technique appears to be advantageous, and we discuss computational efficiency. We see that the presented algorithm provides a significant improvement in tightness and a decrease in computational time, especially in the case studies using the reduced-space formulation presented by Bongartz and Mitsos (J Glob Optim 69:761–796, 2017).
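The bound-tightening step described in the abstract, minimizing an affine underestimator and maximizing an affine overestimator of a factor over the current variable box, can be carried out in closed form because a linear function attains its extrema at a corner of the box. The following Python sketch is only an illustration of that idea and not the authors' implementation; the function names, the NumPy dependency, and the calling convention are assumptions, and it presumes the McCormick convex/concave relaxation values and subgradients of a factor at a reference point are already available from subgradient propagation.

```python
import numpy as np


def affine_bound_over_box(ref_point, value, subgradient, x_lower, x_upper, kind):
    """Extreme value of the affine estimator a(x) = value + subgradient . (x - ref_point)
    over the box [x_lower, x_upper].

    kind == "under": a(x) underestimates the factor on the box, so the minimum
                     of a over the box is a valid lower bound on the factor's range.
    kind == "over":  a(x) overestimates the factor, so the maximum of a over
                     the box is a valid upper bound.
    """
    s = np.asarray(subgradient, dtype=float)
    xl = np.asarray(x_lower, dtype=float)
    xu = np.asarray(x_upper, dtype=float)
    xr = np.asarray(ref_point, dtype=float)
    if kind == "under":
        # A linear function is minimized at a box corner: take the lower
        # bound where the slope is nonnegative, the upper bound otherwise.
        corner = np.where(s >= 0.0, xl, xu)
    else:
        # Maximized at the opposite corner.
        corner = np.where(s >= 0.0, xu, xl)
    return value + float(s @ (corner - xr))


def tighten_factor_range(lb, ub, cv, cv_subgrad, cc, cc_subgrad,
                         ref_point, x_lower, x_upper):
    """Intersect a factor's known interval range [lb, ub] with the bounds
    implied by affine under-/overestimators built from the McCormick
    relaxation values (cv, cc) and subgradients at ref_point."""
    new_lb = max(lb, affine_bound_over_box(ref_point, cv, cv_subgrad,
                                           x_lower, x_upper, "under"))
    new_ub = min(ub, affine_bound_over_box(ref_point, cc, cc_subgrad,
                                           x_lower, x_upper, "over"))
    return new_lb, new_ub


# Hypothetical example: a factor with natural interval bounds [-4, 4] on the
# box [0, 1]^2, relaxed at the reference point (0.5, 0.5).
print(tighten_factor_range(-4.0, 4.0,
                           cv=-1.0, cv_subgrad=[1.0, 1.0],
                           cc=1.0, cc_subgrad=[-1.0, 1.0],
                           ref_point=[0.5, 0.5],
                           x_lower=[0.0, 0.0], x_upper=[1.0, 1.0]))
# -> (-2.0, 2.0): both range bounds of the factor are tightened.
```

In the paper's setting, such tightened factor ranges would then feed back into the McCormick composition rules, which is where the possible improvement of the final relaxations comes from.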

Pages 1–29
DOI 10.1007/s10898-019-00791-0
Language English
Journal Journal of Global Optimization
