Appl. Math. Comput. | 2019

A reduced-space line-search method for unconstrained optimization via random descent directions

 
 
 
 

Abstract

In this paper, we propose an iterative method based on reduced-space approximations for unconstrained optimization problems. The method works as follows: at each iteration, samples are taken about the current solution by using, for instance, a Normal distribution; for all samples, gradients are computed (or approximated) in order to build reduced spaces onto which descent directions of the cost function are estimated. Intermediate solutions are then updated along such directions. The overall process is repeated until some stopping criterion is satisfied. Convergence of the proposed method is theoretically proven under classic line-search assumptions. Experimental tests are performed on well-known benchmark optimization problems and on a non-linear data assimilation problem. The results reveal that, as the number of sample points increases, gradient norms go to zero faster; moreover, in the data assimilation context, error norms are decreased by several orders of magnitude with respect to prior errors when the assimilation step is performed by means of the proposed formulation.
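The procedure outlined in the abstract can be sketched as follows. This is a minimal illustrative implementation based only on the abstract's description; the finite-difference gradients, the least-squares projection onto the sampled-gradient subspace, the Armijo backtracking rule, and all parameter names are my assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a reduced-space line-search method with random
# sampling, as described in the abstract. Details (projection step, Armijo
# constants, fallback to steepest descent) are illustrative assumptions.
import numpy as np

def numerical_grad(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def reduced_space_descent(f, x0, n_samples=10, sigma=0.1,
                          max_iter=200, tol=1e-6, rng=None):
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = numerical_grad(f, x)
        if np.linalg.norm(g) < tol:          # stopping criterion
            break
        # Sample points about the current solution (Normal distribution)
        # and collect their gradients as columns of G; span(G) is the
        # reduced space in which the descent direction is sought.
        samples = x + sigma * rng.standard_normal((n_samples, x.size))
        G = np.stack([numerical_grad(f, s) for s in samples], axis=1)
        # Estimate a descent direction in span(G) by least-squares
        # approximation of the steepest-descent direction -g.
        alpha, *_ = np.linalg.lstsq(G, -g, rcond=None)
        d = G @ alpha
        if d @ g >= 0.0:                     # safeguard: ensure descent
            d = -g
        # Backtracking (Armijo) line search along d.
        t, c, rho = 1.0, 1e-4, 0.5
        fx = f(x)
        while f(x + t * d) > fx + c * t * (g @ d):
            t *= rho
            if t < 1e-12:
                break
        x = x + t * d
    return x
```

For example, minimizing the quadratic f(x) = ||x - 1||^2 from the origin drives the iterate to the minimizer at the all-ones vector once the sampled gradients span the full space.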

Volume 341
Pages 15-30
DOI 10.1016/j.amc.2018.08.020
Language English
Journal Appl. Math. Comput.
