A Sequential Descent Method for Global Optimization
Mohamed Tifroute, Anouar Lahmdani and Hassane Bouzahir
Higher School of Technology - Guelmim, Ibn Zohr University, Morocco
FSA - Ait Melloul, Ibn Zohr University, Morocco
ENSA, Ibn Zohr University, Morocco
E-mails: [email protected], [email protected]

Abstract

In this paper, a sequential search method for finding the global minimum of an objective function is presented. A descent gradient search is repeated until the global minimum is obtained. The global minimum is located by a process of finding progressively better local minima. We determine the set of points of intersection between the curve of the function and the horizontal plane which contains the previously found local minima. Then, a point in this set with the greatest descent slope is chosen as the initial point for a new descent gradient search. The method has the descent property and the convergence is monotonic. To demonstrate the effectiveness of the proposed sequential descent method, several non-convex multidimensional optimization problems are solved. Numerical examples show that the global minimum can be found by the proposed sequential descent method.
Key words:
Descent method, Global minimum, Sequential method.
Introduction

Multi-dimensional non-convex continuous optimization problems are important in many practical applications. Many approaches which are supported by relevant convergence analysis find local minima only (Rubinov [1]). However, many local minima are useless in practice, as their corresponding cost values are much inferior to the global minimum cost value. Several stochastic optimization methods have been proposed to solve for the global minimum. Global optimization is an NP-complete problem, and heuristic approaches like Genetic Algorithms (GAs) and Simulated Annealing (SA) have been used historically to find near-optimal solutions [1, 2]. The Particle Swarm Optimization (PSO) algorithm proposed by Kennedy and Eberhart in 1995 [3, 5] is a metaheuristic algorithm based on the concept of swarm intelligence, capable of solving complex mathematical problems arising in engineering. It has been shown that the PSO algorithm is easy to implement and that it converges faster than traditional techniques like GAs for a wide variety of benchmark optimization problems [6, 7]. Heuristic methods are, however, very expensive to apply. Therefore, methods which hybridize different types of algorithms are becoming more popular. One approach is to use gradient-type procedures coupled with certain auxiliary functions to move successively from one local minimum to another better one. This includes the tunnelling method; for more details, see Levy and Montalvo [10], Yao [16] and Liu and Teo [12]. Yiu et al. [14] propose a hybrid algorithm in which SA and the Descent Method (DM) are used: SA is used in the global search phase, and the DM is used in the local search phase. In [15], metaheuristic methods exploiting a chaotic system based on the DM have been presented.

In general, an efficient global optimization algorithm should have the capacity to overcome the problem of local minima while assuring a fast rate of convergence to stationary points.
For continuous decision variables, the stochastic optimization approach gives a good methodology to escape from stationary points, but it is too computationally intensive to be practicable. One reason is that such a method becomes very slow when it approaches or descends to stationary points. By contrast, an analytic approach based on gradient information is much more efficient in finding a stationary point. In this paper, a sequential technique is proposed, consisting in repeating the classical descent gradient method until the global minimum is obtained. The global minimum is located by a process of finding progressively better local minima. We determine the set of points of intersection between the curve of the function and the horizontal plane which contains the previously found local minima. Accordingly, we propose to repeat this process and choose a descent point from the intersection between the horizontal plane containing the previously converged local minima and the curve of the function. The probability of finding a better descent point is much larger than that of directly finding a better local minimum; thus, the approach is much more efficient computationally. The advantage of the proposed sequential descent method is that the convergence is monotonic. The decrease in the objective function after executing each descent gradient search might be very small, but it is sufficient to detour around previously converged local solutions. To demonstrate the effectiveness of the proposed sequential method, several multidimensional non-convex optimization problems are solved. For each example, the proposed sequential descent method locates the corresponding global solution.
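For a one-dimensional objective, the idea above can be sketched as a loop that alternates a plain gradient descent with a search for the crossing points between the curve of f and the horizontal line through the last local minimum, restarting from the crossing with the steepest slope. This is a minimal illustrative sketch, not the paper's implementation: the function names, the grid-plus-bisection crossing search, the tolerances, and the fallback to the next-steepest crossing (used when the steepest one merely descends back to the incumbent) are all our own assumptions.

```python
import numpy as np

def local_descent(f, df, x0, lr=0.01, tol=1e-8, max_iter=20000):
    """Plain gradient descent to the nearest local minimum (1-D)."""
    x = float(x0)
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:
            break
        x -= lr * g
    return x

def level_crossings(f, level, lo, hi, n=4001):
    """Crossings of the curve y = f(x) with the horizontal line y = level,
    found by a sign-change scan on a grid, refined by bisection."""
    xs = np.linspace(lo, hi, n)
    pts = []
    for i in range(n - 1):
        a, b = xs[i], xs[i + 1]
        fa = f(a) - level
        if fa * (f(b) - level) < 0:
            for _ in range(60):
                m = 0.5 * (a + b)
                fm = f(m) - level
                if fa * fm <= 0:
                    b = m
                else:
                    a, fa = m, fm
            pts.append(0.5 * (a + b))
    return pts

def sequential_descent(f, df, x0, lo, hi, max_rounds=50):
    """Repeat local descent, restarting from the level-set crossing with the
    greatest descent slope, until no crossing improves the incumbent."""
    x_best = local_descent(f, df, x0)
    for _ in range(max_rounds):
        crossings = level_crossings(f, f(x_best), lo, hi)
        candidates = [p for p in crossings if abs(df(p)) > 1e-6]
        # try the steepest crossing first; fall back to the next one if the
        # descent from it only returns to the current minimum
        improved = False
        for start in sorted(candidates, key=lambda p: -abs(df(p))):
            x_new = local_descent(f, df, start)
            if f(x_new) < f(x_best) - 1e-12:
                x_best, improved = x_new, True
                break
        if not improved:
            return x_best
    return x_best
```

For instance, on f(x) = 0.05(x − 2)² − cos(x), started in the basin of the shallow local minimum near x ≈ 5.9, the loop escapes through a crossing of the level line on the wall of the deeper basin and reaches the global minimum near x ≈ 0.18.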
Deterministic optimization algorithms are generally based on the gradient of the objective function with respect to the design variables. The deterministic method, when applied to non-linear non-convex minimization problems, usually requires implementing an iterative method which, optimistically, converges to a local minimum of the objective function after a certain number of iterations. The iterative method can be written as follows:

x^{k+1} = x^k + λ_k d^k,    (1)

where x ∈ X ⊂ R^p is the vector of design variables, λ_k is the search step size, d^k is the direction of descent, k is the iteration number and X is the search space. An iteration step is acceptable if f(x^{k+1}) < f(x^k). The direction of descent d will generate an acceptable step if and only if there exists a positive definite matrix M such that d = −M∇f, where ∇f is the gradient of f. This requirement results in directions of descent that form an angle greater than 90° with the gradient direction. A minimization method in which the directions are obtained as above is called an acceptable gradient method.

Steepest Descent Method
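As a concrete illustration of the acceptable gradient iteration (1), the following sketch combines the steepest-descent direction with an Armijo backtracking line search, both recalled in this section. It is a minimal sketch under our own assumptions: the function names, the unit initial step, and the shrink factor 0.5 are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def steepest_descent(f, grad, x0, c=1e-4, shrink=0.5, tol=1e-8, max_iter=5000):
    """Acceptable gradient method with d^k = -grad f(x^k) and the step size
    lambda_k chosen by Armijo backtracking (sufficient-decrease rule)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break                      # (near-)stationary point reached
        d = -g                         # steepest-descent direction
        lam = 1.0
        # shrink lambda until f(x + lam d) <= f(x) + c lam grad f(x)^T d
        while f(x + lam * d) > f(x) + c * lam * (g @ d):
            lam *= shrink
        x = x + lam * d
    return x
```

On a smooth objective the backtracking loop always terminates, since the sufficient-decrease condition holds for all small enough steps along a descent direction.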
For the steepest descent method, the direction of descent is given by

d^k = −∇f(x^k).    (2)

Line Search
For a descent direction d^k, the line search procedure determines the solution of the following one-dimensional optimization problem:

λ_k = argmin_λ f(x^k + λ d^k),    (3)

and takes λ_k as the step size. In fact, there are several approaches that define conditions for a step size obtained as an approximate solution of (3). One of them is the Armijo rule,

f(x^k + λ_k d^k) ≤ f(x^k) + c λ_k ∇f(x^k)^T d^k,    (4)

where c is a small positive constant. While the line search procedure works well with first-order algorithms (i.e., algorithms using function and gradient values only), the convergence rate is at most linear.

Definition 2.1.
Let a vector v ∈ R^n be given by v = (v_1, v_2, ..., v_n). Then v is said to be strictly negative if all components v_1, v_2, ..., v_n are strictly negative.

Algorithm 1
SGD (Sequential Gradient Descent)

Step 1: Generate x^{(0)} randomly and evaluate f(x^{(0)}). Set k = 0.

Step 2: Solve for a local minimum of f(x) using a gradient-based minimization method with x^{(k)} as the initial guess, giving x^{(k)*} such that |f(x^{(k)*}) − f(x^{(k)})| ≤ ε_k, where ε_k is a positive parameter.

Step 3: Define (P_k), the horizontal plane containing f(x^{(k)*}), and find the set I_k = (P_k) ∩ (C_f), where (C_f) is the curve of f.

Step 4: If ∇f(g_k) = 0 for all g_k ∈ I_k, return x^{(k)*}. Otherwise, put I_k^− = {g_k ∈ I_k : ∇f(g_k) is strictly negative}, choose x^{(k+1)} = argmax_{g_k ∈ I_k^−} ‖∇f(g_k)‖, let k := k + 1 and go to Step 2.

Figure 1: First local optimal solution x^{1,*}

Example 1:
Consider the following problem: minimize

f(x_1, x_2) = x_1 sin(x_1) − x_2 cos(x_2),

subject to box constraints on x_1 and x_2.

Step 1: Starting from the initial point, the first local optimal solution x^{1,*} is obtained (see Figures 1 and 2).

Step 2: The best new start point is (10, …); the new local optimal solution is x^{2,*} = (11.…, …) (see Figure 3).

The intersection between the curve of the function and the horizontal plane which contains f(x^{2,*}) is reduced to a single point (see Figure 4), and as the gradient of the function at this point is close to zero, we deduce that the solution x^{2,*} is the global minimum of the function.

Figure 2: Intersection between the curve of the function and the horizontal plane containing f(x^{1,*})

Figure 3: Second local optimal solution x^{2,*}

Example 2:
The two-dimensional Shubert function (Shubert, 1972):

f(x_1, x_2) = (Σ_{i=1}^{5} i cos[(i + 1)x_1 + i]) (Σ_{i=1}^{5} i cos[(i + 1)x_2 + i]) + (x_1 + 1.42513)² + (x_2 + 0.80032)²,  −10 ≤ x_i ≤ 10, i = 1, 2.

Step 1: With initial start point (7, …), the first local optimal solution x^{1,*} is obtained (see Figures 5 and 6).

Step 2: The best new start point leads to the new local optimal solution x^{2,*} (see Figure 7).

The intersection between the curve of the function and the horizontal plane which contains f(x^{2,*}) is reduced to a finite number of points, and as the gradient of the function at these points is close to zero, we deduce that the solution x^{2,*} is the global minimum of the function (see Figure 8).

In order to verify the quality of our algorithm, we compared our result to that of Yiu et al. [14], which obtains the global minimum after 19 local searches, whereas our algorithm obtains the same result in only 2 sequential local searches.

Figure 5: First local optimal solution x^{1,*}

Figure 6: Intersection between the curve of the function and the horizontal plane containing f(x^{1,*})

Figure 7: Second local optimal solution x^{2,*}

Figure 8: Intersection between the curve of the function and the horizontal plane containing f(x^{2,*})

Conclusion

In this paper, we have proposed a new sequential search method for finding the global minimum of an objective function. Numerical results show that the proposed sequential method is promising for solving multidimensional non-convex continuous optimization problems.