Mehiddin Al-Baali
Sultan Qaboos University
Publications
Featured research published by Mehiddin Al-Baali.
Journal of Optimization Theory and Applications | 1986
Mehiddin Al-Baali; Roger Fletcher
The line search subproblem in unconstrained optimization is concerned with finding an acceptable steplength which satisfies certain standard conditions. Prototype algorithms are described which guarantee finding such a step in a finite number of operations. This is achieved by first bracketing an interval of acceptable values and then reducing this bracket uniformly by the repeated use of sectioning in a systematic way. Some new theorems about convergence and termination of the line search are presented. Use of these algorithms to solve the line search subproblem in methods for nonlinear least squares is considered. We show that substantial gains in efficiency can be made by making polynomial interpolations to the individual residual functions rather than the overall objective function. We also study modified schemes in which the Jacobian matrix is evaluated as infrequently as possible, and show that further worthwhile savings can be made. Numerical results are presented.
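A minimal sketch of the bracketing/sectioning idea, assuming the weak Wolfe conditions as the acceptance test; the constants, the doubling expansion, and the bisection-based sectioning are illustrative simplifications (the paper favours polynomial interpolation for the sectioning step):

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, alpha=1.0, max_iter=60):
    """Bracketing/sectioning search for a weak-Wolfe steplength.

    Phase 1 expands until an acceptable step is bracketed; phase 2
    sections the bracket (here by bisection) until both conditions hold.
    """
    f0 = f(x)
    g0 = grad(x).dot(d)
    assert g0 < 0.0, "d must be a descent direction"
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        if f(x + alpha * d) > f0 + c1 * alpha * g0:
            hi = alpha                       # step too long: shrink from above
        elif grad(x + alpha * d).dot(d) < c2 * g0:
            lo = alpha                       # step too short: grow from below
        else:
            return alpha                     # both Wolfe conditions satisfied
        alpha = 2.0 * alpha if np.isinf(hi) else 0.5 * (lo + hi)
    return alpha
```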
Journal of Optimization Theory and Applications | 1998
Mehiddin Al-Baali
Self-scaling quasi-Newton methods for unconstrained optimization update the Hessian approximation by a formula that depends on two parameters (say, τ and θ) such that τ = 1, θ = 0, and θ = 1 yield the unscaled Broyden family, the BFGS update, and the DFP update, respectively. In previous work, conditions were obtained on these parameters that imply global and superlinear convergence for self-scaling methods on convex objective functions. This paper discusses the practical performance of several new algorithms designed to satisfy these conditions.
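A sketch of a two-parameter self-scaling update of this kind, using the common parametrization in which the scaling τ multiplies everything except the rank-one term in y; the exact placement of τ and θ here is an assumption, not necessarily the paper's:

```python
import numpy as np

def self_scaling_broyden_update(B, s, y, tau=1.0, theta=0.0):
    """Two-parameter self-scaling Broyden-family update of B ≈ ∇²f.

    tau = 1 recovers the unscaled family; within it, theta = 0 gives
    BFGS and theta = 1 gives DFP (under the parametrization assumed here).
    """
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    assert ys > 0 and sBs > 0, "curvature conditions must hold"
    w = y / ys - Bs / sBs
    B_scaled = B - np.outer(Bs, Bs) / sBs + theta * sBs * np.outer(w, w)
    return tau * B_scaled + np.outer(y, y) / ys
```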
Numerical Algorithms | 1999
Mehiddin Al-Baali
This paper considers simple modifications of the limited memory BFGS (L-BFGS) method for large-scale optimization. It outlines algorithms based on alternative ways of re-using a given set of stored difference vectors. The proposed algorithms resemble the L-BFGS method, except that the initial Hessian approximation is defined implicitly, like the L-BFGS Hessian, in terms of some stored vectors rather than the usual choice of a multiple of the unit matrix. Numerical experiments show that the new algorithms yield desirable improvements over the L-BFGS method.
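For context, the L-BFGS direction is computed by the standard two-loop recursion; the sketch below leaves the initial Hessian multiplication as a callable, which is where the paper's implicit, stored-vector-based choices would plug in (the interface is illustrative):

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list, apply_H0):
    """Two-loop recursion: returns -H g for the L-BFGS inverse Hessian.

    apply_H0 is a callable v -> H0 @ v.  The usual choice is a multiple
    of the identity; the paper's variants instead define H0 implicitly
    from re-used stored vector pairs.
    """
    q = g.copy()
    alphas = []
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    r = apply_H0(q)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r += (a - b) * s
    return -r
```

With apply_H0 = lambda v: ((s @ y) / (y @ y)) * v built from the most recent pair, this reduces to the usual scaled-identity L-BFGS.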
Journal of Optimization Theory and Applications | 1993
Mehiddin Al-Baali
In this paper, we propose new members of the Broyden family of quasi-Newton methods. We develop, on the basis of well-known least-change results for the BFGS and DFP updates, a measure for the Broyden family which seeks to take into account the change in both the Hessian approximation and its inverse. The proposal is then to choose the formula which gives the least value of this measure in terms of the two parameters available, and hence to produce an update which is optimal in the sense of the given measure. Several approaches to the problem of minimizing the measure are considered, from which new updates are obtained. In particular, one approach yields a new variational result for the Davidon optimally conditioned method and another yields a reasonable modification to this method. The paper is also concerned with the possibility of estimating, in a certain sense, the size of the eigenvalues of the Hessian approximation on the basis of two available scalars. This allows one to derive further modifications to the above-mentioned methods. Comparisons with the BFGS and Davidon methods are made on a set of standard test problems and show promising results for certain new methods.
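The paper's variational measure is not reproduced here; as a rough illustration of a quantity that is symmetric in an approximation and its inverse, one can evaluate the Byrd–Nocedal function ψ(A) = tr(A) − ln det(A) on the update ratio and on its inverse (an illustrative assumption only, not the paper's measure):

```python
import numpy as np

def psi(A):
    """Byrd-Nocedal function trace(A) - ln det(A), minimal at A = I."""
    sign, logdet = np.linalg.slogdet(A)
    assert sign > 0, "A must have positive determinant"
    return np.trace(A) - logdet

def symmetric_change_measure(B_new, B_old):
    """Penalize change in B and, simultaneously, in its inverse.

    The log-det terms cancel, leaving the sum of lam + 1/lam over the
    eigenvalues of B_old^{-1} B_new, minimal exactly when B_new = B_old.
    """
    M = np.linalg.solve(B_old, B_new)       # update ratio B_old^{-1} B_new
    return psi(M) + psi(np.linalg.inv(M))
```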
Computational Optimization and Applications | 1998
Mehiddin Al-Baali
This paper studies the convergence properties of algorithms belonging to the class of self-scaling (SS) quasi-Newton methods for unconstrained optimization. This class depends on two parameters, say θk and τk, for which the choice τk = 1 gives the Broyden family of unscaled methods, with θk = 1 corresponding to the well-known DFP method. We propose simple conditions on these parameters that give rise to global convergence with inexact line searches for convex objective functions. Q-superlinear convergence is achieved if further restrictions on the scaling parameter are introduced. These convergence results extend the known results for the unscaled methods. Because the scaling parameter is heavily restricted, we consider a subclass of SS methods which satisfies the required conditions. Although convergence for the unscaled methods with θk ≥ 1 is still an open question, we show that global and superlinear convergence is possible for SS methods and present, in particular, a new SS-DFP method.
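One concrete rule satisfying τk ≤ 1, of the kind discussed in the self-scaling literature but assumed here purely for illustration, shrinks the Hessian approximation only when the measured curvature is smaller than the approximation predicts:

```python
def scaling_parameter(s, y, B):
    """Return tau = min(1, y's / s'Bs) for numpy arrays s, y and matrix B.

    tau < 1 scales B down when the observed curvature y's falls short of
    the predicted curvature s'Bs; tau = 1 leaves the update unscaled.
    An illustrative choice, not necessarily the paper's exact rule.
    """
    return min(1.0, (y @ s) / (s @ (B @ s)))
```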
Optimization Methods & Software | 2014
Mehiddin Al-Baali; Emilio Spedicato; Francesca Maggioni
Quasi-Newton methods were introduced by Charles Broyden [A class of methods for solving nonlinear simultaneous equations, Math. Comp. 19 (1965), pp. 577–593] as an alternative to Newton's method for solving nonlinear algebraic systems; in 1970 Broyden [The convergence of a class of double rank minimization algorithms, IMA J. Appl. Math. 6, parts I and II (1970), pp. 76–90, 222–231] extended them to nonlinear unconstrained optimization as a generalization of the DFP method, which was proposed by Davidon [Variable metric method for minimization (revised), Technical Report ANL-5990, Argonne National Laboratory, USA, 1959] and investigated by Fletcher and Powell [A rapidly convergent descent method for minimization, Comput. J. 6 (1963), pp. 163–168]. Such methods (in particular, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) method) are very useful in practice and have been the subject of substantial theoretical analysis, although some problems remain open. In this paper we describe properties of these methods as derived by Broyden and then further developed by other researchers, especially with reference to improving their computational performance.
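For reference, the two classical updates around which this history revolves, with s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k:

```latex
% DFP (Davidon--Fletcher--Powell), stated for the inverse-Hessian approximation H_k:
H_{k+1} = H_k - \frac{H_k y_k y_k^T H_k}{y_k^T H_k y_k} + \frac{s_k s_k^T}{y_k^T s_k}

% BFGS (Broyden--Fletcher--Goldfarb--Shanno), stated for the Hessian approximation B_k:
B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}
```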
Computational Optimization and Applications | 2015
Mehiddin Al-Baali; Yasushi Narushima; Hiroshi Yabe
Conjugate gradient methods, which usually generate descent search directions, are useful for large-scale optimization. Narushima et al. (SIAM J Optim 21:212–230, 2011) have proposed a three-term conjugate gradient method which satisfies a sufficient descent condition. We extend this method to a two-parameter family of three-term conjugate gradient methods in which the parameters can be used to control the magnitude of the directional derivative. We show that these methods converge globally and work well for suitable choices of the parameters. Numerical results are also presented.
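A sketch of a three-term direction of this flavour: the construction below gives gᵀd = −‖g‖² exactly, so sufficient descent holds by design; the Hestenes–Stiefel β and the choice p = y are illustrative assumptions, not the paper's parametrized family:

```python
import numpy as np

def three_term_direction(g, d_prev, y_prev, eps=1e-12):
    """Three-term CG direction with built-in sufficient descent.

    Uses d = -g + beta*(d_prev - (g'd_prev / g'p) p), which yields
    g'd = -||g||^2 exactly whenever g'p != 0; otherwise restarts with
    steepest descent.
    """
    denom = d_prev @ y_prev
    if abs(denom) < eps:
        return -g                      # restart with steepest descent
    beta = (g @ y_prev) / denom        # Hestenes-Stiefel formula
    p = y_prev                         # illustrative choice of third term
    gp = g @ p
    if abs(gp) < eps:
        return -g
    return -g + beta * (d_prev - ((g @ d_prev) / gp) * p)
```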
Computational Optimization and Applications | 2012
Mehiddin Al-Baali; Humaid Khalfan
Techniques for obtaining safely positive definite Hessian approximations with self-scaling and modified quasi-Newton updates are combined to obtain ‘better’ curvature approximations in line search methods for unconstrained optimization. It is shown that this class of methods, like the BFGS method, enjoys global and superlinear convergence for convex functions. Numerical experiments with this class, using the well-known BFGS and DFP quasi-Newton updates and a modified SR1 update, are presented to illustrate some advantages of the new techniques. These experiments show that the performance of several combined methods is substantially better than that of the standard BFGS method. Similar improvements are also obtained if the simple sufficient function reduction condition on the steplength is used instead of the strong Wolfe conditions.
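A minimal sketch of the modified-update ingredient, using Powell's classical damping as a stand-in for the paper's combined techniques (σ = 0.2 is Powell's traditional value; the paper's safeguards are more elaborate):

```python
import numpy as np

def damped_y(s, y, B, sigma=0.2):
    """Powell-style damping: keep the curvature s'y safely positive.

    If s'y >= sigma * s'Bs the pair is kept; otherwise y is pulled toward
    Bs just enough that s'y_hat = sigma * s'Bs, so any quasi-Newton
    update built from (s, y_hat) preserves positive definiteness.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= sigma * sBs:
        return y
    phi = (1.0 - sigma) * sBs / (sBs - sy)
    return phi * y + (1.0 - phi) * Bs
```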
Archive | 2003
Mehiddin Al-Baali
Low-storage quasi-Newton algorithms for large-scale nonlinear least-squares problems are considered, with “better” modified Hessian approximations defined implicitly in terms of a set of vector pairs. The modification technique replaces one vector of each pair, namely the difference in the gradients of the objective function, by a superior choice in various ways. These vectors introduce information about the true Hessian of the objective function by exploiting information about the Jacobian matrix of the residual vector of the problem. The proposed technique is also based on a new safeguarded scheme for enforcing the positive definiteness of Hessian approximations. It is shown, in particular, that this technique enhances the quality of the limited memory (L-)BFGS Hessian, retains the simple formulation of the L-BFGS algorithm and improves its performance substantially.
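A sketch of the vector-replacement idea for f(x) = ½‖r(x)‖², assuming the Gauss–Newton choice ŷ = JᵀJ s as the Jacobian-based replacement; the paper develops several such choices together with a safeguard, so this is illustrative only:

```python
import numpy as np

def least_squares_pair(J_new, s, y, sigma=1e-8):
    """Build a modified (s, y_hat) pair for L-BFGS on f = 0.5*||r(x)||^2.

    y_hat = J'J s injects Gauss-Newton curvature from the residual
    Jacobian J at the new point (one natural choice, assumed here for
    illustration).  Falls back to the ordinary gradient difference y
    when the modified curvature is not safely positive.
    """
    y_hat = J_new.T @ (J_new @ s)
    if s @ y_hat <= sigma * (s @ s):     # safeguard positive definiteness
        return s, y
    return s, y_hat
```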
Optimization Methods & Software | 1994
Mehiddin Al-Baali
In this paper, we extend the switching BFGS/DFP algorithm of Fletcher and the switching BFGS/SR1 algorithm of Al-Baali to a class of switching-type algorithms within the Broyden family of quasi-Newton methods for unconstrained optimization. We propose some members of this class which switch among the BFGS, the SR1 and other desirable methods from the preconvex class. The switch is based, in a certain sense, on estimating the size of the eigenvalues of the Hessian approximation. The results on a set of standard test problems show that several switching methods improve over the BFGS method and perform almost as well as certain idealized methods.
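A hypothetical sketch of a switching rule in this spirit; the scalar h and the SR1 safeguard below are illustrative stand-ins for the paper's eigenvalue-based estimates:

```python
import numpy as np

def choose_update(B, s, y, r=1e-8):
    """Pick an update for this iteration (a hypothetical switching rule).

    h = y's / s'Bs compares observed to predicted curvature: h < 1
    suggests the eigenvalues of B are too large.  SR1 is considered only
    when its denominator s'(y - Bs) is safely away from zero.
    """
    Bs = B @ s
    h = (y @ s) / (s @ Bs)
    sr1_denom = s @ (y - Bs)
    safe = abs(sr1_denom) >= r * np.linalg.norm(s) * np.linalg.norm(y - Bs)
    if safe and h < 1.0:
        return "SR1"     # shrink overestimated curvature, denominator safe
    return "BFGS"
```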