Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Wah June Leong is active.

Publication


Featured research published by Wah June Leong.


Computers & Mathematics With Applications | 2010

A new two-step gradient-type method for large-scale unconstrained optimization

Mahboubeh Farid; Wah June Leong; Malik Abu Hassan

In this paper, we propose some improvements to a new gradient-type method for solving large-scale unconstrained optimization problems, in which data from the two previous steps are used to revise the current approximate Hessian. The new method resembles the Barzilai and Borwein (BB) method. The innovative feature of this approach is that the Hessian is approximated by a diagonal matrix derived from a modified weak secant equation, rather than by a multiple of the identity matrix as in the BB method. Using this approach, we obtain a higher-order accuracy of the Hessian approximation compared to other existing BB-type methods. By incorporating a simple monotone strategy, the global convergence of the new method is achieved. Practical insights into the effectiveness of the proposed method are given by numerical comparison with the BB method and its variants.
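To make the mechanism concrete, below is a minimal Python sketch of a diagonal gradient-type iteration in which the Hessian approximation is a diagonal matrix updated through the weak secant condition s^T D s = s^T y, combined with a simple monotone (backtracking) safeguard. It illustrates only the general idea: the paper's two-step revision and exact modified weak secant equation are not reproduced, and the function names and tolerances are illustrative.

```python
import numpy as np

def weak_secant_diagonal_update(d, s, y):
    # Least-change update of the diagonal d so that s^T D s = s^T y
    # (an illustrative instance of a weak-secant diagonal update).
    s2 = s * s
    denom = s2 @ s2                       # sum_i s_i^4
    if denom > 1e-12:
        d = d + ((s @ y - s2 @ d) / denom) * s2
    return np.maximum(d, 1e-8)            # keep the approximation positive definite

def diagonal_gradient_method(f, grad, x0, tol=1e-6, max_iter=1000):
    x, d = x0.astype(float), np.ones_like(x0, dtype=float)   # D_0 = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        step = -g / d                     # gradient step scaled by D_k^{-1}
        t = 1.0
        while f(x + t * step) > f(x) and t > 1e-10:
            t *= 0.5                      # simple monotone safeguard
        x_new = x + t * step
        g_new = grad(x_new)
        d = weak_secant_diagonal_update(d, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x

# usage on an ill-conditioned quadratic
A = np.diag([1.0, 10.0, 100.0])
x_star = diagonal_gradient_method(lambda x: 0.5 * x @ A @ x, lambda x: A @ x,
                                  np.array([1.0, 1.0, 1.0]))
```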


Computational Optimization and Applications | 2009

A restarting approach for the symmetric rank one update for unconstrained optimization

Wah June Leong; Malik Abu Hassan

Two basic disadvantages of the symmetric rank one (SR1) update are that it may not preserve positive definiteness when starting with a positive definite approximation, and that it can be undefined. A simple remedy to these problems is to restart the update with the initial approximation, usually the identity matrix, whenever these difficulties arise. However, numerical experience shows that restarting with the identity matrix is not a good choice. Instead, we use a positive multiple of the identity matrix, whose scaling factor is the optimal solution of the following measure: maximize the determinant of the update subject to a bound of one on its largest eigenvalue. This measure is motivated by considering the volume of the symmetric difference of the two ellipsoids that arise from the current and updated quadratic models in quasi-Newton methods. A replacement in the form of a positive multiple of the identity matrix is provided for the SR1 update whenever it is not positive definite or is undefined. Our experiments indicate that with such simple initial scaling, the possibility of an undefined update or the loss of positive definiteness for the SR1 method is avoided on all iterations.
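A minimal sketch of the restart safeguard described above: the SR1 update is applied when its denominator is safe and positive definiteness is retained; otherwise the approximation is reset to a positive multiple of the identity. The scaling factor used here (gamma = y^T s / s^T s) is only an illustrative stand-in, not the determinant-maximizing factor derived in the paper.

```python
import numpy as np

def sr1_update_with_restart(B, s, y, skip_tol=1e-8):
    """SR1 update with a restart safeguard: when the update is undefined
    (tiny denominator) or loses positive definiteness, replace B with a
    positive multiple of the identity.  gamma below is an assumed,
    illustrative scaling, not the paper's optimal factor."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) > skip_tol * np.linalg.norm(r) * np.linalg.norm(s):
        B_new = B + np.outer(r, r) / denom
        # keep the SR1 update only if it stays positive definite
        if np.all(np.linalg.eigvalsh(B_new) > 0):
            return B_new
    gamma = max((y @ s) / (s @ s), skip_tol)   # restart scaling (assumed form)
    return gamma * np.eye(len(s))
```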


Journal of Computational and Applied Mathematics | 2011

Improved Hessian approximation with modified secant equations for symmetric rank-one method

Farzin Modarres; Abu Hassan Malik; Wah June Leong

The symmetric rank-one (SR1) formula is one of the competitive formulas among quasi-Newton (QN) methods. In this paper, we propose some modified SR1 updates based on modified secant equations, which use both gradient and function information. Furthermore, to avoid the loss of positive definiteness and zero denominators in the new SR1 updates, we apply a restart procedure to these updates. Three new algorithms are given that improve the Hessian approximation with modified secant equations for the SR1 method. Numerical results show that the proposed algorithms are very encouraging, and their advantage over the standard SR1 and BFGS updates is clearly observed.
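As an illustration of how function-value information can be fed into the SR1 formula, the sketch below uses a modified secant vector in the style of Wei et al.: y* = y + (theta / s^T s) s with theta = 2(f_k - f_{k+1}) + (g_k + g_{k+1})^T s. The paper's own modified secant equations and restart procedure may differ; the names and tolerances here are illustrative.

```python
import numpy as np

def modified_y(s, y, f_k, f_k1, g_k, g_k1):
    """Modified secant vector using both gradients and function values
    (Wei et al. style); theta vanishes on quadratics, so this is a
    higher-order correction.  Illustrative only."""
    theta = 2.0 * (f_k - f_k1) + (g_k + g_k1) @ s
    return y + (theta / (s @ s)) * s

def modified_sr1(B, s, y_star, skip_tol=1e-8):
    """SR1 update driven by the modified secant vector y*."""
    r = y_star - B @ s
    denom = r @ s
    if abs(denom) <= skip_tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B                      # skip (a restart could be used instead)
    return B + np.outer(r, r) / denom
```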


Computers & Mathematics With Applications | 2011

A matrix-free quasi-Newton method for solving large-scale nonlinear systems

Wah June Leong; Malik Abu Hassan; Muhammad Yusuf

One of the widely used methods for solving a nonlinear system of equations is the quasi-Newton method. The basic idea underlying this type of method is to approximate the solution of Newton's equation by approximating the Jacobian matrix via a quasi-Newton update. Applying quasi-Newton methods to large-scale problems requires, in principle, vast computational resources to form and store an approximation to the Jacobian matrix of the underlying problem. Hence, this paper proposes an approximation of the Newton step based on an update whose computational effort is similar to that of matrix-free settings. This is made possible by approximating the Jacobian by a diagonal matrix using the least-change secant updating strategy commonly employed in the development of quasi-Newton methods. Under suitable assumptions, local convergence of the proposed method is proved for nonsingular systems. Numerical experiments on popular test problems confirm the effectiveness of the approach in comparison with Newton's, chord Newton's, and Broyden's methods.
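A compact sketch of the matrix-free idea, assuming a weak-secant least-change rule for the diagonal Jacobian approximation; the paper's exact update and convergence safeguards are not reproduced, and the test system in the usage lines is illustrative.

```python
import numpy as np

def diagonal_jacobian_solver(F, x0, tol=1e-8, max_iter=200):
    """Iteration for F(x) = 0 in which the Jacobian is approximated by a
    diagonal matrix D_k, updated by a least-change rule enforcing the
    weak secant condition s^T D s = s^T y.  Only O(n) storage is needed."""
    x = x0.astype(float).copy()
    d = np.ones_like(x)                      # D_0 = I
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = -Fx / d                          # quasi-Newton step: D_k s = -F(x_k)
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        s2 = s * s
        denom = s2 @ s2
        if denom > 1e-14:
            d = d + ((s @ y - s2 @ d) / denom) * s2
        d = np.where(np.abs(d) < 1e-8, 1.0, d)   # guard against tiny entries
        x, Fx = x_new, F_new
    return x

# usage: a small nonlinear system with solution (1, 1)
F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 2 - 2.0])
root = diagonal_jacobian_solver(F, np.array([0.5, 0.5]))
```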


Journal of Computational and Applied Mathematics | 2011

New quasi-Newton methods via higher order tensor models

Fahimeh Biglari; Malik Abu Hassan; Wah June Leong

Many researchers attempt to improve the efficiency of the usual quasi-Newton (QN) methods by accelerating the performance of the algorithm without increasing the storage demand. They aim to employ more of the available information from the function values and gradients to approximate the curvature of the objective function. In this paper we derive a new QN method of this type using a fourth-order tensor model and show that it is superior to the prior modification of Wei et al. (2006) [4]. Convergence analysis gives the local convergence property of this method, and numerical results show the advantage of the modified QN method.
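For context, the sketch below is the standard BFGS Hessian update that such modifications plug into; the paper's fourth-order tensor correction to the secant vector y is not reproduced here.

```python
import numpy as np

def bfgs_update(B, s, y, curv_tol=1e-10):
    """Standard BFGS Hessian update.  The paper keeps this framework but
    replaces y with a corrected vector derived from a fourth-order tensor
    model (that correction is not shown here)."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy <= curv_tol or sBs <= curv_tol:    # skip when curvature is unusable
        return B
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
```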


Computers & Mathematics With Applications | 2011

An improved multi-step gradient-type method for large scale optimization

Mahboubeh Farid; Wah June Leong

In this paper, we propose an improved multi-step diagonal updating method for large-scale unconstrained optimization. Our approach is based on constructing a new gradient-type method by means of interpolating curves. We measure the distances required to parameterize the interpolating polynomials via a norm defined by a positive-definite matrix. By developing an implicit updating approach, we obtain an improved Hessian approximation in diagonal matrix form while avoiding the computational expense of actually calculating the improved approximation matrix. The effectiveness of our proposed method is evaluated by means of computational comparison with the BB method and its variants. We show that our method is globally convergent and requires only O(n) memory.
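A rough sketch of the multi-step ingredient: data from the two most recent steps are blended into a single secant pair (r, w), with the blending weight measured in an M-norm. The weight delta below is an illustrative assumption; the paper derives its parameterization from interpolating polynomials. The resulting pair would then drive a diagonal update such as the one sketched earlier.

```python
import numpy as np

def two_step_vectors(s_k, s_km1, y_k, y_km1, M=None):
    """Combine the two most recent step/gradient-difference pairs into a
    single secant pair (r, w), multi-step style.  delta is an assumed
    weighting based on the M-norm ratio of the steps; M defaults to the
    identity, giving the Euclidean norm."""
    if M is None:
        M = np.eye(len(s_k))
    norm_M = lambda v: np.sqrt(v @ M @ v)
    delta = norm_M(s_k) / (norm_M(s_k) + norm_M(s_km1))   # assumed weighting
    r = s_k - delta * s_km1
    w = y_k - delta * y_km1
    return r, w
```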


International Journal of Computer Mathematics | 2011

A new gradient method via least change secant update

Wah June Leong; Malik Abu Hassan

The Barzilai–Borwein (BB) gradient method is favourable over the classical steepest descent method both in theory and in real computations. This method takes a 'fixed' step size rather than following a set of line search rules to ensure convergence. Along this line, we present a new approach for the two-point approximation to the quasi-Newton equation within the BB framework, on the basis of a well-known least-change result for the Davidon–Fletcher–Powell update, and propose a new gradient method that belongs to the same class as the BB gradient method, in which the line search procedure is replaced by a fixed step size. Some preliminary numerical results suggest that improvements have been achieved.
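For reference, the classical BB iteration with a fixed (non-line-search) step size is sketched below; the paper keeps this framework but derives a different two-point step from a DFP least-change result, which is not reproduced.

```python
import numpy as np

def bb_gradient(grad, x0, tol=1e-6, max_iter=1000):
    """Classical Barzilai-Borwein gradient iteration with the 'fixed'
    step alpha_k = s^T s / s^T y (no line search)."""
    x = x0.copy()
    g = grad(x)
    alpha = 1.0                              # initial step
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0
        x, g = x_new, g_new
    return x

# usage on a convex quadratic
A = np.diag([1.0, 5.0, 25.0])
x_star = bb_gradient(lambda x: A @ x, np.array([1.0, 1.0, 1.0]))
```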


Abstract and Applied Analysis | 2014

The Hybrid BFGS-CG Method in Solving Unconstrained Optimization Problems

Mohd Asrul Hery Ibrahim; Mustafa Mamat; Wah June Leong

For large-scale problems, the quasi-Newton method is known as the most efficient method for unconstrained optimization. Hence, a new hybrid method, known as the BFGS-CG method, has been created based on these properties, combining the search directions of conjugate gradient methods and quasi-Newton methods. In comparison to the standard BFGS method and conjugate gradient methods, the BFGS-CG method shows significant improvement in the total number of iterations and CPU time required to solve large-scale unconstrained optimization problems. We also prove that the hybrid method is globally convergent.
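A hedged sketch of one way to hybridize the two directions: a quasi-Newton direction plus a CG-style beta-weighted previous direction, with a descent fallback. This particular combination and the Fletcher–Reeves beta are assumptions for illustration; the paper's exact BFGS-CG direction may differ.

```python
import numpy as np

def hybrid_bfgs_cg_direction(H, g, d_prev, g_prev):
    """Mix the quasi-Newton direction -H g (H = inverse-Hessian
    approximation) with a conjugate-gradient style term beta * d_prev
    (Fletcher-Reeves beta shown).  Illustrative combination only."""
    gg_prev = g_prev @ g_prev
    beta = (g @ g) / gg_prev if gg_prev > 1e-16 else 0.0
    d = -H @ g + beta * d_prev
    if d @ g >= 0:            # fall back to the pure quasi-Newton direction
        d = -H @ g            # when the hybrid is not a descent direction
    return d

# usage with a 2-variable example (H = identity as the inverse-Hessian approximation)
d = hybrid_bfgs_cg_direction(np.eye(2), np.array([1.0, 2.0]),
                             np.array([-1.0, -1.0]), np.array([2.0, 2.0]))
```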


International Journal of Computer Mathematics | 2011

Structured symmetric rank-one method for unconstrained optimization

Farzin Modarres; Malik Abu Hassan; Wah June Leong

In this paper, we investigate a symmetric rank-one (SR1) quasi-Newton (QN) formula for the case in which the Hessian of the objective function has some special structure. Instead of approximating the whole Hessian via the SR1 formula, we consider an approach which approximates only the part of the Hessian matrix that is not easily acquired. Although the SR1 update possesses desirable features, it is unstable in the sense that it may not retain positive definiteness and may become undefined. Therefore, we describe some safeguards to overcome these difficulties. Since the structured SR1 method provides a more accurate Hessian approximation, the proposed method significantly reduces the computational effort needed to solve a problem. The results of a series of experiments on a typical set of standard unconstrained optimization problems are reported, showing that the structured SR1 method exhibits a clear improvement in numerical performance over some existing QN algorithms.
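A minimal sketch of the structured idea, assuming the Hessian splits as C(x) + A with C computable exactly: only A is updated by SR1, using the residual curvature y - C s. The paper's safeguards against undefined or indefinite updates are not reproduced here.

```python
import numpy as np

def structured_sr1_update(A, s, y, C_next, skip_tol=1e-8):
    """Structured SR1 step: the Hessian is modelled as C(x) + A, where
    C is evaluated exactly and only the unavailable part A is updated.
    y_hash is the curvature left for A to explain after removing the
    contribution of C."""
    y_hash = y - C_next @ s
    r = y_hash - A @ s
    denom = r @ s
    if abs(denom) > skip_tol * np.linalg.norm(r) * np.linalg.norm(s):
        A = A + np.outer(r, r) / denom
    return A, C_next + A      # (updated A, full Hessian approximation)
```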


Abstract and Applied Analysis | 2013

Scaled Diagonal Gradient-Type Method with Extra Update for Large-Scale Unconstrained Optimization

Mahboubeh Farid; Wah June Leong; Najmeh Malekmohammadi; Mustafa Mamat

We present a new gradient method that uses scaling and extra updating within diagonal updating for solving unconstrained optimization problems. The new method is in the frame of the Barzilai and Borwein (BB) method, except that the Hessian matrix is approximated by a diagonal matrix rather than by a multiple of the identity matrix as in the BB method. The main idea is to design a new diagonal updating scheme that incorporates scaling to instantly reduce large eigenvalues of the diagonal approximation and otherwise employs extra updates to increase small eigenvalues. These approaches give us rapid control over the eigenvalues of the updating matrix and thus improve stepwise convergence. We show that our method is globally convergent. The effectiveness of the method is evaluated by means of numerical comparison with the BB method and its variants.
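The sketch below illustrates one plausible realization of the scaling-plus-extra-update scheme, reusing the weak-secant diagonal update from the earlier sketches: an Oren–Luenberger-type factor tau = s^T y / s^T D s shrinks the diagonal when it signals overly large eigenvalues, and otherwise an extra update with a retained previous curvature pair is applied. The trigger and the scaling are assumptions, not the paper's exact rules.

```python
import numpy as np

def scaled_diagonal_update(d, s, y, s_prev=None, y_prev=None):
    """Diagonal BB-type update with scaling and an extra update.
    tau < 1 signals overly large eigenvalues, so D is shrunk before
    updating; otherwise an extra update with the previous pair is
    applied to lift small eigenvalues.  Illustrative sketch only."""
    def weak_secant(d, s, y):
        s2 = s * s
        denom = s2 @ s2
        if denom > 1e-12:
            d = d + ((s @ y - s2 @ d) / denom) * s2
        return np.maximum(d, 1e-8)

    sDs = (s * s) @ d
    tau = (s @ y) / sDs if sDs > 1e-16 else 1.0
    if 0.0 < tau < 1.0:
        d = tau * d                         # scaling shrinks large eigenvalues
    elif s_prev is not None:
        d = weak_secant(d, s_prev, y_prev)  # extra update with the retained pair
    return weak_secant(d, s, y)             # usual update with the current pair
```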

Collaboration


Dive into Wah June Leong's collaborations.

Top Co-Authors

Mansor Monsi
Universiti Putra Malaysia

Mahboubeh Farid
University of Southern Denmark

Farzin Modarres
Universiti Putra Malaysia

Mustafa Mamat
Universiti Sultan Zainal Abidin

Fudziah Ismail
Universiti Putra Malaysia