Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Malik Abu Hassan is active.

Publication


Featured research published by Malik Abu Hassan.


Computers & Mathematics With Applications | 2010

A new two-step gradient-type method for large-scale unconstrained optimization

Mahboubeh Farid; Wah June Leong; Malik Abu Hassan

In this paper, we propose some improvements on a new gradient-type method for solving large-scale unconstrained optimization problems, in which we use data from two previous steps to revise the current approximate Hessian. The new method resembles the Barzilai and Borwein (BB) method. The innovative feature of this approach is that the Hessian is approximated by a diagonal matrix derived from a modified weak secant equation, rather than by the multiple of the identity matrix used in the BB method. With this approach, we obtain a higher-order accuracy of the Hessian approximation than other existing BB-type methods. By incorporating a simple monotone strategy, global convergence of the new method is achieved. Practical insights into the effectiveness of the proposed method are given by numerical comparison with the BB method and its variant.
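The profile does not include code, but the abstract describes a concrete iteration: a gradient step scaled by a diagonal Hessian approximation that is revised from step data. Below is a minimal Python sketch of that pattern. The update rule here is a generic least-change weak-secant form (it enforces s^T D s = s^T y), and the function names are hypothetical; the paper's modified two-step equation and its monotone safeguard are not reproduced.

```python
import numpy as np

def diag_weak_secant_update(d, s, y):
    """Least-change update of a diagonal Hessian approximation (stored as a
    vector d) so that the weak secant condition s^T D s = s^T y holds.
    This generic rule stands in for the paper's modified two-step equation."""
    s2 = s * s
    denom = np.dot(s2, s2)                # sum of s_i^4
    if denom > 1e-12:
        d = d + ((s @ y - s2 @ d) / denom) * s2
    return np.maximum(d, 1e-8)            # keep the approximation positive

def diag_gradient_method(grad, x, tol=1e-6, max_iter=500):
    """BB-type gradient iteration with a diagonal scaling matrix.
    The paper's monotone strategy (which needs function values) is omitted."""
    d = np.ones_like(x)                    # start from the identity, as in BB
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - g / d                  # step scaled by diag(D)^{-1}
        g_new = grad(x_new)
        d = diag_weak_secant_update(d, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x
```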


Computational Optimization and Applications | 2009

A restarting approach for the symmetric rank one update for unconstrained optimization

Wah June Leong; Malik Abu Hassan

Two basic disadvantages of the symmetric rank one (SR1) update are that the SR1 update may not preserve positive definiteness when starting from a positive definite approximation and that the SR1 update can be undefined. A simple remedy to these problems is to restart the update with the initial approximation, usually the identity matrix, whenever these difficulties arise. However, numerical experience shows that restarting with the identity matrix is not a good choice. Instead of the identity matrix, we use a positive multiple of the identity matrix, whose scaling factor is the optimal solution of a measure defined by the following problem: maximize the determinant of the update subject to a bound of one on the largest eigenvalue. This measure is motivated by considering the volume of the symmetric difference of the two ellipsoids which arise from the current and updated quadratic models in quasi-Newton methods. A replacement in the form of a positive multiple of the identity matrix is provided for the SR1 update whenever it is not positive definite or is undefined. Our experiments indicate that with such simple initial scaling, the possibility of an undefined update or the loss of positive definiteness for the SR1 method is avoided on all iterations.
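As a rough illustration of the restart idea described above, here is a minimal Python sketch of one SR1 update with a scaled-identity fallback. The SR1 correction itself is the standard formula; the scaling factor sigma = y^T s / s^T s is an assumed stand-in, since the paper derives its own optimal factor from the determinant-maximization problem.

```python
import numpy as np

def sr1_with_restart(B, s, y, eps=1e-8):
    """One SR1 update of a symmetric Hessian approximation B, restarted
    from a scaled identity when the update would be undefined or would
    lose positive definiteness. sigma below is an assumed stand-in for
    the paper's optimal scaling factor."""
    r = y - B @ s
    rs = r @ s
    if abs(rs) > eps * np.linalg.norm(r) * np.linalg.norm(s):
        B_new = B + np.outer(r, r) / rs    # standard SR1 correction
        if np.all(np.linalg.eigvalsh(B_new) > 0):
            return B_new                    # well defined and positive definite
    sigma = max((y @ s) / (s @ s), eps)     # restart scale (assumed form)
    return sigma * np.eye(len(s))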


Computers & Mathematics With Applications | 2011

A matrix-free quasi-Newton method for solving large-scale nonlinear systems

Wah June Leong; Malik Abu Hassan; Muhammad Yusuf

One of the widely used methods for solving a nonlinear system of equations is the quasi-Newton method. The basic idea underlying this type of method is to approximate the solution of Newton's equation by approximating the Jacobian matrix via a quasi-Newton update. Applying quasi-Newton methods to large-scale problems requires, in principle, vast computational resources to form and store an approximation to the Jacobian matrix of the underlying problem. Hence, this paper proposes an approximation of the Newton step whose computational cost is similar to that of matrix-free settings. This is made possible by approximating the Jacobian by a diagonal matrix using the least-change secant updating strategy commonly employed in the development of quasi-Newton methods. Under suitable assumptions, local convergence of the proposed method is proved for nonsingular systems. Numerical experiments on popular test problems confirm the effectiveness of the approach in comparison with Newton's, the chord Newton, and Broyden's methods.
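To make the matrix-free idea concrete, here is a minimal Python sketch of a diagonal quasi-Newton iteration for F(x) = 0: the Jacobian approximation is a vector, so each step costs O(n). The componentwise secant rule below is a simple stand-in for the paper's least-change secant updating strategy, and the safeguard thresholds are assumptions.

```python
import numpy as np

def diag_jacobian_newton(F, x, tol=1e-8, max_iter=200, eps=1e-8):
    """Matrix-free diagonal quasi-Newton iteration for F(x) = 0.
    The diagonal Jacobian approximation is stored as the vector d; the
    componentwise secant rule is an illustrative stand-in for the
    paper's least-change update."""
    d = np.ones_like(x)                 # initial diagonal Jacobian
    fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        x_new = x - fx / d              # Newton-like step, O(n) work
        f_new = F(x_new)
        s, y = x_new - x, f_new - fx
        mask = np.abs(s) > eps          # update reliable components only
        d[mask] = y[mask] / s[mask]
        d[np.abs(d) < eps] = eps        # keep the diagonal invertible
        x, fx = x_new, f_new
    return x
```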


Journal of Computational and Applied Mathematics | 2011

New quasi-Newton methods via higher order tensor models

Fahimeh Biglari; Malik Abu Hassan; Wah June Leong

Many researchers attempt to improve the efficiency of the usual quasi-Newton (QN) methods by accelerating the performance of the algorithm without increasing storage demand. They aim to employ more of the available information from the function values and gradients to approximate the curvature of the objective function. In this paper we derive a new QN method of this type using a fourth-order tensor model and show that it is superior to the prior modification of Wei et al. (2006) [4]. Convergence analysis gives the local convergence property of this method, and numerical results show the advantage of the modified QN method.
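The general pattern behind such methods is a secant vector corrected with function-value information. The sketch below shows a BFGS update with one published form of this correction (theta follows the pattern of Wei et al.'s modified secant equation, as best recalled here); the paper's own fourth-order tensor correction is not reproduced.

```python
import numpy as np

def modified_bfgs_update(H, s, y, f_old, f_new, g_old, g_new):
    """BFGS update of the inverse Hessian H using a function-value-corrected
    secant vector. The correction theta is illustrative of modified secant
    equations (cf. Wei et al. 2006); the paper derives a higher-order
    correction from a fourth-order tensor model."""
    theta = 2.0 * (f_old - f_new) + (g_old + g_new) @ s
    y_mod = y + (theta / (s @ s)) * s       # corrected secant vector
    sy = s @ y_mod
    if sy <= 1e-10:                          # skip update if curvature fails
        return H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y_mod)
    return V @ H @ V.T + rho * np.outer(s, s)   # standard BFGS form
```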


Stochastics: An International Journal of Probability and Stochastic Processes | 2011

Feedback stabilization and adaptive stabilization of stochastic nonlinear systems by the control Lyapunov function

Fakhreddin Abedi; Malik Abu Hassan; Mohammed Suleiman

The aims of this paper are twofold. On the one hand, we study the asymptotic stability in probability of stochastic differential systems in which both the drift and diffusion terms are affine in the control. We derive sufficient conditions for the existence of control Lyapunov functions (CLFs) leading to stabilizing feedback laws which are smooth, except possibly at the equilibrium state. On the other hand, we consider the previous systems with unknown constant parameters in the drift, introduce the concept of an adaptive CLF for stochastic systems, and use the stochastic version of Florchinger's control law to design an adaptive controller. In this framework, the problem of adaptive stabilization of a nonlinear stochastic system is reduced to the problem of non-adaptive stabilization of a modified system.
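For readers unfamiliar with CLF-based feedback, the sketch below shows a Sontag-type universal formula for a control-affine system, which is the deterministic construction that Florchinger-style stochastic designs build on. The optional Ito correction term is one common way the stochastic case modifies the drift term; all function names are hypothetical and the details are illustrative, not the paper's controller.

```python
import numpy as np

def sontag_feedback(x, f, g, grad_V, hess_V=None, h=None):
    """Sontag-type universal feedback built from a control Lyapunov
    function V for dx = f(x) + g(x) u (+ h(x) dW in the stochastic case).
    a = dV.f, optionally plus the Ito correction 0.5*tr(h^T HessV h);
    b = g^T dV. Illustrative only."""
    dV = grad_V(x)
    a = dV @ f(x)
    if h is not None and hess_V is not None:
        hx = h(x)
        a += 0.5 * np.trace(hx.T @ hess_V(x) @ hx)   # Ito correction
    b = g(x).T @ dV                  # one entry per control input
    b2 = b @ b
    if b2 < 1e-12:                   # no control authority: u = 0
        return np.zeros_like(b)
    return -((a + np.sqrt(a * a + b2 * b2)) / b2) * b
```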


International Journal of Computer Mathematics | 2011

A new gradient method via least change secant update

Wah June Leong; Malik Abu Hassan

The Barzilai–Borwein (BB) gradient method is preferable to the classical steepest descent method both in theory and in real computations. This method takes a ‘fixed’ step size rather than following a set of line-search rules to ensure convergence. Along this line, we present a new approach to the two-point approximation of the quasi-Newton equation within the BB framework, on the basis of a well-known least-change result for the Davidon–Fletcher–Powell update, and propose a new gradient method that belongs to the same class as the BB gradient method, in which the line-search procedure is replaced by a fixed step size. Some preliminary numerical results suggest that improvements have been achieved.
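For context, the fixed-step framework the abstract refers to looks as follows; this minimal sketch uses the classical two-point BB step size alpha = s^T s / s^T y. The paper's DFP-derived step size is not reproduced, so only the framework is shown.

```python
import numpy as np

def bb_gradient(grad, x, alpha0=1.0, tol=1e-6, max_iter=1000):
    """Barzilai-Borwein gradient method: a fixed two-point step size
    replaces the line search. The paper substitutes its own step size
    derived from a DFP least-change result."""
    g = grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g            # fixed step, no line search
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0
        x, g = x_new, g_new
    return x
```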


International Journal of Computer Mathematics | 2011

Structured symmetric rank-one method for unconstrained optimization

Farzin Modarres; Malik Abu Hassan; Wah June Leong

In this paper, we investigate a symmetric rank-one (SR1) quasi-Newton (QN) formula in which the Hessian of the objective function has some special structure. Instead of approximating the whole Hessian via the SR1 formula, we consider an approach which approximates only the part of the Hessian matrix that is not easily acquired. Although the SR1 update possesses desirable features, it is unstable in the sense that it may not retain positive definiteness and may become undefined. Therefore, we describe some safeguards to overcome these difficulties. Since the structured SR1 method provides a more accurate Hessian approximation, the proposed method significantly reduces the computational effort needed to solve a problem. The results of a series of experiments on a typical set of standard unconstrained optimization problems are reported, and show that the structured SR1 method exhibits a clear improvement in numerical performance over some existing QN algorithms.
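The structured idea can be sketched as follows: split the Hessian into an exactly available part C(x) and an approximated part A, and apply the SR1 correction only to A against the residual secant vector. This is the usual structured QN pattern, written as a minimal Python sketch; the paper's specific safeguards may differ from the simple threshold used here.

```python
import numpy as np

def structured_sr1_update(A, C_new, s, y, eps=1e-8):
    """Structured SR1 update: the Hessian is C(x) + A, where C is the
    exactly available part and A approximates the remainder. Only A is
    updated, against the residual secant vector y - C(x_new) s; the
    step then uses C(x_new) + A as the full Hessian approximation."""
    y_res = y - C_new @ s             # secant information left for A
    r = y_res - A @ s
    rs = r @ s
    if abs(rs) > eps * np.linalg.norm(r) * np.linalg.norm(s):
        A = A + np.outer(r, r) / rs   # standard SR1 correction
    return A                           # skip the update when it is unsafe
```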


Computers & Mathematics With Applications | 2011

A symmetric rank-one method based on extra updating techniques for unconstrained optimization

Farzin Modarres; Malik Abu Hassan; Wah June Leong

In this paper, we present a new symmetric rank-one (SR1) method for the solution of unconstrained optimization problems. The proposed method involves an algorithm in which the usual SR1 Hessian approximation is, in some iterations, updated a number of times in a way to be specified, to improve the performance of the Hessian approximation. In particular, we discuss a criterion for indicating, at each iteration, whether it is necessary to employ extra updates. However, it is well known that there are theoretical difficulties in applying the SR1 update: even for a current positive definite Hessian approximation, the SR1 update may be undefined or may not preserve positive definiteness at some iterations. We therefore employ a restarting procedure that guarantees that updated matrices are well defined while preserving positive definiteness. Numerical results support these theoretical considerations and show that implementing the SR1 method with extra updating techniques improves the performance of the SR1 method substantially on a number of test problems from the literature.
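As a rough illustration of "extra updating", the sketch below replays the SR1 correction over a short list of stored curvature pairs within one iteration. The paper applies a specific criterion to decide when extra updates are worthwhile; here every stored pair is simply replayed in order, which is an assumption made for illustration.

```python
import numpy as np

def sr1_extra_updates(B, pairs, eps=1e-8):
    """Apply the SR1 correction repeatedly over recent (s, y) curvature
    pairs, illustrating the extra-updating idea. Unsafe updates (near-zero
    denominator) are skipped, as in standard SR1 practice."""
    for s, y in pairs:
        r = y - B @ s
        rs = r @ s
        if abs(rs) > eps * np.linalg.norm(r) * np.linalg.norm(s):
            B = B + np.outer(r, r) / rs
    return B
```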


Applied Mathematics and Computation | 2011

Scaled memoryless symmetric rank one method for large-scale optimization

Wah June Leong; Malik Abu Hassan

This paper concerns the memoryless quasi-Newton method, that is, the quasi-Newton method for which the approximation to the inverse of the Hessian is, at each step, updated from the identity matrix, so that its search direction can be computed without storing matrices. In this paper, a scaled memoryless symmetric rank one (SR1) method for solving large-scale unconstrained optimization problems is developed. The basic idea is to incorporate the SR1 update within the framework of the memoryless quasi-Newton method. However, it is well known that the SR1 update may not preserve positive definiteness even when updated from a positive definite matrix. We therefore propose a memoryless SR1 method that is updated from a positive scaling of the identity, where the scaling factor is derived in such a way that positive definiteness of the updating matrices is preserved and, at the same time, the conditioning of the scaled memoryless SR1 update is improved. Under very mild conditions it is shown that, for strictly convex objective functions, the method is globally convergent with a linear rate of convergence. Numerical results show that the optimally scaled memoryless SR1 method is very encouraging.
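The matrix-free character of the memoryless SR1 method is easy to show concretely: updating the inverse-Hessian approximation H = gamma*I by one SR1 correction gives H = gamma*I + v v^T / (v^T y) with v = s - gamma*y, so the search direction d = -H g can be assembled from vectors alone. A minimal sketch follows; the paper's derivation of the positive scaling gamma is not reproduced, so gamma is taken as an input here.

```python
import numpy as np

def memoryless_sr1_direction(g, s, y, gamma, eps=1e-8):
    """Search direction of a memoryless SR1 method in O(n) work:
    H = gamma*I + v v^T / (v^T y), v = s - gamma*y, is never formed;
    d = -H g = -gamma*g - (v^T g / v^T y) v."""
    v = s - gamma * y
    vy = v @ y
    if abs(vy) < eps * np.linalg.norm(v) * np.linalg.norm(y):
        return -gamma * g            # SR1 term undefined: fall back to scaling
    return -gamma * g - ((v @ g) / vy) * v
```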


International Conference on Modeling, Simulation, and Applied Optimization | 2011

Two-step diagonal Newton method for large-scale systems of nonlinear equations

Mohammed Yusuf Waziri; Wah June Leong; Malik Abu Hassan; Mansor Monsi

We propose some improvements on a diagonal Newton method for solving large-scale systems of nonlinear equations. In this approach, we use data from two preceding steps to improve the current approximate Jacobian in diagonal form. Via this approach, we can achieve a higher order of accuracy for the Jacobian approximation than other existing diagonal-type Newton methods. The results of our numerical tests demonstrate a clear enhancement in the numerical performance of our proposed method.
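One plausible reading of "data from two preceding steps" is to form the secant pair over the combined step from x_{k-1} to x_{k+1} rather than over a single step; the sketch below illustrates that reading for a diagonal Newton iteration on F(x) = 0. This pairing is an assumption for illustration, not the paper's exact two-step rule.

```python
import numpy as np

def two_step_diag_newton(F, x0, tol=1e-8, max_iter=200, eps=1e-8):
    """Diagonal Newton iteration whose secant pair spans the two preceding
    iterates. The combined-step pairing is an assumed illustration of
    'data from two preceding steps'."""
    d = np.ones_like(x0)               # initial diagonal Jacobian approximation
    x_prev, f_prev = x0, F(x0)
    x = x_prev - f_prev / d            # ordinary first step
    fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        x_new = x - fx / d
        f_new = F(x_new)
        s, y = x_new - x_prev, f_new - f_prev   # differences over two steps
        mask = np.abs(s) > eps                  # update reliable entries only
        d[mask] = y[mask] / s[mask]
        d[np.abs(d) < eps] = eps                # keep the diagonal invertible
        x_prev, f_prev = x, fx
        x, fx = x_new, f_new
    return x
```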

Collaboration


Dive into Malik Abu Hassan's collaborations.

Top Co-Authors

Wah June Leong
Universiti Putra Malaysia

Mansor Monsi
Universiti Putra Malaysia

Fudziah Ismail
Universiti Putra Malaysia

Mahboubeh Farid
Universiti Putra Malaysia

Farzin Modarres
Universiti Putra Malaysia