Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zengxin Wei is active.

Publication


Featured research published by Zengxin Wei.


Applied Mathematics and Computation | 2006

The convergence properties of some new conjugate gradient methods

Zengxin Wei; Shengwei Yao; Liying Liu

In this paper, a new conjugate gradient formula β_k^* is given to compute the search directions for unconstrained optimization problems. General convergence results for the proposed formula with several line searches, such as the exact line search, the Wolfe-Powell line search and the Grippo-Lucidi line search, are discussed. Under these line searches and some assumptions, the global convergence properties of the given methods are established. The given formula satisfies β_k^* ≥ 0 and has a form similar to that of β_k^{PRP}. Preliminary numerical results show that the proposed methods are efficient.
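As a rough illustration of how such a method runs in practice, the sketch below implements a nonlinear conjugate gradient loop with a PRP-type β clipped at zero, which mimics the nonnegativity property β_k^* ≥ 0 noted above. The exact formula β_k^* of the paper is not reproduced; the quadratic test problem and the Armijo backtracking line search are assumptions made only for this example (the paper analyses Wolfe-Powell and Grippo-Lucidi searches).

```python
# Illustrative sketch, not the authors' exact method: nonlinear CG with a
# PRP-type beta clipped at zero.  Test problem and line search are assumed.
import numpy as np

def cg_prp_plus(f, grad, x0, max_iter=200, tol=1e-6):
    x = x0.astype(float)
    g = grad(x)
    d = -g                                     # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0.0:                    # safeguard: restart if not descent
            d = -g
        alpha, c1, rho = 1.0, 1e-4, 0.5        # simple Armijo backtracking
        while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)   # clipped PRP beta
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# usage on a small convex quadratic
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
print(cg_prp_plus(lambda x: 0.5 * x @ A @ x - b @ x,
                  lambda x: A @ x - b,
                  np.zeros(2)))               # converges towards A^{-1} b
```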


Journal of Computational and Applied Mathematics | 2009

A conjugate gradient method with descent direction for unconstrained optimization

Gonglin Yuan; Xiwen Lu; Zengxin Wei

A modified conjugate gradient method is presented for solving unconstrained optimization problems, which possesses the following properties: (i) the sufficient descent property is satisfied without any line search; (ii) the search direction lies in a trust region automatically; (iii) the Zoutendijk condition holds for the Wolfe-Powell line search technique; (iv) the method inherits an important property of the well-known Polak-Ribière-Polyak (PRP) method: the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, preventing a sequence of tiny steps from happening. The global convergence and the linear convergence rate of the given method are established. Numerical results show that this method is interesting.
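Properties (i) and (ii) can be checked numerically for any candidate direction, as in the small sketch below; the constants c1 and c2 are illustrative assumptions, not the specific values derived in the paper.

```python
# Quick check, under assumed constants, of the sufficient descent property
# g^T d <= -c1*||g||^2 and the automatic trust-region bound ||d|| <= c2*||g||.
import numpy as np

def check_direction(g, d, c1=0.25, c2=4.0):
    sufficient_descent = g.dot(d) <= -c1 * g.dot(g)
    in_trust_region = np.linalg.norm(d) <= c2 * np.linalg.norm(g)
    return sufficient_descent, in_trust_region

g = np.array([1.0, -2.0])
d = -g                                   # steepest descent trivially passes
print(check_direction(g, d))             # (True, True)
```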


Computational Optimization and Applications | 2004

The Superlinear Convergence of a Modified BFGS-Type Method for Unconstrained Optimization

Zengxin Wei; Gaohang Yu; Gonglin Yuan; Zhigang Lian

The BFGS method is the most effective of the quasi-Newton methods for solving unconstrained optimization problems. Wei, Li, and Qi [16] have proposed some modified BFGS methods based on the new quasi-Newton equation B_{k+1} s_k = y_k^*, where y_k^* is the sum of y_k and A_k s_k, and A_k is some matrix. The average performance of Algorithm 4.3 in [16] is better than that of the BFGS method, but its superlinear convergence is still open. This article proves the superlinear convergence of Algorithm 4.3 under some suitable conditions.
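A minimal sketch of the kind of update studied here is given below, assuming the modified secant condition B_{k+1} s_k = y_k^* with y_k^* = y_k + t_k s_k for an illustrative scalar t_k; the actual matrix A_k of Algorithm 4.3 in [16] is not reproduced, and the curvature safeguard is an assumption of this sketch.

```python
# Sketch of a BFGS-type update with a modified difference vector y* = y + t*s.
# The scalar t is a hypothetical stand-in for the matrix A_k of [16].
import numpy as np

def modified_bfgs_update(B, s, y, f_old, f_new, g_old, g_new):
    # illustrative correction built from function and gradient values
    t = (2.0 * (f_old - f_new) + (g_new + g_old).dot(s)) / s.dot(s)
    y_star = y + max(t, 0.0) * s
    if y_star.dot(s) <= 1e-12:            # skip the update if curvature fails
        return B
    Bs = B.dot(s)
    # standard BFGS formula applied with y_star in place of y
    return (B
            - np.outer(Bs, Bs) / s.dot(Bs)
            + np.outer(y_star, y_star) / y_star.dot(s))
```

Plugging such an update into a standard quasi-Newton loop with a Wolfe line search gives an algorithm of the general type analyzed in the paper.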


Applied Mathematics and Computation | 2006

New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems

Zengxin Wei; Guoyin Li; Liqun Qi

We propose new conjugate gradient formulas for computing the search directions for unconstrained optimization problems. The new formulas reduce to the conjugate descent formula if exact line searches are made. Some formulas possess the sufficient descent property without any line searches. General convergence results for the proposed formulas with the weak Wolfe-Powell conditions are studied. We prove that some of the formulas, combined with a steplength technique that ensures the Zoutendijk condition holds, are globally convergent. In addition, global convergence results for some other formulas with the standard Armijo line search are also given. Preliminary numerical results show that the proposed methods are very promising.


Computational Optimization and Applications | 2010

Convergence analysis of a modified BFGS method on convex minimizations

Gonglin Yuan; Zengxin Wei

A modified BFGS method is proposed for unconstrained optimization. Global convergence and superlinear convergence for convex functions are established under suitable assumptions. Numerical results show that this method is interesting.


Journal of Computational and Applied Mathematics | 2014

A modified Polak-Ribière-Polyak conjugate gradient algorithm for nonsmooth convex programs

Gonglin Yuan; Zengxin Wei; Guoyin Li

The conjugate gradient (CG) method is one of the most popular methods for solving smooth unconstrained optimization problems due to its simplicity and low memory requirement. However, the usage of CG methods has so far been mainly restricted to smooth optimization problems. The purpose of this paper is to present efficient conjugate gradient-type methods for nonsmooth optimization problems. By using the Moreau-Yosida regularization (smoothing) approach and a nonmonotone line search technique, we propose a modified Polak-Ribière-Polyak (PRP) CG algorithm for solving a nonsmooth unconstrained convex minimization problem. Our algorithm possesses the following three desired properties: (i) the search direction satisfies the sufficient descent property and belongs to a trust region automatically; (ii) the search direction makes use of not only the gradient information but also the function value information; and (iii) the algorithm inherits an important property of the well-known PRP method: the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, preventing a sequence of tiny steps from happening. Under standard conditions, we show that the algorithm converges globally to an optimal solution. Numerical experiments show that our algorithm is effective and suitable for solving large-scale nonsmooth unconstrained convex optimization problems.
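The smoothing step can be illustrated as follows: the nonsmooth convex objective is replaced by its Moreau-Yosida envelope, whose gradient needs only the proximal mapping of the original function. The l1-norm objective and the parameter lam in the sketch are assumptions chosen for illustration; the paper treats a general nonsmooth convex function.

```python
# Sketch of the Moreau-Yosida regularization idea: the nonsmooth f is
# replaced by its smooth envelope F, whose gradient uses only prox_f.
import numpy as np

def prox_l1(x, lam):
    """Proximal mapping of lam*||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_envelope_grad(x, lam=0.5):
    """Gradient of F(x) = min_z ||z||_1 + ||z - x||^2 / (2*lam)."""
    p = prox_l1(x, lam)                   # minimizer z of the envelope problem
    return (x - p) / lam                  # F is continuously differentiable

x = np.array([1.5, -0.2, 0.0, 3.0])
print(moreau_envelope_grad(x))            # smooth surrogate gradient for ||x||_1
```

A CG-type iteration such as the modified PRP scheme can then be applied to this smooth surrogate.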


Computing | 2011

A BFGS trust-region method for nonlinear equations

Gonglin Yuan; Zengxin Wei; Xiwen Lu

In this paper, a new trust-region subproblem combined with the BFGS update is proposed for solving nonlinear equations, where the trust-region radius is defined in a new way. Global convergence without the nondegeneracy assumption and quadratic convergence are obtained under suitable conditions. Numerical results show that this method is more effective than the norm method.
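A hedged sketch of one trust-region step for F(x) = 0 is shown below; the quasi-Newton matrix B stands in for the BFGS approximation, and the simple radius update and acceptance thresholds are illustrative placeholders rather than the paper's new radius rule.

```python
# Illustrative single trust-region step for nonlinear equations (not the
# authors' exact algorithm): the model step is truncated to the radius.
import numpy as np

def trust_region_step(F, x, B, delta):
    Fx = F(x)
    d = np.linalg.solve(B, -Fx)           # quasi-Newton model step
    if np.linalg.norm(d) > delta:         # stay inside the trust region
        d = d * (delta / np.linalg.norm(d))
    # ratio of actual to predicted reduction of ||F||
    pred = np.linalg.norm(Fx) - np.linalg.norm(Fx + B.dot(d))
    ared = np.linalg.norm(Fx) - np.linalg.norm(F(x + d))
    rho = ared / pred if pred > 0 else -1.0
    x_new = x + d if rho > 0.1 else x     # accept only if the model agrees
    delta_new = 2 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x_new, delta_new
```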


Applied Mathematics and Computation | 2007

A descent nonlinear conjugate gradient method for large-scale unconstrained optimization

Gaohang Yu; Yanlin Zhao; Zengxin Wei

In this paper, a new nonlinear conjugate gradient method is proposed for large-scale unconstrained optimization which possesses the following three properties: (i) the sufficient descent property holds without any line searches; (ii) employing a steplength technique that ensures the Zoutendijk condition holds, this method is globally convergent; (iii) this method inherits an important property of the Polak-Ribière-Polyak (PRP) method: the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, preventing a sequence of tiny steps from happening. Preliminary numerical results show that this method is very promising.


Applied Mathematics and Computation | 2005

A nonmonotone trust region method for unconstrained optimization

Jiangtao Mo; Kecun Zhang; Zengxin Wei

In this paper, we propose a nonmonotone trust-region method for unconstrained optimization. Our method can be regarded as a combination of a nonmonotone technique, a fixed steplength and the trust-region method. When a trial step is not accepted, the method does not resolve the subproblem but generates a new iterate whose steplength is defined by a formula. An increase in the function value is allowed only when trial steps are not accepted in close succession of iterations. Under mild conditions, we prove that the algorithm is globally and superlinearly convergent. Preliminary numerical results are reported.
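The nonmonotone idea can be sketched as follows: a trial point is measured against the worst of the last few function values rather than against f(x_k) alone, so occasional increases are tolerated. The window length and acceptance threshold below are illustrative choices, not the parameters analyzed in the paper.

```python
# Sketch of a nonmonotone acceptance test using a short history of f-values.
from collections import deque

def nonmonotone_accept(f_trial, pred_reduction, recent_f, eta=0.1):
    """Accept if the reduction measured against the worst recent value is at
    least a fraction eta of the predicted reduction (illustrative rule)."""
    f_ref = max(recent_f)                     # nonmonotone reference value
    return (f_ref - f_trial) >= eta * pred_reduction

history = deque([5.0, 4.2, 4.6], maxlen=5)    # last few values of f
print(nonmonotone_accept(4.5, 0.3, history))  # True, although 4.5 > 4.2
```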


Applied Mathematics and Computation | 2007

A note about WYL's conjugate gradient method and its applications

Yao Shengwei; Zengxin Wei; Hai Huang

This paper briefly reviews the development of different versions of nonlinear conjugate gradient methods, with special attention given to the WYL method, which was proposed in [Zengxin Wei et al., The convergence properties of some conjugate gradient methods, Applied Mathematics and Computation 183 (2006) 1341-1350], and its applications.

Collaboration


Dive into Zengxin Wei's collaborations.

Top Co-Authors

Liqun Qi
Hong Kong Polytechnic University

Guoyin Li
University of New South Wales

Gaohang Yu
Sun Yat-sen University

Xiwen Lu
East China University of Science and Technology