
Publication


Featured research published by Ya-Xiang Yuan.


SIAM Journal on Optimization | 1999

A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property

Yu-Hong Dai; Ya-Xiang Yuan

Conjugate gradient methods are widely used for unconstrained optimization, especially for large-scale problems. The strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method, which converges globally, provided the line search satisfies the standard Wolfe conditions. The conditions on the objective function are also weak, being similar to those required by the Zoutendijk condition.
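
To make the iteration concrete, below is a minimal Python sketch of a conjugate gradient loop using the Dai-Yuan choice beta = ||g_new||^2 / d^T(g_new - g). The theory assumes a Wolfe line search, under which the denominator of beta is guaranteed positive; the backtracking search here is a simplified stand-in, and all function names and tolerances are illustrative, not from the paper.

import numpy as np

def dai_yuan_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Nonlinear CG with the Dai-Yuan beta (illustrative sketch).
    A proper Wolfe line search is assumed by the convergence
    theory; backtracking is used here for brevity."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # backtracking (Armijo) line search, a stand-in for Wolfe
        alpha, fx, slope = 1.0, f(x), g.dot(d)
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Dai-Yuan formula: beta = ||g_new||^2 / d^T (g_new - g)
        beta = g_new.dot(g_new) / d.dot(g_new - g)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x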


SIAM Journal on Numerical Analysis | 1987

Global Convergence of a Class of Quasi-Newton Methods on Convex Problems

Richard H. Byrd; Jorge Nocedal; Ya-Xiang Yuan

We study the global convergence properties of the restricted Broyden class of quasi-Newton methods, when applied to a convex objective function. We assume that the line search satisfies a standard sufficient decrease condition and that the initial Hessian approximation is any positive definite matrix. We show global and superlinear convergence for this class of methods, except for DFP. This generalizes Powell’s well-known result for the BFGS method. The analysis gives us insight into the properties of these algorithms; in particular it shows that DFP lacks a very desirable self-correcting property possessed by BFGS.
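
As an illustration, the Broyden-class update can be sketched as follows: phi = 0 recovers BFGS and phi = 1 recovers DFP, and the paper's convergence result covers the restricted class phi in [0, 1). Variable names are generic, not the paper's notation.

import numpy as np

def broyden_class_update(B, s, y, phi=0.0):
    """One Broyden-class update of the Hessian approximation B,
    with s = x_new - x and y = grad_new - grad. phi = 0 gives
    BFGS, phi = 1 gives DFP."""
    Bs = B @ s
    sBs = s @ Bs                 # curvature along s measured by B
    ys = y @ s                   # positive under the Wolfe conditions
    bfgs = B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / ys
    v = y / ys - Bs / sBs
    return bfgs + phi * sBs * np.outer(v, v)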


Mathematical Programming | 1990

A trust region algorithm for equality constrained optimization

M. J. D. Powell; Ya-Xiang Yuan

A trust region algorithm for equality constrained optimization is proposed that employs a differentiable exact penalty function. Under certain conditions global convergence and local superlinear convergence results are proved.


Annals of Operations Research | 2001

An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization

Yu-Hong Dai; Ya-Xiang Yuan

Recently, we proposed a nonlinear conjugate gradient method that produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the scalar βk, measured relative to the one in the new method, belongs to some interval, then the corresponding methods are proved to be globally convergent; otherwise, we construct a convex quadratic example showing that the methods need not converge. Numerical experiments are reported for two combinations of the new method and the Hestenes–Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.
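
For illustration, one commonly cited hybrid of this kind truncates the Hestenes-Stiefel scalar with the Dai-Yuan one; the sketch below shows that form, though the exact combinations studied in the paper may differ.

import numpy as np

def hybrid_beta(g_new, g_old, d):
    """Hybrid CG scalar: beta = max(0, min(beta_HS, beta_DY)).
    Both scalars share the denominator d^T y, y = g_new - g_old."""
    y = g_new - g_old
    dy = d.dot(y)
    beta_hs = g_new.dot(y) / dy
    beta_dy = g_new.dot(g_new) / dy
    return max(0.0, min(beta_hs, beta_dy))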


Computing | 2005

On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption

Jinyan Fan; Ya-Xiang Yuan

Recently, Yamashita and Fukushima [11] established an interesting quadratic convergence result for the Levenberg-Marquardt method without the nonsingularity assumption. This paper extends the result of Yamashita and Fukushima by using μk = ||F(xk)||^δ, where δ ∈ [1,2], instead of μk = ||F(xk)||^2 as the Levenberg-Marquardt parameter. If ||F(x)|| provides a local error bound for the system of nonlinear equations F(x) = 0, it is shown that the sequence {xk} generated by the new method converges to a solution quadratically, which is stronger than dist(xk, X*) → 0 given by Yamashita and Fukushima. Numerical results show that the method performs well for singular problems.
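
A minimal sketch of one iteration with this parameter choice, assuming the residual F and its Jacobian J are available as callables; globalization safeguards (for example, trust region control of the step) are omitted.

import numpy as np

def lm_step(F, J, x, delta=1.0):
    """One Levenberg-Marquardt step with the parameter choice
    mu_k = ||F(x_k)||^delta, delta in [1, 2]."""
    Fx, Jx = F(x), J(x)
    mu = np.linalg.norm(Fx) ** delta
    # LM direction: solve (J^T J + mu I) p = -J^T F
    p = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ Fx)
    return x + p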


Mathematical Programming | 1990

On a subproblem of trust region algorithms for constrained optimization

Ya-Xiang Yuan

We study a subproblem that arises in some trust region algorithms for equality constrained optimization. It is the minimization of a general quadratic function with two special quadratic constraints. Properties of such subproblems are given. It is proved that the Hessian of the Lagrangian has at most one negative eigenvalue, and an example is presented to show that the Hessian may have a negative eigenvalue when one constraint is inactive at the solution.
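
For context, the subproblem in question, often associated with Celis, Dennis, and Tapia, can be written in the following standard form; the symbols g, B, A, c, Δ, ξ are generic names for this illustration, not necessarily the paper's notation.

\min_{d \in \mathbb{R}^n} \; g^T d + \tfrac{1}{2}\, d^T B d
\quad \text{subject to} \quad \|d\|_2 \le \Delta, \qquad \|A^T d + c\|_2 \le \xi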


Mathematical Programming | 1986

A recursive quadratic programming algorithm that uses differentiable exact penalty functions

M. J. D. Powell; Ya-Xiang Yuan

In this paper, a recursive quadratic programming algorithm for solving equality constrained optimization problems is proposed and studied. The line search functions used are approximations to Fletcher's differentiable exact penalty function. Global convergence and local superlinear convergence results are proved, and some numerical results are given.
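
As a sketch of the core step, each iteration of such a recursive quadratic programming (SQP) method solves the KKT system of a quadratic subproblem. The differentiable exact penalty line search that drives the paper's global convergence result is omitted here, and B stands for a Hessian approximation (an assumption of this sketch).

import numpy as np

def sqp_step(grad_f, jac_c, c, B, x):
    """One equality-constrained recursive QP step: solve the KKT
    system of  min g^T d + 0.5 d^T B d  subject to  A d + c = 0."""
    g, A, cx = grad_f(x), jac_c(x), c(x)
    n, m = x.size, cx.size
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, cx]))
    d, lam = sol[:n], sol[n:]      # step and multiplier estimate
    return x + d, lam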


SIAM Journal on Optimization | 1997

Optimality Conditions for the Minimization of a Quadratic with Two Quadratic Constraints

Jiming Peng; Ya-Xiang Yuan

The trust region method has proven very successful in both unconstrained and constrained optimization. It requires computing the global minimum of a general quadratic function subject to an ellipsoidal constraint. In this paper, we generalize the trust region subproblem by allowing two general quadratic constraints. Conditions and properties of its solution are discussed.
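
Stated generically, the generalized subproblem takes the following form, where each qi is an arbitrary quadratic; the notation is assumed for illustration.

\min_{x \in \mathbb{R}^n} \; q_0(x) \quad \text{subject to} \quad
q_1(x) \le 0, \;\; q_2(x) \le 0,
\qquad q_i(x) = \tfrac{1}{2}\, x^T B_i x + b_i^T x + c_i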


Computational Optimization and Applications | 2002

Modified Two-Point Stepsize Gradient Methods for Unconstrained Optimization

Yu-Hong Dai; Jin Yun Yuan; Ya-Xiang Yuan

For unconstrained optimization, the two-point stepsize gradient method is preferable to the classical steepest descent method both in theory and in real computations. In this paper we interpret the choice of stepsize in the two-point stepsize gradient method from the viewpoint of interpolation and propose two modified two-point stepsize gradient methods. The modified methods are globally convergent under some mild assumptions on the objective function. Numerical results are reported, which suggest that improvements have been achieved.
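
For reference, here is a minimal sketch of the basic two-point stepsize (Barzilai-Borwein) iteration that the paper modifies. The specific interpolation-based modifications proposed in the paper are not reproduced, and the initial stepsize alpha0 is an arbitrary choice of this sketch.

import numpy as np

def bb_gradient(grad, x0, alpha0=1e-3, n_iter=100):
    """Basic two-point stepsize (Barzilai-Borwein) gradient method.
    alpha = s^T s / s^T y uses secant information from the two
    most recent points."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = s.dot(s) / s.dot(y)    # BB stepsize
        x, g = x_new, g_new
    return x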


Mathematical Programming | 2000

On the truncated conjugate gradient method

Ya-Xiang Yuan

In this paper, we consider the truncated conjugate gradient method for minimizing a convex quadratic function subject to a ball trust region constraint. It is shown that the reduction in the objective function achieved by the solution obtained by the truncated CG method is at least half of the reduction achieved by the global minimizer in the trust region.
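
A minimal sketch of a truncated (Steihaug-type) CG iteration for this subproblem; preconditioning is omitted and the termination rules are simplified relative to practical implementations.

import numpy as np

def truncated_cg(B, g, delta, tol=1e-8, max_iter=100):
    """Truncated CG for the ball trust region subproblem
    min g^T d + 0.5 d^T B d, ||d|| <= delta. The paper shows the
    reduction it achieves is at least half that of the exact
    global minimizer."""
    def to_boundary(d, p):
        # positive tau with ||d + tau p|| = delta
        a, b, c = p.dot(p), 2 * d.dot(p), d.dot(d) - delta**2
        return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

    d = np.zeros_like(g)
    r = -g.copy()
    p = r.copy()
    for _ in range(max_iter):
        Bp = B @ p
        pBp = p.dot(Bp)
        if pBp <= 0:                          # negative curvature
            return d + to_boundary(d, p) * p
        alpha = r.dot(r) / pBp
        if np.linalg.norm(d + alpha * p) >= delta:
            return d + to_boundary(d, p) * p  # step hits the boundary
        d_new = d + alpha * p
        r_new = r - alpha * Bp
        if np.linalg.norm(r_new) < tol:
            return d_new
        beta = r_new.dot(r_new) / r.dot(r)
        p = r_new + beta * p
        d, r = d_new, r_new
    return d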

Collaboration


Dive into Ya-Xiang Yuan's collaborations.

Top Co-Authors

Yu-Hong Dai
Chinese Academy of Sciences

Xin Liu
Chinese Academy of Sciences

Jin Yun Yuan
Federal University of Paraná

Xiao Wang
Chinese Academy of Sciences

Zaiwen Wen
Shanghai Jiao Tong University

Yanfei Wang
Chinese Academy of Sciences

Cong Sun
Beijing University of Posts and Telecommunications