Publications


Featured research published by Yu-Hong Dai.


SIAM Journal on Optimization | 1999

A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property

Yu-Hong Dai; Ya-Xiang Yuan

Conjugate gradient methods are widely used for unconstrained optimization, especially for large-scale problems. The strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method, which converges globally provided that the line search satisfies the standard Wolfe conditions. The conditions on the objective function are also weak, being similar to those required by the Zoutendijk condition.
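
For a concrete picture of the iteration described above, here is a minimal sketch of a nonlinear conjugate gradient loop using the beta formula commonly attributed to this paper (the Dai-Yuan choice), stated from standard references rather than from the abstract itself. The line search is SciPy's, which enforces the strong Wolfe conditions; the paper only requires the standard Wolfe conditions, so this is an approximation of the setting.

```python
# Sketch of nonlinear CG with the Dai-Yuan beta (from standard references,
# not quoted from the paper).  scipy's line_search enforces the strong
# Wolfe conditions, stricter than the standard Wolfe conditions the paper needs.
import numpy as np
from scipy.optimize import line_search


def dy_conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                     # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(f, grad, x, d)[0]  # Wolfe line search
        if alpha is None:                      # line search failed; crude restart
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta = g_new.dot(g_new) / d.dot(y)     # Dai-Yuan formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x


# Example: minimize the Rosenbrock function
if __name__ == "__main__":
    from scipy.optimize import rosen, rosen_der
    print(dy_conjugate_gradient(rosen, rosen_der, np.array([-1.2, 1.0])))
```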


Applied Mathematics and Optimization | 2001

New Conjugacy Conditions and Related Nonlinear Conjugate Gradient Methods

Yu-Hong Dai; Li-Zhi Liao

Conjugate gradient methods are a class of important methods for unconstrained optimization, especially when the dimension is large. This paper proposes a new conjugacy condition, which accounts for an inexact line search but reduces to the classical condition when the line search is exact. Based on the new conjugacy condition, two nonlinear conjugate gradient methods are constructed. Convergence analysis for the two methods is provided. Our numerical results show that one of the methods is very efficient for the given test problems.
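
As a sketch of the idea, the conjugacy condition usually associated with this work is stated below from standard references, not from the abstract itself: the classical condition requires the new direction to be orthogonal to the gradient difference, while the extended one keeps an inexact-line-search term and recovers the classical condition when the line search is exact (so that g_{k+1}^T s_k = 0).

```latex
% Sketch from standard statements of this conjugacy condition (assumed
% notation): classical conjugacy asks d_{k+1}^T y_k = 0 with
% y_k = g_{k+1} - g_k; the extended condition is
\[
  d_{k+1}^{T} y_k \;=\; -\,t\, g_{k+1}^{T} s_k , \qquad t \ge 0,
  \qquad y_k = g_{k+1} - g_k, \quad s_k = x_{k+1} - x_k .
\]
```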


Numerische Mathematik | 2005

Projected Barzilai-Borwein methods for large-scale box-constrained quadratic programming

Yu-Hong Dai; Roger Fletcher

This paper studies projected Barzilai-Borwein (PBB) methods for large-scale box-constrained quadratic programming. Recent work has modified the PBB method by incorporating the Grippo-Lampariello-Lucidi (GLL) nonmonotone line search, so as to enable global convergence to be proved. We show by many numerical experiments that the performance of the PBB method deteriorates if the GLL line search is used. We therefore consider whether the unmodified method is globally convergent, and show that it is not by exhibiting a counterexample in which the method cycles. A new projected gradient method (PABB) is then considered that alternately uses the two Barzilai-Borwein steplengths. We also give an example in which this method may cycle, although its practical performance is superior to the PBB method. With the aim of both ensuring global convergence and preserving the good numerical performance of the unmodified methods, we examine other recent work on nonmonotone line searches and propose a new adaptive variant with some attractive features. Further numerical experiments show that the PABB method with the adaptive line search is the best BB-like method in the positive definite case, and it compares reasonably well against the GPCG algorithm of Moré and Toraldo. In the indefinite case, the PBB method with the adaptive line search is shown on some examples to find local minima with better solution values, and hence may be preferred for this reason.
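
The basic projected BB iteration discussed above can be sketched as follows; this is a generic illustration of the PABB idea (alternating the two BB steplengths and projecting onto the box), not the paper's exact algorithm, and it omits the adaptive nonmonotone safeguard the paper proposes.

```python
# Minimal sketch of a projected Barzilai-Borwein iteration for
# min 0.5*x'Ax - b'x  subject to  l <= x <= u  (A symmetric positive definite).
import numpy as np


def pabb_box_qp(A, b, l, u, x0, tol=1e-8, max_iter=5000):
    x = np.clip(x0, l, u)
    g = A @ x - b                              # gradient of the quadratic
    alpha = 1.0 / np.linalg.norm(g)            # conservative first steplength (assumption)
    for k in range(max_iter):
        x_new = np.clip(x - alpha * g, l, u)   # projection onto the box
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        if k % 2 == 0:                         # alternate the two BB steplengths
            alpha = s.dot(s) / max(s.dot(y), 1e-16)
        else:
            alpha = s.dot(y) / max(y.dot(y), 1e-16)
        x, g = x_new, g_new
    return x
```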


Annals of Operations Research | 2001

An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization

Yu-Hong Dai; Ya-Xiang Yuan

Recently, we proposed a nonlinear conjugate gradient method that produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the scalar βk, relative to the one in the new method, belongs to some interval, then the corresponding methods are proved to be globally convergent; otherwise, we construct a convex quadratic example showing that the methods need not converge. Numerical experiments are made for two combinations of the new method and the Hestenes-Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.
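
The hybrid choice of βk commonly cited for this line of work truncates the Hestenes-Stiefel value by the Dai-Yuan value; the formula below is stated from standard references as a sketch, not quoted from the abstract.

```latex
% Commonly cited hybrid choice (standard references; y_k = g_{k+1} - g_k):
\[
  \beta_k \;=\; \max\!\Bigl\{0,\; \min\bigl\{\beta_k^{HS},\, \beta_k^{DY}\bigr\}\Bigr\},
  \qquad
  \beta_k^{HS} = \frac{g_{k+1}^{T} y_k}{d_k^{T} y_k},
  \qquad
  \beta_k^{DY} = \frac{\|g_{k+1}\|^{2}}{d_k^{T} y_k}.
\]
```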


IEEE Transactions on Signal Processing | 2011

Coordinated Beamforming for MISO Interference Channel: Complexity Analysis and Efficient Algorithms

Ya-Feng Liu; Yu-Hong Dai; Zhi-Quan Luo

In a cellular wireless system, users located at cell edges often suffer significant out-of-cell interference. Assuming each base station is equipped with multiple antennas, we can model this scenario as a multiple-input single-output (MISO) interference channel. In this paper we consider a coordinated beamforming approach whereby multiple base stations jointly optimize their downlink beamforming vectors in order to simultaneously improve the data rates of a given group of cell edge users. Assuming perfect channel knowledge, we formulate this problem as the maximization of a system utility (which balances user fairness and average user rates), subject to individual power constraints at each base station. We show that, for the single-carrier case and when the number of antennas at each base station is at least two, the optimal coordinated beamforming problem is NP-hard for both the harmonic mean utility and the proportional fairness utility. For general utilities, we propose a cyclic coordinate descent algorithm that enables each transmitter to update its beamformer locally with limited information exchange, and we establish its global convergence to a stationary point. We illustrate its effectiveness in computer simulations, using the space matched beamformer as the benchmark.
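
A standard statement of this type of coordinated beamforming problem is sketched below; the notation is assumed for illustration and is not taken from the paper. Base station k serves user k with beamformer w_k, h_{jk} is the channel from base station j to user k, σ_k² the noise power, and P_k the per-station power budget.

```latex
% Generic coordinated beamforming formulation (sketch, assumed notation):
\[
  \max_{w_1,\dots,w_K}\; U\bigl(R_1,\dots,R_K\bigr)
  \quad \text{s.t.} \quad \|w_k\|^2 \le P_k, \;\; k = 1,\dots,K,
  \qquad
  R_k = \log_2\!\Bigl(1 + \frac{|h_{kk}^{H} w_k|^{2}}
        {\sigma_k^{2} + \sum_{j \ne k} |h_{jk}^{H} w_j|^{2}}\Bigr).
\]
```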


Journal of Optimization Theory and Applications | 2002

On the nonmonotone line search

Yu-Hong Dai

The technique of nonmonotone line search has received many successful applications and extensions in nonlinear optimization. This paper provides some basic analyses of the nonmonotone line search. Specifically, we analyze nonmonotone line search methods for general nonconvex functions along different lines. The analyses are helpful in establishing the global convergence of a nonmonotone line search method under weaker conditions on the search direction. We also explore the relations between nonmonotone line search and R-linear convergence, assuming that the objective function is uniformly convex. In addition, by taking the inexact Newton method as an example, we observe a numerical drawback of the original nonmonotone line search and suggest falling back to a standard Armijo line search when the nonmonotone condition is not satisfied by the prior trial steplength. The numerical results show the usefulness of this suggestion for the inexact Newton method.
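
For reference, the original nonmonotone (Grippo-Lampariello-Lucidi) acceptance condition discussed above is stated below from standard references, not quoted from the paper: a trial steplength is accepted if it produces sufficient decrease relative to the maximum of the last few function values rather than the most recent one.

```latex
% GLL nonmonotone Armijo-type condition (standard references): accept alpha_k if
\[
  f(x_k + \alpha_k d_k) \;\le\;
  \max_{0 \le j \le m(k)} f(x_{k-j}) \;+\; \delta\, \alpha_k\, g_k^{T} d_k ,
  \qquad 0 < \delta < 1,
\]
% where m(0) = 0 and 0 <= m(k) <= min(m(k-1) + 1, M) bounds the memory.
```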


SIAM Journal on Optimization | 2002

Convergence Properties of the BFGS Algorithm

Yu-Hong Dai

The BFGS method is one of the most famous quasi-Newton algorithms for unconstrained optimization. In 1984, Powell presented an example of a function of two variables showing that the Polak-Ribière-Polyak (PRP) conjugate gradient method and the BFGS quasi-Newton method may cycle around eight nonstationary points if each line search picks a local minimum that provides a reduction in the objective function. In this paper, a new technique for choosing parameters is introduced, and an example with only six cyclic points is provided. It is also noted through these examples that the BFGS method with Wolfe line searches need not converge for nonconvex objective functions.
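
As a refresher on the method under discussion (a standard statement, not taken from the paper), the BFGS update of the approximate Hessian and the resulting search direction are:

```latex
% Standard BFGS update (s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k):
\[
  B_{k+1} \;=\; B_k \;-\; \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k}
  \;+\; \frac{y_k y_k^{T}}{y_k^{T} s_k},
  \qquad d_{k+1} = -B_{k+1}^{-1} g_{k+1}.
\]
```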


Mathematical Programming | 2006

New algorithms for singly linearly constrained quadratic programs subject to lower and upper bounds

Yu-Hong Dai; Roger Fletcher

There are many applications related to singly linearly constrained quadratic programs subject to lower and upper bounds. In this paper, a new algorithm based on secant approximation is provided for the case in which the Hessian matrix is diagonal and positive definite. To deal with the general case where the Hessian is not diagonal, a new efficient projected gradient algorithm is proposed. The basic features of the projected gradient algorithm are: 1) a new formula is used for the stepsize; 2) a recently established adaptive nonmonotone line search is incorporated; and 3) the optimal stepsize is determined by quadratic interpolation if the nonmonotone line search criterion fails to be satisfied. Numerical experiments on large-scale random test problems and some medium-scale quadratic programs arising in the training of Support Vector Machines demonstrate the usefulness of these algorithms.
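
To illustrate the diagonal case mentioned above: once the multiplier of the single linear constraint is fixed, the box-constrained subproblem separates coordinate-wise, so the whole problem reduces to a one-dimensional root-finding problem in that multiplier. The sketch below uses plain bisection for that root-find; the paper builds a secant approximation, so this is an illustration of the structure rather than the paper's algorithm, and it assumes the problem is feasible.

```python
# Minimal sketch for the diagonal, positive definite case:
#   min 0.5*sum(d_i x_i^2) - sum(c_i x_i)  s.t.  a'x = r,  l <= x <= u.
import numpy as np


def diagonal_qp(d, c, a, r, l, u, tol=1e-10):
    def x_of(lam):
        # Coordinate-wise minimizer of the Lagrangian, clipped to the box.
        return np.clip((c + lam * a) / d, l, u)

    def phi(lam):
        # Residual of the linear constraint; monotone nondecreasing in lam.
        return a.dot(x_of(lam)) - r

    lo, hi = -1.0, 1.0
    while phi(lo) > 0:                  # bracket the root (assumes feasibility)
        lo *= 2.0
    while phi(hi) < 0:
        hi *= 2.0
    while hi - lo > tol:                # bisection on the multiplier
        mid = 0.5 * (lo + hi)
        if phi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return x_of(0.5 * (lo + hi))
```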


SIAM Journal on Optimization | 2013

A Nonlinear Conjugate Gradient Algorithm with an Optimal Property and an Improved Wolfe Line Search

Yu-Hong Dai; Cai-Xia Kou

In this paper, we seek the conjugate gradient direction closest to the direction of the scaled memoryless BFGS method and propose a family of conjugate gradient methods for unconstrained optimization. An improved Wolfe line search is also proposed, which can avoid a numerical drawback of the original Wolfe line search and guarantee the global convergence of the conjugate gradient method under mild conditions. To accelerate the algorithm, we introduce adaptive restarts along negative gradients based on the extent to which the function approximates some quadratic function during previous iterations. Numerical experiments with the CUTEr collection show that the proposed algorithm is promising.


Computational Optimization and Applications | 2006

Gradient Methods with Adaptive Step-Sizes

Bin Zhou; Li Gao; Yu-Hong Dai

Motivated by the superlinear behavior of the Barzilai-Borwein (BB) method for two-dimensional quadratics, we propose two gradient methods which adaptively choose a small step-size or a large step-size at each iteration. The small step-size is primarily used to induce a favorable descent direction for the next iteration, while the large step-size is primarily used to produce a sufficient reduction. Although the new algorithms are still linearly convergent in the quadratic case, numerical experiments on some typical test problems indicate that they compare favorably with the BB method and some other efficient gradient methods.
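
The two Barzilai-Borwein steplengths referred to above are, in their standard form (stated from common references, not from the abstract):

```latex
% Standard BB steplengths (s_{k-1} = x_k - x_{k-1}, y_{k-1} = g_k - g_{k-1}):
\[
  \alpha_k^{BB1} = \frac{s_{k-1}^{T} s_{k-1}}{s_{k-1}^{T} y_{k-1}},
  \qquad
  \alpha_k^{BB2} = \frac{s_{k-1}^{T} y_{k-1}}{y_{k-1}^{T} y_{k-1}},
\]
% with BB1 >= BB2 by Cauchy-Schwarz when s'y > 0; adaptive methods of the
% kind described above switch between such long and short steps.
```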

Collaboration


Dive into Yu-Hong Dai's collaborations.

Top Co-Authors

Ya-Feng Liu (Chinese Academy of Sciences)
Ya-Xiang Yuan (Chinese Academy of Sciences)
Zhi-Quan Luo (The Chinese University of Hong Kong)
Bo Jiang (Chinese Academy of Sciences)
Cai-Xia Kou (Beijing University of Posts and Telecommunications)
Deren Han (Nanjing Normal University)
Fengmin Xu (Xi'an Jiaotong University)
Rui Diao (Chinese Academy of Sciences)
Li-Zhi Liao (Hong Kong Baptist University)
Shiqian Ma (The Chinese University of Hong Kong)