Zhiyou Wu
Chongqing Normal University
Publications
Featured research published by Zhiyou Wu.
Mathematical Programming | 2007
V. Jeyakumar; Alex M. Rubinov; Zhiyou Wu
In this paper, we first examine how global optimality of non-convex constrained optimization problems is related to Lagrange multiplier conditions. We then establish Lagrange multiplier conditions for global optimality of general quadratic minimization problems with quadratic constraints. We also obtain necessary global optimality conditions, which are different from the Lagrange multiplier conditions for special classes of quadratic optimization problems. These classes include weighted least squares with ellipsoidal constraints, and quadratic minimization with binary constraints. We discuss examples which demonstrate that our optimality conditions can effectively be used for identifying global minimizers of certain multi-extremal non-convex quadratic optimization problems.
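As a hypothetical illustration of why such global optimality certificates matter (this instance is not from the paper): a quadratic minimization problem with binary constraints is multi-extremal, so without a verifiable condition one must in principle enumerate all binary points to certify a global minimizer. A brute-force check on a small 3-variable instance:

```python
# Toy binary-constrained quadratic: f(x) = x^T A x + b^T x, x in {-1, 1}^3.
# A and b are made-up data, chosen only to produce several competing minima.
import itertools

A = [[2.0, -1.0, 0.0],
     [-1.0, 3.0, -1.0],
     [0.0, -1.0, 2.0]]
b = [1.0, -2.0, 0.5]

def f(x):
    quad = sum(A[i][j] * x[i] * x[j] for i in range(3) for j in range(3))
    lin = sum(b[i] * x[i] for i in range(3))
    return quad + lin

# Exhaustive enumeration of all 2^3 binary points: feasible only for tiny n,
# which is exactly what analytic global optimality conditions help avoid.
best = min(itertools.product((-1.0, 1.0), repeat=3), key=f)
print(best, f(best))   # (1.0, 1.0, 1.0) 2.5
```

Here the runner-up point (-1, -1, -1) attains value 3.5, so a purely local test cannot distinguish the global minimizer among the candidates.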
Journal of Global Optimization | 2006
V. Jeyakumar; Alex M. Rubinov; Zhiyou Wu
In this paper we establish conditions which ensure that a feasible point is a global minimizer of a quadratic minimization problem subject to box constraints or binary constraints. In particular, we show that our conditions provide a complete characterization of global optimality for non-convex weighted least squares minimization problems. We present a new approach which makes use of a global subdifferential. It is formed by a set of functions which are not necessarily linear functions, and it enjoys explicit descriptions for quadratic functions. We also provide numerical examples to illustrate our optimality conditions.
Journal of Global Optimization | 2007
Zhiyou Wu; Fu-Sheng Bai; H. W. J. Lee; Y. J. Yang
In this paper, a filled function method for solving constrained global optimization problems is proposed. A new filled function is constructed for escaping the current local minimizer of a constrained global optimization problem, combining the idea of filled functions from unconstrained global optimization with that of penalty functions from constrained optimization. A filled function method for obtaining a global minimizer, or an approximate global minimizer, of the constrained problem is then presented. Numerical results demonstrate the efficiency of the method on constrained global optimization problems.
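The escape mechanism can be sketched in one dimension. The sketch below uses the classical Ge-type filled function rather than the function constructed in this paper, and the objective, the parameters r and rho, and the grid-descent routine are all illustrative assumptions:

```python
# Filled-function idea in 1-D: after local search stalls at a local minimizer
# x1, descend on an auxiliary "filled" function until f drops below f(x1)
# (i.e. a lower basin has been reached), then restart the local search.
import math

def f(x):
    return x ** 4 - 8.0 * x ** 2 + 3.0 * x   # two basins; global minimizer near -2.09

def local_descent(obj, x, step=1e-3):
    """Crude grid descent: step toward a lower neighbour until neither improves."""
    while True:
        fx = obj(x)
        if obj(x - step) < fx:
            x -= step
        elif obj(x + step) < fx:
            x += step
        else:
            return x

x1 = local_descent(f, 2.0)   # stalls at the shallow local minimizer near 1.90

def filled(x, r=30.0, rho=4.0):
    # Ge-style filled function; r is chosen so r + f(x) > 0 on the region searched
    return math.exp(-(x - x1) ** 2 / rho) / (r + f(x))

# Descend on the filled function from just off x1 until f drops below f(x1).
x, step = x1 - 0.01, 1e-3
while f(x) >= f(x1):
    x = x - step if filled(x - step) < filled(x + step) else x + step

x2 = local_descent(f, x)     # ordinary local search now lands in the global basin
print(x2 < x1 and f(x2) < f(x1))   # True
```

In practice one restarts from several directions around x1 and couples the auxiliary function with a penalty term to handle the constraints, which is the combination the paper develops.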
Optimization | 2004
Zhiyou Wu; Fu-Sheng Bai; Xiaoqi Yang; Lian-Sheng Zhang
In this article, we consider a lower order penalty function and its ε-smoothing for an inequality constrained nonlinear programming problem. It is shown that any strict local minimum satisfying the second-order sufficiency condition for the original problem is a strict local minimum of the lower order penalty function with any positive penalty parameter. By using an ε-smoothing approximation to the lower order penalty function, we get a modified smooth global exact penalty function under mild assumptions.
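Both claims can be observed on a hypothetical 1-D instance (not from the article): f(x) = x^2 with the constraint g(x) = 1 - x <= 0, whose constrained minimizer is x = 1. The smoothing form below is one simple illustrative choice and may differ from the article's ε-smoothing:

```python
def f(x):
    return x * x                      # objective

def g(x):
    return 1.0 - x                    # constraint g(x) <= 0, i.e. x >= 1

def lower_order_penalty(x, rho):
    # lower-order (k = 1/2) penalty: non-Lipschitz at the constraint boundary
    return f(x) + rho * max(0.0, g(x)) ** 0.5

def smoothed_penalty(x, rho, eps):
    # one simple illustrative eps-smoothing of t -> max(0, t)^(1/2);
    # the article's smoothing function may differ in form
    t = max(0.0, g(x))
    return f(x) + rho * (t * t + eps * eps) ** 0.25

# x = 1 is a strict local minimizer of the penalty for ANY rho > 0, because the
# square-root term grows faster than linearly just inside the infeasible region:
for rho in (0.1, 1.0, 10.0):
    assert lower_order_penalty(0.999, rho) > lower_order_penalty(1.0, rho)
    assert lower_order_penalty(1.001, rho) > lower_order_penalty(1.0, rho)

# For rho = 2 the penalty is also globally exact on this instance:
grid = [i / 1000.0 for i in range(-1000, 3001)]      # grid on [-1, 3]
x_pen = min(grid, key=lambda x: lower_order_penalty(x, rho=2.0))
print(x_pen)   # 1.0, the constrained minimizer
```

A classical ℓ1 penalty would require the parameter to exceed the Lagrange multiplier before becoming exact; the local exactness of the lower-order penalty for every positive parameter is what the ε-smoothing then makes computationally usable.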
Journal of Global Optimization | 2006
V. Jeyakumar; Zhiyou Wu; G. M. Lee; N. Dinh
In convex optimization, the significance of constraint qualifications is evidenced by the simple duality theory and the elegant subgradient optimality conditions which completely characterize a minimizer. However, constraint qualifications do not always hold even for finite dimensional optimization problems and frequently fail for infinite dimensional problems. In the present work we take a broader view of the subgradient optimality conditions by allowing them to depend on a sequence of ε-subgradients at a minimizer and then letting them hold in the limit. Liberating the optimality conditions in this way permits us to obtain a complete characterization of optimality without a constraint qualification. As an easy consequence of these results we obtain optimality conditions for conic convex optimization problems without a constraint qualification. We derive these conditions by applying a powerful combination of conjugate analysis and ε-subdifferential calculus. Numerical examples are discussed to illustrate the significance of the sequential conditions.
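The ε-subgradients on which the sequential conditions are built admit a simple worked example (hypothetical, not taken from the paper): for f(x) = x^2, a number v is an ε-subgradient at 0 iff y^2 >= v*y - ε for all y, which holds exactly when |v| <= 2*sqrt(ε). A numerical check:

```python
# Verify the eps-subdifferential of f(x) = x^2 at x = 0 on a test grid:
# v is an eps-subgradient iff f(y) >= f(0) + v*y - eps for all y.
import math

def is_eps_subgradient(v, eps, ys):
    return all(y * y >= v * y - eps for y in ys)

eps = 0.25
ys = [i / 100.0 for i in range(-500, 501)]       # test grid on [-5, 5]
v_max = 2.0 * math.sqrt(eps)                      # boundary value: 1.0

print(is_eps_subgradient(v_max, eps, ys))         # True: on the boundary
print(is_eps_subgradient(v_max + 0.1, eps, ys))   # False: outside [-1, 1]
```

Note that the ε-subdifferential [-1, 1] is a whole interval even though the exact subdifferential at 0 is the singleton {0}; this enlargement is what lets the sequential conditions hold without any constraint qualification.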
Journal of Global Optimization | 2015
Jueyou Li; Changzhi Wu; Zhiyou Wu; Qiang Long
In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend the (centralized) Nesterov's random gradient-free algorithm and Gaussian smoothing technique to the distributed case. Then, the convergence of the algorithm is proved. Furthermore, an explicit convergence rate is given in terms of the network size and topology. Our proposed method is free of gradient, which may be preferred by practical engineers. Since only the cost function value is required, our method may suffer a factor up to d (the dimension of the agent) in convergence rate over that of the distributed subgradient-based methods in theory. However, our numerical simulations show that for some nonsmooth problems, our method can even achieve better performance than that of subgradient-based methods, which may be caused by the slow convergence of subgradient-based methods.
Applied Mathematics and Computation | 2006
Y. H. Gu; Zhiyou Wu
Neurocomputing | 2016
Jueyou Li; Guo Chen; Zhao Yang Dong; Zhiyou Wu
Optimization | 2009
Zhiyou Wu; Fu-Sheng Bai
Journal of Global Optimization | 2007
Zhiyou Wu
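The Gaussian-smoothing gradient-free oracle underlying the 2015 distributed method of Li, Wu, Wu and Long can be sketched in the centralized, single-agent case. The test function, step sizes, and iteration count below are illustrative assumptions; the paper's contribution is the distributed, multi-agent extension of this building block:

```python
# Nesterov-style two-point gradient-free oracle via Gaussian smoothing:
# only function VALUES are used, never (sub)gradients.  The d-dependent
# variance of this estimate is the source of the factor-of-d discussed above.
import random

def f(x):
    return abs(x[0]) + abs(x[1])       # nonsmooth test function, minimum at 0

def gf_oracle(f, x, mu=1e-4):
    """Gradient-free estimate: sample u ~ N(0, I), difference f along u."""
    u = [random.gauss(0.0, 1.0) for _ in x]
    scale = (f([xi + mu * ui for xi, ui in zip(x, u)]) - f(x)) / mu
    return [scale * ui for ui in u]

random.seed(0)
x = [2.0, -3.0]
for k in range(1, 5001):
    g = gf_oracle(f, x)
    step = 0.05 / k ** 0.5             # diminishing step size
    x = [xi - step * gi for xi, gi in zip(x, g)]

print(f(x))                             # small: near the minimum at the origin
```

In the distributed setting each agent runs this oracle on its private cost and averages its iterate with its network neighbours, which is where the network size and topology enter the convergence rate.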