Changzhi Wu
Curtin University
Publications
Featured research published by Changzhi Wu.
Swarm and evolutionary computation | 2015
Qiang Long; Changzhi Wu; Tingwen Huang; Xiangyu Wang
In this paper, we propose a genetic algorithm for unconstrained multi-objective optimization. The multi-objective genetic algorithm (MOGA) is a direct method for multi-objective optimization problems. Compared with traditional multi-objective optimization methods, whose aim is to find a single Pareto solution, MOGA tends to find a representation of the whole Pareto frontier. When solving multi-objective optimization problems with a genetic algorithm, one needs to jointly consider the fitness, diversity and elitism of solutions. More specifically, in this paper the optimal sequence method is adapted to evaluate fitness; cell-based density and Pareto-based ranking are combined to achieve diversity; and the elitism of solutions is maintained by greedy selection. To compare the proposed method with others, a numerical performance evaluation system is developed. We test the proposed method on some well-known multi-objective benchmarks and compare its results with those of other MOGAs; the results show that the proposed method is robust and efficient.
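Two of the ingredients mentioned in this abstract, Pareto-based ranking and cell-based density, can be sketched generically as below. This is a minimal illustration, not the paper's exact procedure; the function names and the grid resolution `n_cells` are assumptions.

```python
import numpy as np

def pareto_ranks(F):
    """Pareto-based ranking for a minimization problem.

    F is an (n_points, n_objectives) array; a point's rank is the
    number of other points that dominate it (rank 0 = non-dominated).
    """
    n = F.shape[0]
    ranks = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                ranks[i] += 1
    return ranks

def cell_density(F, n_cells=5):
    """Cell-based density: grid the objective space and count points per cell."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
    idx = np.minimum(((F - lo) / span * n_cells).astype(int), n_cells - 1)
    counts = {}
    for cell in map(tuple, idx):
        counts[cell] = counts.get(cell, 0) + 1
    return np.array([counts[tuple(c)] for c in idx])
```

In a MOGA, low rank (elitism) and low cell density (diversity) would jointly drive selection.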
Journal of Global Optimization | 2015
Jueyou Li; Changzhi Wu; Zhiyou Wu; Qiang Long
In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend the (centralized) Nesterov random gradient-free algorithm and Gaussian smoothing technique to the distributed case. Then, the convergence of the algorithm is proved. Furthermore, an explicit convergence rate is given in terms of the network size and topology. Our proposed method is free of gradient computations, which may be preferred by practical engineers. Since only cost function values are required, our method may suffer a factor of up to d (the dimension of the agent) in the convergence rate compared with distributed subgradient-based methods in theory. However, our numerical simulations show that, for some nonsmooth problems, our method can even achieve better performance than subgradient-based methods, which may be caused by the slow convergence of those methods.
Journal of Optimization Theory and Applications | 2016
Jueyou Li; Zhiyou Wu; Changzhi Wu; Qiang Long; Xiangyu Wang
Mathematical Problems in Engineering | 2015
Qiang Long; Changzhi Wu; Xiangyu Wang; Lin Jiang; Jueyou Li
Optimization Letters | 2018
Jueyou Li; Guoquan Li; Zhiyou Wu; Changzhi Wu
Applied Mathematics and Computation | 2015
Qiang Long; Changzhi Wu; Xiangyu Wang
ANZIAM Journal | 2014
Jueyou Li; Changzhi Wu; Zhiyou Wu; Qiang Long; Xiangyu Wang
In this paper, a class of separable convex optimization problems with linearly coupled constraints is studied. Using Lagrangian duality, the linear coupled constraints are appended to the objective function. Then, a fast gradient-projection method is introduced to update the Lagrange multiplier, and an inexact solution method is proposed to solve the inner problems. The advantage of our proposed method is that the inner problems can be solved in an inexact and parallel manner. The established convergence results show that our algorithm still achieves the optimal convergence rate even though the inner problems are solved inexactly. Finally, several numerical experiments are presented to illustrate the efficiency and effectiveness of the proposed algorithm.
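The dual scheme described in this abstract can be sketched on a toy instance. The following is a minimal illustration, not the paper's algorithm: plain dual gradient ascent (rather than a fast gradient-projection method) for separable quadratics with a single linear coupling constraint, where each inner problem separates and is solved independently (here exactly; the paper allows inexact inner solves).

```python
import numpy as np

def dual_decomposition(c_list, b, step=0.2, iters=200):
    """Solve  min_x sum_i 0.5*||x_i - c_i||^2  s.t.  sum_i x_i = b
    by gradient ascent on the Lagrangian dual."""
    lam = np.zeros_like(b)
    for _ in range(iters):
        # "parallel" inner solves, one per block:
        # argmin_x 0.5*||x_i - c_i||^2 + lam.x_i  =  c_i - lam
        xs = [c - lam for c in c_list]
        # gradient of the dual function = coupling-constraint residual
        lam = lam + step * (sum(xs) - b)
    return xs, lam

c_list = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
b = np.array([0.0, 0.0])
xs, lam = dual_decomposition(c_list, b)
```

The returned blocks satisfy the coupling constraint in the limit, which is the sense in which the inner problems can run in parallel without ever forming the full problem.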
The 9th International Conference on Optimization: Techniques and Applications (ICOTA9) | 2015
Lin Jiang; Changzhi Wu; Xiangyu Wang; Kok Lay Teo
The multiobjective genetic algorithm (MOGA) is a direct search method for multiobjective optimization problems. It is based on the process of the genetic algorithm, whose population-based nature is well exploited in MOGAs. Compared with traditional multiobjective algorithms, whose aim is to find a single Pareto solution, a MOGA intends to identify a set of Pareto solutions. When solving multiobjective optimization problems with a genetic algorithm, one needs to consider the elitism and diversity of solutions. Normally, however, there are trade-offs between elitism and diversity; for some multiobjective problems, the two conflict with each other. Therefore, solutions obtained by MOGAs have to be balanced with respect to elitism and diversity. In this paper, we propose metrics to numerically measure the elitism and diversity of solutions, and the optimum order method is applied to identify solutions with better elitism and diversity metrics. We test the proposed method on some well-known benchmarks and compare its numerical performance with that of other MOGAs; the results show that the proposed method is efficient and robust.
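The kind of metrics discussed in this abstract can be illustrated with generic stand-ins (these are not the paper's proposed metrics): a spacing-style diversity measure and the fraction of non-dominated solutions as a crude elitism measure, both for minimization.

```python
import numpy as np

def diversity_spacing(F):
    """Spacing-style diversity metric: standard deviation of
    nearest-neighbour distances in objective space
    (0 means a perfectly even spread)."""
    n = F.shape[0]
    d = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if i != j:
                d[i] = min(d[i], np.linalg.norm(F[i] - F[j]))
    return float(np.std(d))

def elitism_ratio(F):
    """Fraction of solutions that are non-dominated (minimization)."""
    n = F.shape[0]
    nondom = 0
    for i in range(n):
        dominated = any(
            np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
            for j in range(n) if j != i
        )
        nondom += not dominated
    return nondom / n
```

The trade-off noted in the abstract shows up here directly: selecting only for a high non-dominated fraction can cluster the population, which worsens the spacing metric.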
Optimization and Control Methods in Industrial Engineering and Construction | 2014
Changzhi Wu; Xiangyu Wang; Kok Lay Teo; Lin Jiang
This paper considers a distributed optimization problem over a time-varying multi-agent network, where each agent has local access to its own convex objective function, and the agents cooperatively minimize the sum of these functions over the network. Based on the mirror descent method, we develop a distributed algorithm that utilizes subgradient information corrupted by stochastic errors. We first analyze the effects of the stochastic errors on the convergence of the algorithm and then provide an explicit bound on the convergence rate as a function of the error bound and the number of iterations. Our results show that the algorithm asymptotically converges to within an error level of the optimal value of the problem when there are stochastic errors in the subgradient evaluations. The proposed algorithm can be viewed as a generalization of distributed subgradient projection methods, since it utilizes a general Bregman divergence instead of the Euclidean squared distance. Finally, some simulation results on a regularized hinge regression problem are presented to illustrate the effectiveness of the algorithm.
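A minimal single-agent sketch of mirror descent with a non-Euclidean Bregman divergence and noisy subgradients, assuming the entropy divergence on the probability simplex (the exponentiated-gradient update) and a toy quadratic objective; the step size, noise model and iterate averaging are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_mirror_descent(grad, x0, steps=2000, eta=0.05, noise=0.01):
    """Mirror descent on the probability simplex with the entropy
    Bregman divergence, using subgradients corrupted by bounded
    zero-mean stochastic errors; returns the averaged iterate."""
    x = x0.copy()
    avg = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x) + noise * rng.uniform(-1, 1, size=x.shape)
        x = x * np.exp(-eta * g)
        x = x / x.sum()          # Bregman projection back onto the simplex
        avg += x
    return avg / steps

# toy objective: f(x) = ||x - p||^2 over the simplex, minimized at x = p
p = np.array([0.2, 0.3, 0.5])
grad = lambda x: 2 * (x - p)
x0 = np.ones(3) / 3
x_hat = noisy_mirror_descent(grad, x0)
```

The multiplicative update is what the Bregman-divergence generalization buys: the iterate stays on the simplex by construction, and the averaged iterate converges to within an error level set by the subgradient noise.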
Journal of Industrial and Management Optimization | 2013
Changzhi Wu; Chaojie Li; Qiang Long
In this paper, a subgradient method is developed to solve systems of (nonsmooth) equations. First, the system of (nonsmooth) equations is transformed into a nonsmooth optimization problem whose minimal objective function value is zero. Then, a subgradient method is applied to solve this nonsmooth optimization problem, with the known optimal objective function value used to update the step sizes. The corresponding convergence results are established as well. Several numerical experiments and applications show that the proposed method is efficient and robust.
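The transformation and step-size rule described above can be sketched as follows. The toy system is an assumption chosen for illustration, not from the paper: recast F(x) = 0 as minimizing f(x) = sum_i |F_i(x)|, whose known optimal value f* = 0 gives the Polyak-type step size alpha_k = f(x_k) / ||g_k||^2.

```python
import numpy as np

# Hypothetical nonsmooth system for illustration:
#   F1(x) = x1 + x2 - 3,   F2(x) = |x1 - x2| - 1
# with merit function f(x) = |F1(x)| + |F2(x)|, so f* = 0 is known.

def f(x):
    return abs(x[0] + x[1] - 3) + abs(abs(x[0] - x[1]) - 1)

def subgrad(x):
    """One valid subgradient of f at x (chain rule on the sign terms)."""
    g = np.sign(x[0] + x[1] - 3) * np.array([1.0, 1.0])
    g += np.sign(abs(x[0] - x[1]) - 1) * np.sign(x[0] - x[1]) * np.array([1.0, -1.0])
    return g

def polyak_subgradient(x0, iters=500, tol=1e-8):
    x = x0.copy()
    for _ in range(iters):
        fx = f(x)
        if fx < tol:
            break
        g = subgrad(x)
        if np.dot(g, g) == 0:            # the zero subgradient was selected: stop
            break
        x = x - (fx / np.dot(g, g)) * g  # Polyak step toward the known value f* = 0
    return x

x = polyak_subgradient(np.array([4.0, 0.0]))
```

Because f* = 0 is known a priori, no step-size tuning is needed; from the starting point above the iteration reaches a solution of the system, here (2, 1), in a couple of steps.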