Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiantao Xiao is active.

Publication


Featured research published by Xiantao Xiao.


Mathematical Methods of Operations Research | 2010

A perturbation approach for an inverse quadratic programming problem

Jianzhong Zhang; Liwei Zhang; Xiantao Xiao

We consider an inverse quadratic programming (QP) problem in which the parameters in both the objective function and the constraint set of a given QP problem need to be adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a linear complementarity constrained minimization problem with a positive semidefinite cone constraint. With the help of duality theory, we reformulate this problem as a linear complementarity constrained semismoothly differentiable (SC1) optimization problem with fewer variables than the original one. We propose a perturbation approach to solve the reformulated problem and demonstrate its global convergence. An inexact Newton method is constructed to solve the perturbed problem, and its global convergence and local quadratic convergence rate are shown. As the objective function of the problem is an SC1 function involving the projection operator onto the cone of positive semidefinite symmetric matrices, the analysis requires an implicit function theorem for semismooth functions as well as properties of the projection operator in the symmetric-matrix space. Since an approximate proximal point is required in the inexact Newton method, we also give a Newton method to obtain it. Finally, we report numerical results showing that the proposed approach is quite effective.
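
To fix ideas, a schematic inverse QP model of the kind described above can be written as follows; the norms, the constraint data (A, b) and the variable names are ours and only illustrative, not necessarily those used in the paper.

```latex
% Schematic inverse QP: perturb (G, c) as little as possible so that a given
% feasible point x_0 becomes optimal for the perturbed QP (illustrative form only).
\[
\begin{aligned}
\min_{G \in \mathbb{S}^{n}_{+},\; c \in \mathbb{R}^{n}} \quad
  & \tfrac{1}{2}\,\lVert G - G_0 \rVert_F^{2} + \tfrac{1}{2}\,\lVert c - c_0 \rVert^{2} \\
\text{s.t.} \quad
  & x_0 \in \operatorname*{arg\,min}_{x} \Big\{ \tfrac{1}{2}\, x^{\top} G x + c^{\top} x : A x \ge b \Big\}.
\end{aligned}
\]
```

Replacing the inner argmin by its KKT conditions turns the lower level into a linear complementarity system, which, together with the requirement that G be positive semidefinite, gives the linear complementarity constrained problem with a positive semidefinite cone constraint described in the abstract.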


Applied Mathematics and Computation | 2007

Two differential equation systems for equality-constrained optimization

Li Jin; Liwei Zhang; Xiantao Xiao

This paper presents two differential equation systems, involving first and second order derivatives of the problem functions, respectively, for solving equality-constrained optimization problems. Local minimizers of the optimization problems are proved to be asymptotically stable equilibrium points of the two differential systems. First, Euler discrete schemes with constant stepsizes for the two differential systems are presented and their convergence theorems are demonstrated. Second, we construct algorithms in which the directions are computed by these two systems and the stepsizes are generated by an Armijo line search to solve the original equality-constrained optimization problem. The constructed algorithms and the Runge–Kutta method are employed to solve the Euler discrete schemes and the differential equation systems, respectively. We prove that the discrete scheme based on the differential equation system with second order information has a locally quadratic convergence rate under a local Lipschitz condition. The numerical results show that the Runge–Kutta method has better stability and higher precision, and that the numerical method based on the differential equation system with second order information is faster than the other one.
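
The flavour of such a differential system and of its constant-stepsize Euler discretization can be seen on a small example. The first-order system below (a projected gradient flow plus a feasibility-restoration term) is a standard construction of this type; the toy problem and function names are ours, and the system is only a sketch, not the exact one analyzed in the paper.

```python
# Illustrative first-order differential system for equality-constrained
# optimization, discretized by Euler's method with a constant stepsize.
# Standard construction of this type; the paper's exact systems and this
# toy problem are not taken from the source.
import numpy as np

def f_grad(x):            # gradient of f(x) = (x1 - 2)^2 + (x2 - 1)^2
    return 2.0 * (x - np.array([2.0, 1.0]))

def h(x):                 # equality constraint h(x) = x1 + x2 - 1
    return np.array([x[0] + x[1] - 1.0])

def h_jac(x):             # Jacobian of h
    return np.array([[1.0, 1.0]])

def rhs(x):
    """Projected negative gradient plus a term driving h(x) to zero."""
    J = h_jac(x)
    JJt_inv = np.linalg.inv(J @ J.T)
    P = np.eye(len(x)) - J.T @ JJt_inv @ J   # projector onto the null space of J
    return -P @ f_grad(x) - J.T @ JJt_inv @ h(x)

x = np.array([3.0, 3.0])  # infeasible starting point
alpha = 0.1               # constant Euler stepsize
for _ in range(200):
    x = x + alpha * rhs(x)

print(x, h(x))  # x approaches (1, 0), the minimizer of f on the line x1 + x2 = 1
```

Along the continuous flow, h decays exponentially while the objective decreases in the tangent space of the constraint, which is what makes local minimizers asymptotically stable equilibrium points of systems of this kind.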


International Journal of Computer Mathematics | 2011

A perturbation approach for a type of inverse linear programming problems

Yong Jiang; Xiantao Xiao; Liwei Zhang; Jianzhong Zhang

We consider an inverse linear programming (LP) problem in which the parameters in both the objective function and the constraint set of a given LP problem need to be adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a linear complementarity constrained minimization problem. With the help of the smoothed Fischer–Burmeister function, we propose a perturbation approach to solve the inverse problem and demonstrate its global convergence. An inexact Newton method is constructed to solve the perturbed problem and numerical results are reported to show the effectiveness of the approach.
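
For context, the smoothed Fischer–Burmeister function referred to above is, in its standard form, the one sketched below; the exact scaling of the smoothing parameter used in the paper may differ.

```python
# Smoothed Fischer-Burmeister function (standard form):
#   phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2).
# For mu > 0 it is smooth everywhere, and phi_mu(a, b) = 0 iff a > 0, b > 0
# and a*b = mu^2; letting mu -> 0 recovers the complementarity condition
# a >= 0, b >= 0, a*b = 0, which is how it regularizes the LP optimality system.
import numpy as np

def smoothed_fb(a, b, mu):
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

print(smoothed_fb(1.0, 0.0, 0.0))   # 0.0: the pair (1, 0) is complementary
print(smoothed_fb(1.0, 1.0, 0.1))   # approx 0.579: a*b = 1 is far from mu^2
```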


Asia-Pacific Journal of Operational Research | 2008

A Class of Nonlinear Lagrangians: Theory and Algorithm

Liwei Zhang; Yong-Hong Ren; Yue Wu; Xiantao Xiao

This paper establishes a theoretical framework for a class of nonlinear Lagrangians for solving nonlinear programming problems with inequality constraints. A set of conditions is proposed to guarantee the convergence of nonlinear Lagrangian algorithms, to analyze the condition numbers of nonlinear Lagrangian Hessians, and to develop the dual approaches. These conditions are satisfied by well-known nonlinear Lagrangians appearing in the literature. The convergence theorem shows that, under a set of suitable conditions on the problem functions, the dual algorithm based on any nonlinear Lagrangian in the class is locally convergent when the penalty parameter is less than a threshold, and an error bound for the solution, depending on the penalty parameter, is also established. The paper also develops the dual problems based on the proposed nonlinear Lagrangians, and the related duality theorem and saddle point theorem are demonstrated. Furthermore, it is shown that the condition numbers of the Lagrangian Hessians at optimal solutions are proportional to the controlling penalty parameters. We report some numerical results obtained by using nonlinear Lagrangians.
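
One well-known member of such a class is the exponential multiplier Lagrangian, sketched below for a problem with inequality constraints; the particular Lagrangians and the parameterization of the penalty parameter analyzed in the paper may differ.

```latex
% Exponential multiplier Lagrangian (one classical example of a nonlinear
% Lagrangian; notation ours) for  min f(x)  s.t.  g_i(x) <= 0,  i = 1,...,m.
\[
L_t(x,\lambda) = f(x) + t \sum_{i=1}^{m} \lambda_i \Big( e^{\,g_i(x)/t} - 1 \Big),
\qquad
\lambda_i^{+} = \lambda_i \, e^{\,g_i(x_t)/t}, \quad i = 1,\dots,m,
\]
```

where x_t minimizes L_t(·, λ). Minimizing the Lagrangian in x and updating the multipliers by the displayed rule is the prototypical dual step whose local convergence, for penalty parameters below a threshold, is the kind of result the paper establishes for the whole class.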


Journal of Optimization Theory and Applications | 2014

A Smoothing Function Approach to Joint Chance-Constrained Programs

Feng Shan; Liwei Zhang; Xiantao Xiao

In this article, we consider a DC (difference of two convex functions) function approach for solving joint chance-constrained programs (JCCPs), which was first established by Hong et al. (Oper Res 59:617–630, 2011). They used a DC function to approximate the probability function and constructed a sequential convex approximation method to solve the approximation problem. However, the DC function they used was nondifferentiable. To alleviate this difficulty, we propose a class of smoothing functions to approximate the joint chance-constraint function, based on which smooth optimization problems are constructed to approximate the JCCP. We show that the solutions of a sequence of smoothing approximations converge to a Karush–Kuhn–Tucker point of the JCCP under a certain asymptotic regime. To implement the proposed method, four examples in the class of smoothing functions are explored. Moreover, the numerical experiments show that the method is comparable with existing approaches and effective.
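
For orientation, a joint chance-constrained program has the generic form below (our notation); the approximation schemes discussed in the abstract replace the probability function on the left-hand side of the chance constraint.

```latex
% Generic joint chance-constrained program (schematic form, notation ours):
\[
\min_{x \in X} \; f(x)
\qquad \text{s.t.} \qquad
\Pr\bigl\{\, c_j(x,\xi) \le 0,\; j = 1,\dots,m \,\bigr\} \ge 1 - \alpha,
\]
```

where xi is a random vector and alpha in (0,1) is the prescribed risk level; requiring all m constraints to hold jointly with high probability is what makes the constraint nonconvex and hard to evaluate.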


Computational Optimization and Applications | 2011

A class of nonlinear Lagrangians for nonconvex second order cone programming

Liwei Zhang; Jian Gu; Xiantao Xiao

This paper focuses on the study of a class of nonlinear Lagrangians for solving nonconvex second order cone programming problems. The nonlinear Lagrangians are generated by Löwner operators associated with convex real-valued functions. A set of conditions on the convex real-valued functions is proposed to guarantee the convergence of nonlinear Lagrangian algorithms. These conditions are satisfied by well-known nonlinear Lagrangians appearing in the literature. The convergence properties of the nonlinear Lagrange method are discussed when the subproblems are assumed to be solved exactly and inexactly, respectively. The convergence theorems show that, under the second order sufficient conditions with a sigma-term and the strict constraint nondegeneracy condition, the algorithm based on any of the nonlinear Lagrangians in the class is locally convergent when the penalty parameter is less than a threshold, and the error bound of the solution is proportional to the penalty parameter. Compared with the analysis of nonlinear Lagrangian methods for nonlinear programming, we have to deal with the sigma term in the convergence analysis. Finally, we report numerical results obtained by using the modified Frisch function, the modified Carroll function and the Log-Sigmoid function.
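
The Löwner operators mentioned here are built from the spectral decomposition associated with the second order cone. In standard notation (ours, not necessarily the paper's), for z = (z_1, z_2) in R x R^{n-1} with z_2 nonzero:

```latex
% Spectral decomposition with respect to the second order cone and the
% induced Löwner operator (standard definitions; notation ours).
\[
\lambda_i(z) = z_1 + (-1)^{i}\,\lVert z_2 \rVert, \qquad
u_i(z) = \frac{1}{2}\begin{pmatrix} 1 \\ (-1)^{i}\, z_2 / \lVert z_2 \rVert \end{pmatrix},
\quad i = 1,2,
\]
\[
\Phi_{\varphi}(z) = \varphi(\lambda_1(z))\, u_1(z) + \varphi(\lambda_2(z))\, u_2(z).
\]
```

Choosing the scalar function varphi to be, for example, the modified Frisch, modified Carroll or Log-Sigmoid function yields concrete nonlinear Lagrangians of the kind used in the numerical experiments reported above.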


Applied Mathematics and Computation | 2007

A nonlinear Lagrangian based on Fischer-Burmeister NCP function

Yong-Hong Ren; Liwei Zhang; Xiantao Xiao

This paper proposes a nonlinear Lagrangian based on the Fischer–Burmeister NCP function for solving nonlinear programming problems with inequality constraints. The convergence theorem shows that the sequence of points generated by this nonlinear Lagrange algorithm is locally convergent when the penalty parameter is less than a threshold under a set of suitable conditions on the problem functions, and the error bound of the solution, depending on the penalty parameter, is also established. Moreover, the paper develops the dual approach associated with the proposed nonlinear Lagrangian, in which the related duality theorem is demonstrated. Furthermore, it is shown that the condition number of the nonlinear Lagrangian Hessian at the optimal solution is proportional to the controlling penalty parameter. Numerical results for solving several nonlinear programming problems are reported, showing that the new nonlinear Lagrangian is superior to other known nonlinear Lagrangians for some nonlinear programming problems.
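
For reference, the Fischer–Burmeister NCP function underlying this Lagrangian is the standard one shown below; how it is combined with the multipliers and the penalty parameter to form the nonlinear Lagrangian is specific to the paper and not reproduced here.

```latex
% Fischer-Burmeister NCP function (standard definition):
\[
\phi_{\mathrm{FB}}(a,b) = \sqrt{a^{2} + b^{2}} - a - b,
\qquad
\phi_{\mathrm{FB}}(a,b) = 0 \iff a \ge 0,\; b \ge 0,\; ab = 0.
\]
```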


Optimization | 2016

Convergence analysis on a smoothing approach to joint chance constrained programs

F. Shan; Xiantao Xiao; Liwei Zhang

This paper aims to solve joint chance constrained programs (JCCPs) by a DC (difference of two convex functions) function approach, which was established by Hong et al. [Oper. Res. 2011;59:617–630]. They used a DC function to approximate the chance constraint function and constructed a sequential convex approximation method to solve the approximation problem. A disadvantage of this method is that the DC function they used is nonsmooth. In this article, we first propose a class of smoothing functions to approximate the maximum function and the indicator function. Then, we construct a conservative smooth DC approximation of the chance constraint function and obtain smooth DC approximation problems to JCCPs. We show that the solutions of a sequence of smooth approximation problems converge to a Karush–Kuhn–Tucker point of the JCCP under a certain asymptotic regime.
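
As a concrete illustration of the smoothing idea, the sketch below replaces the indicator 1{t > 0} of the worst constraint by a logistic surrogate inside a sample average. The logistic function is only one simple choice made for this example, not necessarily among the smoothing functions proposed in the paper, and the toy constraints are ours.

```python
# Illustrative smoothing of a joint chance constraint
#   P{ c_j(x, xi) <= 0, j = 1..m } >= 1 - alpha
# via a sample average in which the nonsmooth indicator 1{t > 0} is replaced
# by a smooth logistic surrogate (one simple example of a smoothing function).
import numpy as np

rng = np.random.default_rng(0)

def c(x, xi):
    """Toy joint constraints c_j(x, xi) <= 0 (our own example)."""
    return np.array([xi[0] - x[0], xi[1] - x[1]])

def smooth_violation_prob(x, samples, mu=0.05):
    worst = np.array([np.max(c(x, xi)) for xi in samples])   # max over j
    return np.mean(1.0 / (1.0 + np.exp(-worst / mu)))        # smooth 1{worst > 0}

samples = rng.normal(size=(1000, 2))
x = np.array([2.0, 2.0])
print(smooth_violation_prob(x, samples))  # approx P{max_j c_j(x, xi) > 0}
```

Shrinking the smoothing parameter mu tightens the surrogate toward the true indicator, which is the asymptotic regime under which convergence to a Karush–Kuhn–Tucker point is studied.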


Nonlinear Analysis-theory Methods & Applications | 2008

An algorithm based on resolvent operators for solving variational inequalities in Hilbert spaces

Juhe Sun; Liwei Zhang; Xiantao Xiao


Journal of Industrial and Management Optimization | 2012

A sequential convex program method to DC program with joint chance constraints

Xiantao Xiao; Jian Gu; Liwei Zhang; Shaowu Zhang

Collaboration


Dive into Xiantao Xiao's collaborations.

Top Co-Authors

Liwei Zhang (Dalian University of Technology)
Jian Gu (Dalian University of Technology)
Ning Zhang (Dalian University of Technology)
Yong-Hong Ren (Liaoning Normal University)
Jianzhong Zhang (City University of Hong Kong)
Juhe Sun (Dalian University of Technology)
Li Jin (Dalian University of Technology)