Jie Sun
National University of Singapore
Publications
Featured research published by Jie Sun.
Computational Optimization and Applications | 2003
X. D. Chen; Defeng Sun; Jie Sun
Two results on the second-order cone complementarity problem are presented. We show that the squared smoothing function is strongly semismooth. Under monotonicity and strict feasibility, we provide a new proof, based on a penalized natural complementarity function, that the solution set of the second-order cone complementarity problem is bounded. Numerical results of squared smoothing Newton algorithms are reported.
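The natural complementarity function mentioned above is built from the Euclidean projection onto the second-order cone, which has a simple closed form. A minimal sketch of that projection and the resulting residual function (this is not the paper's squared smoothing function, which additionally smooths this projection through a spectral decomposition):

```python
import numpy as np

def proj_soc(z):
    """Project z = (z0, zbar) onto the second-order cone
    K = {(x0, xbar) : ||xbar||_2 <= x0}."""
    z0, zbar = z[0], z[1:]
    nz = np.linalg.norm(zbar)
    if nz <= z0:            # already inside K
        return z.copy()
    if nz <= -z0:           # inside the negative dual cone: project to 0
        return np.zeros_like(z)
    coef = (z0 + nz) / 2.0  # intermediate case: closed-form projection
    return np.concatenate(([coef], coef * zbar / nz))

def natural_residual(x, y):
    """r(x, y) = x - Proj_K(x - y); r = 0 exactly at solutions of the
    second-order cone complementarity conditions."""
    return x - proj_soc(x - y)
```

For example, x = (1, 1, 0) and y = (1, -1, 0) are a complementary pair on the boundary of the cone, and the residual vanishes there.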
Computational Optimization and Applications | 2003
Sheng Xu; Robert M. Freund; Jie Sun
Given a set of circles C = {c1, ..., cn} on the Euclidean plane with centers {(a1, b1), ..., (an, bn)} and radii {r1, ..., rn}, the smallest enclosing circle (of fixed circles) problem is to find the circle of minimum radius that encloses all circles in C. We survey four known approaches for this problem, including a second-order cone reformulation, a subgradient approach, a quadratic programming scheme, and a randomized incremental algorithm. For the last algorithm we also give some implementation details. It turns out that the quadratic programming scheme outperforms the other three in our computational experiments.
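The problem admits a min-max form, min over x of max_i (||x - c_i|| + r_i), which the subgradient approach exploits. A rough sketch of such a subgradient method (the step-size rule and iteration count are illustrative choices, not taken from the paper):

```python
import numpy as np

def smallest_enclosing_circle(centers, radii, iters=2000):
    """Subgradient sketch for min_x max_i (||x - c_i|| + r_i).
    The optimal value is the radius of the smallest circle enclosing
    all given circles; the minimizer x is its center."""
    x = centers.mean(axis=0)                 # start at the centroid
    best_x, best_f = x.copy(), np.inf
    for k in range(1, iters + 1):
        d = np.linalg.norm(centers - x, axis=1) + radii
        i = np.argmax(d)                     # most violated circle
        if d[i] < best_f:                    # track the best point seen
            best_f, best_x = d[i], x.copy()
        g = (x - centers[i]) / max(np.linalg.norm(x - centers[i]), 1e-12)
        x = x - (1.0 / k) * g                # diminishing step size
    return best_x, best_f
```

For two unit circles centered at (0, 0) and (4, 0), the enclosing circle has center (2, 0) and radius 3.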
Computational Optimization and Applications | 2005
Guanglu Zhou; Kim-Chuan Toh; Jie Sun
Consider the problem of computing the smallest enclosing ball of a set of m balls in ℝn. Existing algorithms are known to be inefficient when n > 30. In this paper we develop two algorithms that are particularly suitable for problems where n is large. The first algorithm is based on log-exponential aggregation of the maximum function and reduces the problem to an unconstrained convex program. The second algorithm is based on a second-order cone programming formulation, with special structures taken into consideration. Our computational experiments show that both methods are efficient for large problems, with the product mn on the order of 10⁷. Using the first algorithm, we are able to solve problems with n = 100 and m = 512,000 in about 1 hour.
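Log-exponential aggregation replaces the max in f(x) = max_i (||x - c_i|| + r_i) by a smooth softmax-style upper bound. A sketch of the standard construction (details may differ from the paper's):

```python
import numpy as np

def smooth_max_radius(x, centers, radii, mu):
    """Log-exponential aggregation of f(x) = max_i (||x - c_i|| + r_i):
    f_mu(x) = mu * log( sum_i exp((||x - c_i|| + r_i) / mu) ).
    It is smooth, convex in x, and satisfies
    f(x) <= f_mu(x) <= f(x) + mu * log(m)."""
    vals = np.linalg.norm(centers - x, axis=1) + radii
    vmax = vals.max()                       # shift for numerical stability
    return vmax + mu * np.log(np.exp((vals - vmax) / mu).sum())
```

Minimizing f_mu for a small mu (or driving mu toward zero) thus approximately minimizes the true maximum, turning the problem into an unconstrained smooth convex program.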
Computational Optimization and Applications | 2014
Xiaojin Zheng; Xiaoling Sun; Duan Li; Jie Sun
In this paper we consider cardinality-constrained convex programs that minimize a convex function subject to a cardinality constraint and other linear constraints. This class of problems has found many applications, including portfolio selection, subset selection and compressed sensing. We propose a successive convex approximation method for this class of problems in which the cardinality function is first approximated by a piecewise linear DC function (difference of two convex functions) and a sequence of convex subproblems is then constructed by successively linearizing the concave terms of the DC function. Under some mild assumptions, we establish that any accumulation point of the sequence generated by the method is a KKT point of the DC approximation problem. We show that the basic algorithm can be refined by adding strengthening cuts in the subproblems. Finally, we report some preliminary computational results on cardinality-constrained portfolio selection problems.
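One common piecewise linear DC surrogate of the cardinality function is phi_t(x) = sum_i min(|x_i| / t, 1) for a parameter t > 0; it tends to card(x) as t decreases to 0 and splits into a difference of two convex piecewise linear functions. This is a standard construction used here for illustration; the paper's exact approximation may differ:

```python
import numpy as np

def dc_cardinality(x, t):
    """Piecewise-linear DC surrogate of card(x):
    phi_t(x) = (1/t) * ( ||x||_1 - sum_i max(|x_i| - t, 0) )
             = sum_i min(|x_i| / t, 1).
    g is convex, h is convex; phi_t = g - h is their difference,
    so successive linearization of h yields convex subproblems."""
    ax = np.abs(x)
    g = ax.sum() / t                        # convex part
    h = np.maximum(ax - t, 0.0).sum() / t   # concave part enters with a minus
    return g - h
```

For instance, with x = (0, 0.5, 2) and t = 1 the surrogate evaluates to 1.5, and for very small t it approaches the true cardinality 2.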
Computational Optimization and Applications | 2000
Zhi-Quan Luo; Jie Sun
Consider a nonempty convex set in ℝm which is defined by a finite number of smooth convex inequalities and which admits a self-concordant logarithmic barrier. We study the analytic center based column generation algorithm for the problem of finding a feasible point in this set. At each iteration the algorithm computes an approximate analytic center of the set defined by the inequalities generated in the previous iterations. If this approximate analytic center is a solution, then the algorithm terminates; otherwise either an existing inequality is shifted or a new inequality is added into the system. As the number of iterations increases, the set defined by the generated inequalities shrinks and the algorithm eventually finds a solution of the problem. The algorithm can be thought of as an extension of the classical cutting plane method. The difference is that we use analytic centers and “convex cuts” instead of arbitrary infeasible points and linear cuts. In contrast to the cutting plane method, the algorithm has a polynomial worst-case complexity of O(N log(1/ε)) on the total number of cuts to be used, where N is the number of convex inequalities in the original problem and ε is the maximum common slack of the original inequality system.
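The analytic center of a system of inequalities is computed by Newton's method on the logarithmic barrier. A damped-Newton sketch for the special case of linear inequalities A x < b (the paper handles general smooth convex inequalities and convex cuts):

```python
import numpy as np

def analytic_center(A, b, x0, iters=50):
    """Newton's method for the analytic center of {x : A x < b}:
    minimize the log barrier -sum_i log(b_i - a_i' x),
    with step halving to keep the iterates strictly feasible."""
    x = x0.astype(float)
    for _ in range(iters):
        s = b - A @ x                        # slacks, must stay positive
        g = A.T @ (1.0 / s)                  # barrier gradient
        H = A.T @ np.diag(1.0 / s**2) @ A    # barrier Hessian
        dx = np.linalg.solve(H, -g)
        t = 1.0
        while np.min(b - A @ (x + t * dx)) <= 0:
            t *= 0.5                         # backtrack into the interior
        x = x + t * dx
    return x
```

For the unit box |x1| <= 1, |x2| <= 1 the analytic center is the origin, which the iteration recovers from any strictly feasible start.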
Optimization Methods & Software | 2006
Jie Sun; Zheng-Hai Huang
By using a smoothing function, the linear complementarity problem (LCP) can be reformulated as a parameterized smooth equation. A Newton method with a projection-type testing procedure is proposed to solve this equation. We show that, for the LCP with a sufficient matrix, the iteration sequence generated by the proposed algorithm is bounded as long as the LCP has a solution. This assumption is weaker than the ones used in most existing smoothing algorithms. Moreover, we show that the proposed algorithm can find a maximally complementary solution to the LCP in a finite number of iterations.
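A minimal sketch of the smoothing idea, using the CHKS function phi_mu(a, b) = a + b - sqrt((a - b)^2 + 4 mu^2), which at mu = 0 vanishes exactly on complementary pairs a, b >= 0 with a b = 0. The projection-type testing procedure and the finite-termination analysis of the paper are omitted; this is only the basic smoothing Newton loop under an aggressive mu-reduction rule chosen for illustration:

```python
import numpy as np

def chks_smoothing_newton(M, q, mu=1.0, iters=60):
    """Smoothing Newton sketch for the LCP: find z >= 0 with
    w = M z + q >= 0 and z' w = 0, by Newton steps on
    F_i(z) = z_i + w_i - sqrt((z_i - w_i)^2 + 4 mu^2) while mu -> 0."""
    n = len(q)
    z = np.ones(n)
    for _ in range(iters):
        w = M @ z + q
        d = np.sqrt((z - w) ** 2 + 4 * mu**2)
        F = z + w - d
        # Jacobian: diag(1 - (z-w)/d) + diag(1 + (z-w)/d) @ M
        s = (z - w) / d
        J = np.diag(1.0 - s) + (1.0 + s)[:, None] * M
        z = z - np.linalg.solve(J, F)
        mu *= 0.5                     # drive the smoothing parameter to zero
    return z
```

For M = [[2, 1], [1, 2]] and q = (-3, -3) the LCP solution is z = (1, 1) with w = 0, and the iteration converges to it.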
Computational Optimization and Applications | 1999
Gongyun Zhao; Jie Sun
A simple and unified analysis is provided on the rate of local convergence for a class of high-order infeasible-path-following algorithms for the P*-linear complementarity problem (P*-LCP). It is shown that the rate of local convergence of a ν-order algorithm with a centering step is ν + 1 if there is a strictly complementary solution and (ν + 1)/2 otherwise. For the ν-order algorithm without the centering step the corresponding rates are ν and ν/2, respectively. The algorithm without a centering step does not follow the fixed traditional central path. Instead, at each iteration, it follows a new analytic path connecting the current iterate with an optimal solution to generate the next iterate. An advantage of this algorithm is that it does not restrict iterates to a sequence of contracting neighborhoods of the central path.
Computational Optimization and Applications | 2011
Fanwen Meng; Jie Sun; Mark Goh
This paper is concerned with solving single CVaR and mixed CVaR minimization problems. A CHKS-type smoothing sample average approximation (SAA) method is proposed for solving these two problems, which retains the convexity and smoothness of the original problem and is easy to implement. For any fixed smoothing constant ε, this method produces a sequence whose cluster points are weak stationary points of the CVaR optimization problems with probability one. This framework of combining smoothing technique and SAA scheme can be extended to other smoothing functions as well. Practical numerical examples arising from logistics management are presented to show the usefulness of this method.
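CVaR minimization is usually handled through the Rockafellar-Uryasev formula, whose sample-average form involves the nonsmooth plus function max(t, 0); a CHKS-type smoothing replaces that plus function by a smooth approximation. A sketch of both ingredients (standard constructions, assumed rather than taken from the paper):

```python
import numpy as np

def cvar_saa(losses, beta):
    """Sample-average (SAA) estimate of CVaR via Rockafellar-Uryasev:
    CVaR_beta = min_alpha alpha + mean(max(L - alpha, 0)) / (1 - beta),
    where the minimizing alpha is the empirical beta-quantile (VaR)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    n = len(losses)
    alpha = losses[int(np.ceil(beta * n)) - 1]   # empirical VaR
    return alpha + np.maximum(losses - alpha, 0).mean() / (1 - beta)

def smoothed_plus(t, eps):
    """CHKS-type smoothing of max(t, 0):
    p_eps(t) = (t + sqrt(t^2 + 4 eps^2)) / 2, smooth for eps > 0
    and converging to max(t, 0) as eps -> 0."""
    return (t + np.sqrt(t**2 + 4 * eps**2)) / 2
```

Substituting smoothed_plus for the plus function inside cvar_saa yields a smooth, convex objective in alpha (and in the portfolio variables, in the optimization setting), which is the structure the SAA smoothing method exploits.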
Optimization Methods & Software | 2016
Xueting Cui; Shushang Zhu; Duan Li; Jie Sun
The mean–variance (MV) portfolio selection model, which aims to maximize the expected return while minimizing the risk measured by the variance, has been studied extensively in the literature and regarded as a powerful guiding principle in investment practice. Recognizing the importance of reducing the impact of parameter estimation error on the optimal portfolio strategy, we integrate a set of parameter sensitivity constraints into the traditional MV model, which can also be interpreted as a model with marginal risk control on assets. The resulting optimization framework is a quadratic programming problem with non-convex quadratic constraints. By exploiting the special structure of the non-convex constraints, we propose a convex quadratic programming relaxation and develop a branch-and-bound global optimization algorithm. A significant feature of our algorithm is its special branching rule applied to the imposed auxiliary variables, which are of lower dimension than the original decision variables. Our simulation analysis and empirical test demonstrate the pros and cons of the proposed MV model with sensitivity control and indicate the cases where sensitivity control is necessary and beneficial. Our branch-and-bound procedure is shown to be favourable in computational efficiency compared with the commercial global optimization software BARON.
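For context, the classical equality-constrained MV subproblem is a convex QP solvable through its KKT linear system. A sketch of that baseline, together with the marginal risk vector that the sensitivity constraints control (the paper's model adds non-convex quadratic constraints on top of this and requires branch-and-bound):

```python
import numpy as np

def mv_portfolio(Sigma, mu, target_return):
    """Equality-constrained Markowitz sketch:
    minimize x' Sigma x / 2  s.t.  mu' x = target_return, sum(x) = 1,
    solved via the KKT system [[Sigma, A'], [A, 0]] [x; nu] = [0; b]."""
    n = len(mu)
    A = np.vstack([mu, np.ones(n)])              # 2 x n constraint matrix
    K = np.block([[Sigma, A.T], [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n), [target_return, 1.0]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                               # portfolio weights

def marginal_risk(Sigma, x):
    """Marginal contribution of each asset to portfolio variance,
    the quantity the sensitivity/marginal-risk constraints bound."""
    return Sigma @ x
```

With an identity covariance and three assets of expected returns (0.1, 0.2, 0.3), a target return of 0.2 yields the equally weighted portfolio.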
Optimization Methods & Software | 2013
Jie Sun; Xiaoling Sun; Ya-Xiang Yuan
The goal of ICOTA is to provide an international forum for scientists, researchers, software developers, and practitioners to exchange ideas and approaches, to present research findings and solution methodologies, to share experiences on potentials and limits, and to open new avenues of research and development on all issues and topics related to optimization and its applications. The first ICOTA was held in 1987 in Singapore. Starting from 1992, ICOTA has been held regularly once every three years. ICOTA is now an official conference series of the Pacific Optimization Research Activity Group.