

Publication


Featured research published by Hoai An Le Thi.


Transactions on Computational Intelligence XIII - Volume 8342 | 2013

Recent Advances in DC Programming and DCA

Tao Pham Dinh; Hoai An Le Thi

Difference of Convex functions (DC) programming and the DC Algorithm (DCA) constitute the backbone of nonconvex programming and global optimization. This paper is devoted to the state of the art and recent advances in DC programming and DCA, which meet the growing need for nonconvex and global optimization both in terms of mathematical modeling and in terms of efficient, scalable solution methods. After a brief summary of these theoretical and algorithmic tools, we outline the main results on the convergence of DCA in DC programming with subanalytic data, exact penalty techniques with/without error bounds in DC programming (including mixed-integer DC programming), DCA for general DC programs, and DC programming involving the ℓ0-norm via its approximation and penalization.
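As a hedged illustration of the scheme this abstract refers to (our sketch, not the authors' code): for a DC function f = g − h with g, h convex, DCA alternates a subgradient step y_k ∈ ∂h(x_k) with minimization of the convex surrogate g(x) − ⟨y_k, x⟩, which majorizes f up to a constant, so objective values decrease monotonically.

```python
# Minimal DCA sketch for a one-dimensional DC program (illustrative only,
# not the authors' implementation). We minimize
#   f(x) = x**4 - 8*x**2  =  g(x) - h(x),  g(x) = x**4,  h(x) = 8*x**2,
# where both g and h are convex. Each DCA step linearizes h at the current
# iterate and minimizes the convex surrogate g(x) - h'(x_k) * x.

def dca(x0, max_iter=100, tol=1e-8):
    x = x0
    for _ in range(max_iter):
        y = 16.0 * x                      # y in dh(x): h'(x) = 16 x
        # argmin_x g(x) - y*x solves the first-order condition 4 x**3 = y
        x_new = (y / 4.0) ** (1.0 / 3.0) if y >= 0 else -((-y / 4.0) ** (1.0 / 3.0))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(dca(x0=1.0))   # converges to x = 2.0, a critical point of f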


Journal of Global Optimization | 2012

Exact penalty and error bounds in DC programming

Hoai An Le Thi; Tao Pham Dinh; Huynh Van Ngai

In the present paper, we are concerned with conditions ensuring exact penalty for nonconvex programming. Firstly, we consider problems with concave objective and constraints. Secondly, we establish various results on error bounds for systems of DC inequalities and on exact penalty, with/without error bounds, in DC programming. These results permit recasting several classes of difficult nonconvex programs as suitable DC programs that can be tackled by the efficient DCA.
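Schematically (our notation, not taken verbatim from the paper), exact penalty replaces a constrained DC program by a penalized one that shares its optimal solutions for a finite penalty parameter:

```latex
% Schematic exact penalty reformulation (our notation; illustration only).
% Constrained problem:   \min \{ f(x) : x \in K,\; p(x) \le 0 \}
% Penalized problem, for a finite \tau exceeding some threshold \tau_0:
\min_{x \in K} \; f(x) + \tau \max\{0,\, p(x)\}
% An error bound  d(x, S) \le c \max\{0,\, p(x)\}  for all x \in K
% (S the feasible set, c > 0) is the standard ingredient for proving
% that the two problems have the same optimal solutions.
```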


Advanced Data Analysis and Classification | 2008

A DC programming approach for feature selection in support vector machines learning

Hoai An Le Thi; Hoai Minh Le; Van Vinh Nguyen; Tao Pham Dinh

Feature selection consists of choosing a subset of the available features that captures the relevant properties of the data. In supervised pattern classification, a good choice of features is fundamental for building compact and accurate classifiers. In this paper, we develop an efficient feature selection method using the zero-norm (ℓ0) in the context of support vector machines (SVMs). The discontinuity of ℓ0 at the origin makes the corresponding optimization problem difficult to solve. To overcome this drawback, we use a robust DC (difference of convex functions) programming approach, a general framework for nonconvex continuous optimization. We consider an appropriate continuous approximation to ℓ0 such that the resulting problem can be formulated as a DC program. Our DC algorithm (DCA) converges after finitely many iterations and requires solving one linear program at each iteration. Computational experiments on standard datasets, including the challenging NIPS 2003 feature selection challenge and gene selection for cancer classification, show that the proposed method is promising: while suppressing more than 99% of the features, it still provides good classification. Moreover, the comparative results illustrate the superiority of the proposed approach over standard methods such as classical SVMs and feature selection concave (FSV).
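As a hedged sketch (our reconstruction under common choices, not the authors' code): with the concave exponential surrogate Σ_j (1 − e^(−α|w_j|)) for the zero-norm, each DCA iteration reduces to a weighted-ℓ1 SVM, i.e. one linear program, with weights α·e^(−α|w_j^k|) obtained by linearizing the concave part. The parameter values and the exact LP formulation below are illustrative assumptions.

```python
# Hedged sketch of a DCA-style reweighted-l1 scheme for sparse linear SVMs
# (our reconstruction; the paper's approximation and LP may differ).
import numpy as np
from scipy.optimize import linprog

def sparse_svm_dca(X, y, C=1.0, lam=0.1, alpha=5.0, iters=10):
    n, d = X.shape
    # variables z = [w (d), b (1), xi (n), v (d)], with v_j >= |w_j|
    w = np.zeros(d)
    for _ in range(iters):
        r = alpha * np.exp(-alpha * np.abs(w))       # DCA reweighting
        c = np.concatenate([np.zeros(d + 1), C * np.ones(n), lam * r])
        # margin constraints: -y_i (w.x_i + b) - xi_i <= -1
        A1 = np.hstack([-y[:, None] * X, -y[:, None],
                        -np.eye(n), np.zeros((n, d))])
        # |w_j| <= v_j  encoded as  w - v <= 0  and  -w - v <= 0
        A2 = np.hstack([np.eye(d), np.zeros((d, 1)),
                        np.zeros((d, n)), -np.eye(d)])
        A3 = np.hstack([-np.eye(d), np.zeros((d, 1)),
                        np.zeros((d, n)), -np.eye(d)])
        A_ub = np.vstack([A1, A2, A3])
        b_ub = np.concatenate([-np.ones(n), np.zeros(2 * d)])
        bounds = ([(None, None)] * (d + 1)          # w, b free
                  + [(0, None)] * (n + d))          # xi, v >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        w = res.x[:d]
    return w, res.x[d]
```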


Journal of Global Optimization | 2010

An efficient combined DCA and B&B using DC/SDP relaxation for globally solving binary quadratic programs

Tao Pham Dinh; Nam Nguyen Canh; Hoai An Le Thi

This paper addresses a new continuous approach, based on DC (Difference of Convex functions) programming and DC algorithms (DCA), to binary quadratic programs (BQP), which play a key role in combinatorial optimization. DCA is completely different from other available methods: it generates a convergent finite sequence of feasible binary solutions (obtained by solving linear programs with the same constraint set) with decreasing objective values. DCA is quite simple and inexpensive, and is thus suited to large-scale problems. In particular, DCA is explicit, requiring only matrix-vector products for unconstrained binary quadratic programs (UBQP), and can therefore exploit sparsity in the large-scale setting. To check the globality of solutions computed by DCA, we introduce its combination with a customized Branch-and-Bound scheme using DC/SDP relaxation. The combined algorithm checks the globality of solutions computed by DCA and restarts it if necessary, and consequently accelerates the B&B approach. Numerical results on several series of test problems from the OR-Library (Beasley in J Global Optim 8:429–433, 1996) show the robustness and efficiency of our algorithm with respect to standard methods. In particular, DCA provides ϵ-optimal solutions in almost all cases after only one restart, and the combined DCA-B&B-SDP always provides (ϵ-)optimal solutions.
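A hedged sketch of the kind of explicit DCA the abstract describes (our reconstruction via the common proximal DC decomposition; the paper's scheme may differ in its details):

```python
# Hedged sketch of an explicit DCA for unconstrained binary quadratic
# programs min { x^T Q x : x in {0,1}^n }. The binary constraint is moved
# into the objective by an exact penalty t * sum_i x_i (1 - x_i); the
# proximal DC decomposition  g = (rho/2)||x||^2 + indicator([0,1]^n),
# h = (rho/2)||x||^2 - f(x)  makes each DCA step a projection, so only
# matrix-vector products with Q are needed. (Our reconstruction, not
# necessarily the exact scheme of the paper.)
import numpy as np

def dca_ubqp(Q, t=10.0, rho=None, iters=500):
    n = Q.shape[0]
    if rho is None:
        # any rho bounding the Hessian 2Q - 2tI keeps h convex
        rho = 2.0 * np.linalg.norm(Q, 2) + 2.0 * t
    x = np.full(n, 0.5)
    for _ in range(iters):
        grad = 2.0 * Q @ x + t * (1.0 - 2.0 * x)   # gradient of penalized f
        x = np.clip(x - grad / rho, 0.0, 1.0)      # DCA step = projection
    return np.round(x)                              # iterates approach {0,1}^n
```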


Neural Networks | 2014

Feature selection for linear SVMs under uncertain data: Robust optimization based on difference of convex functions algorithms

Hoai An Le Thi; Xuan Thanh Vo; Tao Pham Dinh

In this paper, we consider the problem of feature selection for linear SVMs on uncertain data, a setting inherently prevalent in almost all datasets. Using principles of robust optimization, we propose robust schemes to handle data under ellipsoidal and box models of uncertainty. The difficulty of treating the ℓ0-norm in the feature selection problem is overcome by using appropriate approximations together with Difference of Convex functions (DC) programming and DC Algorithms (DCA). The computational results show that the proposed robust optimization approaches are superior to a traditional approach in immunizing against perturbations of the data.
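For the box model, the standard robust counterpart (our notation; the paper's formulation may differ) replaces each margin constraint by its worst case over the uncertainty set, which introduces a weighted ℓ1 term:

```latex
% Robust counterpart of the SVM margin constraint under box uncertainty
% (standard construction, our notation; illustration only).
% If each point is only known up to x_i \in \bar{x}_i + \prod_j [-\delta_j, \delta_j],
% requiring the margin for every admissible perturbation gives
y_i \,\big(\langle w, \bar{x}_i\rangle + b\big) \;-\; \sum_{j=1}^{d} \delta_j\, |w_j| \;\ge\; 1 - \xi_i ,
% i.e. the worst case adds a weighted \ell_1 term in w, which combines
% naturally with the sparsity-inducing \ell_0 surrogates handled by DCA.
```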


Machine Learning | 2015

Feature selection in machine learning: an exact penalty approach using a Difference of Convex function Algorithm

Hoai An Le Thi; Hoai Minh Le; Tao Pham Dinh

We develop an exact penalty approach for feature selection in machine learning via the zero-norm ℓ0-regularization problem. Using a new result on exact penalty techniques, we reformulate the original problem equivalently as a Difference of Convex (DC) functions program. This approach permits us to consider all the existing convex and nonconvex approximation approaches to the zero-norm in a unified view within the DC programming and DCA framework. An efficient DCA scheme is investigated for the resulting DC program. The algorithm is implemented for feature selection in SVMs, requires solving one linear program at each iteration, and enjoys interesting convergence properties. We perform an empirical comparison with some nonconvex approximation approaches and show, on several datasets from the UCI database and the challenging NIPS 2003 feature selection challenge, that the proposed algorithm is efficient in both feature selection and classification.
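Schematically (our notation, hedged; the paper's formulation may differ in detail), the exact penalty route to the ℓ0 term introduces binary indicator variables and then penalizes their binarity:

```latex
% Schematic exact-penalty reformulation of the l0 term (our notation;
% illustration only). On a bounded feasible set with |w_j| <= M,
%   \|w\|_0 = \min \big\{ \textstyle\sum_j u_j : u \in \{0,1\}^d,\ |w_j| \le M u_j \big\},
% and relaxing u to [0,1]^d while penalizing binarity yields the DC program
\min_{w,\; u \in [0,1]^d} \; F(w) + \lambda \sum_{j} u_j + t \sum_{j} u_j (1 - u_j)
\quad \text{s.t. } |w_j| \le M u_j ,
% which is equivalent to the original problem for some finite penalty
% parameter t and is then amenable to DCA.
```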


Discrete Applied Mathematics | 2008

A continuous approach for the concave cost supply problem via DC programming and DCA

Hoai An Le Thi; Tao Pham Dinh


ICCSAMA | 2014

DC Programming and DCA for General DC Programs

Hoai An Le Thi; Van Ngai Huynh; Tao Pham Dinh



Journal of Global Optimization | 2013

Binary classification via spherical separator by DC programming and DCA

Hoai An Le Thi; Hoai Minh Le; Tao Pham Dinh; Ngai Van Huynh


Optimization | 2009

DC programming approach for portfolio optimization under step increasing transaction costs

Hoai An Le Thi; Mahdi Moeini; Tao Pham Dinh


Collaboration


Dive into Hoai An Le Thi's collaborations.

Top Co-Authors

Ahmed Zidna, University of Lorraine
Duc Manh Nguyen, Hanoi National University of Education
Mahdi Moeini, Kaiserslautern University of Technology