Publications


Featured research published by Johan A. K. Suykens.


Neural Processing Letters | 1999

Least Squares Support Vector Machine Classifiers

Johan A. K. Suykens; Joos Vandewalle

In this letter we discuss a least squares version of support vector machine (SVM) classifiers. Due to equality-type constraints in the formulation, the solution follows from solving a set of linear equations instead of the quadratic programming required for classical SVMs. The approach is illustrated on a two-spiral benchmark classification problem.
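
As a rough illustration of that linear-system view (a sketch, not the authors' code): with labels in {-1, +1} and an RBF kernel, training reduces to a single call to a linear solver. All names and default values below are illustrative.

import numpy as np

def rbf(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of points
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # Solve the dual system [0, y^T; y, Omega + I/gamma] [b; alpha] = [0; 1]
    # in place of the quadratic program of a classical SVM.
    N = len(y)
    Omega = np.outer(y, y) * rbf(X, X, sigma)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(N) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, np.ones(N)])
    return sol[1:], sol[0]  # support values alpha and bias b

def lssvm_predict(X, y, alpha, b, Xnew, sigma=1.0):
    # decision function: sign of sum_i alpha_i y_i K(x, x_i) + b
    return np.sign(rbf(Xnew, X, sigma) @ (alpha * y) + b)

On the two-spiral benchmark, one would train on the spiral points and evaluate the sign of the decision function on a grid to visualize the boundary.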


Neurocomputing | 2002

Weighted least squares support vector machines: Robustness and sparse approximation

Johan A. K. Suykens; J. De Brabanter; Lukas Lukas; Joos Vandewalle

Least squares support vector machines (LS-SVMs) are an SVM variant which involves equality instead of inequality constraints and works with a least squares cost function. In this way, the solution follows from a linear Karush–Kuhn–Tucker system instead of a quadratic programming problem. However, sparseness is lost in the LS-SVM case and the estimation of the support values is only optimal in the case of a Gaussian distribution of the error variables. In this paper, we discuss a method which can overcome these two drawbacks. We show how to obtain robust estimates for regression by applying a weighted version of LS-SVM. We also discuss a sparse approximation procedure for weighted and unweighted LS-SVM. It is basically a pruning method which prunes based upon the physical meaning of the sorted support values, whereas pruning procedures for classical multilayer perceptrons require the computation of a Hessian matrix or its inverse. The methods of this paper are illustrated for RBF kernels and demonstrate how to obtain robust estimates with selection of an appropriate number of hidden units, in the case of outliers or non-Gaussian error distributions with heavy tails.
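
The two-step reweighting is easy to sketch. In the sketch below (illustrative names and constants, not necessarily the paper's exact recipe), the error variables follow from the support values as e_i = alpha_i / gamma, a robust scale estimate flags outliers, and the model is retrained with per-sample weights v_i.

import numpy as np

def lssvm_reg_train(K, y, gamma=10.0, v=None):
    # LS-SVM regression dual: [0, 1^T; 1, K + diag(1/(gamma*v))] [b; alpha] = [0; y]
    N = len(y)
    v = np.ones(N) if v is None else v
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * v))
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[1:], sol[0]  # alpha, b

def robust_weights(alpha, gamma, c1=2.5, c2=3.0):
    # error variables follow from the support values: e_i = alpha_i / gamma
    e = alpha / gamma
    s = (np.percentile(e, 75) - np.percentile(e, 25)) / 1.349  # robust scale from the IQR
    r = np.abs(e / s)
    # full weight for inliers, tapered weight for suspect points,
    # near-zero weight for gross outliers
    return np.where(r <= c1, 1.0,
                    np.where(r <= c2, (c2 - r) / (c2 - c1), 1e-4))

# two-step use, given a kernel matrix K and targets y:
# alpha, b = lssvm_reg_train(K, y)
# alpha_w, b_w = lssvm_reg_train(K, y, v=robust_weights(alpha, gamma=10.0))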


Machine Learning | 2004

Benchmarking Least Squares Support Vector Machine Classifiers

Tony Van Gestel; Johan A. K. Suykens; Bart Baesens; Stijn Viaene; Jan Vanthienen; Guido Dedene; Bart De Moor; Joos Vandewalle

In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of equations in the dual space. While the SVM classifier has a large margin interpretation, the LS-SVM formulation is related in this paper to a ridge regression approach for classification with binary targets and to Fisher's linear discriminant analysis in the feature space. Multiclass categorization problems are represented by a set of binary classifiers using different output coding schemes. While regularization is used to control the effective number of parameters of the LS-SVM classifier, the sparseness property of SVMs is lost due to the choice of the 2-norm. Sparseness can be imposed in a second stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public domain benchmark datasets are used to evaluate the test set performance of LS-SVM classifiers with linear, polynomial and radial basis function (RBF) kernels. Both the SVM and LS-SVM classifier with RBF kernel in combination with standard cross-validation procedures for hyperparameter selection achieve comparable test set performances. These SVM and LS-SVM performances are consistently very good when compared to a variety of methods described in the literature including decision tree based algorithms, statistical algorithms and instance based learning methods. We show on ten UCI datasets that the LS-SVM sparse approximation procedure can be successfully applied.
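
The cross-validation loop behind the hyperparameter selection is standard. A minimal sketch, assuming the illustrative lssvm_train and lssvm_predict helpers sketched under the 1999 letter above:

import numpy as np
from itertools import product

def cv_accuracy(X, y, gamma, sigma, k=10, seed=0):
    # k-fold cross-validation accuracy for one (gamma, sigma) pair
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        alpha, b = lssvm_train(X[tr], y[tr], gamma, sigma)
        pred = lssvm_predict(X[tr], y[tr], alpha, b, X[te], sigma)
        accs.append(np.mean(pred == y[te]))
    return np.mean(accs)

def grid_search(X, y, gammas, sigmas):
    # exhaustive search over a small hyperparameter grid
    return max(product(gammas, sigmas),
               key=lambda gs: cv_accuracy(X, y, *gs))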


Journal of the Operational Research Society | 2003

Benchmarking state-of-the-art classification algorithms for credit scoring

Bart Baesens; T. Van Gestel; Stijn Viaene; M. Stepanova; Johan A. K. Suykens; Jan Vanthienen

In this paper, we study the performance of various state-of-the-art classification algorithms applied to eight real-life credit scoring data sets. Some of the data sets originate from major Benelux and UK financial institutions. Different types of classifiers are evaluated and compared. Besides the well-known classification algorithms (e.g., logistic regression, discriminant analysis, k-nearest neighbour, neural networks and decision trees), this study also investigates the suitability and performance of some recently proposed, advanced kernel-based classification algorithms such as support vector machines and least-squares support vector machines (LS-SVMs). The performance is assessed using the classification accuracy and the area under the receiver operating characteristic curve. Statistically significant performance differences are identified using the appropriate test statistics. It is found that both the LS-SVM and neural network classifiers yield very good performance, but simple classifiers such as logistic regression and linear discriminant analysis also perform very well for credit scoring.
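
Of the two criteria, the area under the ROC curve is the less obvious one to compute; it equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case. A small self-contained sketch (illustrative names; the pairwise form is fine for modest data set sizes):

import numpy as np

def auc(scores, labels):
    # Mann-Whitney form of the area under the ROC curve:
    # P(score of a random positive > score of a random negative), ties counted half
    # assumes labels coded as 0/1
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))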


Neural Networks | 2001

Optimal control by least squares support vector machines

Johan A. K. Suykens; Joos Vandewalle; B. De Moor

Support vector machines have been very successful in pattern recognition and function estimation problems. In this paper we introduce the use of least squares support vector machines (LS-SVMs) for the optimal control of nonlinear systems. Linear and neural full static state feedback controllers are considered. The problem is formulated in such a way that it incorporates the N-stage optimal control problem as well as a least squares support vector machine approach for mapping the state space into the action space. The solution is characterized by a set of nonlinear equations. An alternative formulation as a constrained nonlinear optimization problem in fewer unknowns is given, together with a method for imposing local stability in the LS-SVM control scheme. The results are discussed for support vector machines with a radial basis function kernel. Advantages of LS-SVM control are that no number of hidden units has to be determined for the controller and that no centers have to be specified for the Gaussian kernels when applying Mercer's condition. The curse of dimensionality is avoided in comparison with defining a regular grid for the centers in classical radial basis function networks. This is at the expense of taking the trajectory of state variables as additional unknowns in the optimization problem, while classical neural network approaches typically lead to parametric optimization problems. In the SVM methodology the number of unknowns equals the number of training data, while in the primal space the number of unknowns can be infinite dimensional. The method is illustrated both on stabilization and tracking problems including examples on swinging up an inverted pendulum with local stabilization at the endpoint and a tracking problem for a ball and beam system.
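
A loose illustration of that alternative formulation, with a toy one-dimensional plant, a soft penalty standing in for the paper's hard constraints, and made-up constants throughout (so a sketch of the idea, not the paper's method): both the support values and the trajectory states enter the optimization as unknowns, and the trajectory states double as the kernel centers.

import numpy as np
from scipy.optimize import minimize

N, x0 = 20, 2.0  # horizon length and initial state (toy values)

def plant(x, u):
    # stand-in nonlinear dynamics
    return x + 0.1 * (np.sin(x) + u)

def control(x, centers, alpha, sigma=1.0):
    # RBF-kernel state feedback; trajectory states serve as centers,
    # so no regular grid of centers is needed
    return np.sum(alpha * np.exp(-(x - centers) ** 2 / (2.0 * sigma ** 2)))

def objective(z):
    alpha, centers = z[:N], z[N:]  # support values and trajectory states
    J, pen, x = 0.0, 0.0, x0
    for k in range(N):
        pen += (centers[k] - x) ** 2   # soft consistency with the simulated trajectory
        u = control(x, centers, alpha)
        J += x ** 2 + 0.1 * u ** 2     # N-stage quadratic cost
        x = plant(x, u)
    return J + 100.0 * pen

z0 = np.concatenate([np.zeros(N), np.full(N, x0)])
res = minimize(objective, z0, method="BFGS")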


IEEE Transactions on Neural Networks | 2001

Financial time series prediction using least squares support vector machines within the evidence framework

T. Van Gestel; Johan A. K. Suykens; Dirk-Emma Baestaens; A. Lambrechts; Gert R. G. Lanckriet; B. Vandaele; B. De Moor; Joos Vandewalle

The Bayesian evidence framework is applied in this paper to least squares support vector machine (LS-SVM) regression in order to infer nonlinear models for predicting a financial time series and the related volatility. On the first level of inference, a statistical framework is related to the LS-SVM formulation which allows one to include the time-varying volatility of the market by an appropriate choice of several hyper-parameters. The hyper-parameters of the model are inferred on the second level of inference. The inferred hyper-parameters, related to the volatility, are used to construct a volatility model within the evidence framework. Model comparison is performed on the third level of inference in order to automatically tune the parameters of the kernel function and to select the relevant inputs. The LS-SVM formulation allows one to derive analytic expressions in the feature space and practical expressions are obtained in the dual space replacing the inner product by the related kernel function using Mercer's theorem. The one-step-ahead prediction performances obtained on the prediction of the weekly 90-day T-bill rate and the daily DAX30 closing prices show that significant out-of-sample sign predictions can be made with respect to the Pesaran-Timmermann test statistic.
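
The Pesaran-Timmermann statistic mentioned at the end tests whether the fraction of correctly predicted signs exceeds what independence between predictions and outcomes would produce. A sketch of the commonly quoted form (the variance expressions here should be checked against the original 1992 reference before serious use):

import numpy as np

def pesaran_timmermann(actual, predicted):
    # H0: predicted signs are independent of actual signs; statistic ~ N(0, 1)
    n = len(actual)
    ya = (actual > 0).astype(float)
    yp = (predicted > 0).astype(float)
    p = np.mean(ya == yp)                      # observed hit rate
    py, pz = ya.mean(), yp.mean()
    pstar = py * pz + (1 - py) * (1 - pz)      # expected hit rate under independence
    v_p = pstar * (1 - pstar) / n
    v_pstar = ((2 * py - 1) ** 2 * pz * (1 - pz)
               + (2 * pz - 1) ** 2 * py * (1 - py)) / n \
              + 4 * py * pz * (1 - py) * (1 - pz) / n ** 2
    return (p - pstar) / np.sqrt(v_p - v_pstar)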


International Symposium on Circuits and Systems | 2000

Sparse approximation using least squares support vector machines

Johan A. K. Suykens; Lukas Lukas; Joos Vandewalle

In least squares support vector machines (LS-SVMs) for function estimation, Vapnik's ε-insensitive loss function has been replaced by a cost function which corresponds to a form of ridge regression. In this way nonlinear function estimation is done by solving a linear set of equations instead of solving a quadratic programming problem. The LS-SVM formulation also involves fewer tuning parameters. However, a drawback is that sparseness is lost in the LS-SVM case. In this paper we investigate imposing sparseness by pruning support values from the sorted support value spectrum which results from the solution to the linear system.
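
The pruning loop itself is short. A generic sketch (train stands for any routine returning support values alpha and a bias b, such as the LS-SVM solver sketched earlier; in practice one would stop when performance on a validation set starts to degrade):

import numpy as np

def prune_support_values(X, y, train, frac=0.05, rounds=10):
    # repeatedly retrain and drop the smallest |alpha_i| from the
    # sorted support value spectrum
    idx = np.arange(len(y))
    for _ in range(rounds):
        alpha, b = train(X[idx], y[idx])
        keep = np.argsort(np.abs(alpha))[int(frac * len(idx)):]
        idx = idx[np.sort(keep)]
    alpha, b = train(X[idx], y[idx])
    return idx, alpha, b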


IEEE Transactions on Circuits and Systems I: Regular Papers | 2000

Recurrent least squares support vector machines

Johan A. K. Suykens; Joos Vandewalle

The method of support vector machines (SVMs) has been developed for solving classification and static function approximation problems. In this paper we introduce SVMs within the context of recurrent neural networks. Instead of Vapnik's ε-insensitive loss function, we consider a least squares version related to a cost function with equality constraints for a recurrent network. Essential features of SVMs remain, such as Mercer's condition and the fact that the output weights are a Lagrange multiplier weighted sum of the data points. The solution to recurrent least squares support vector machines (LS-SVMs) is characterized by a set of nonlinear equations. Due to its high computational complexity, we focus on a limited case of assigning the squared error an infinitely large penalty factor with early stopping as a form of regularization. The effectiveness of the approach is demonstrated on trajectory learning of the double scroll attractor in Chua's circuit.


Neural Computation | 2002

Bayesian framework for least-squares support vector machine classifiers, Gaussian processes, and kernel Fisher discriminant analysis

T. Van Gestel; Johan A. K. Suykens; Gert R. G. Lanckriet; A. Lambrechts; B. De Moor; Joos Vandewalle

The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks like the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for classification, as introduced by Vapnik, a nonlinear decision boundary is obtained by mapping the input vector first in a nonlinear way to a high-dimensional kernel-induced feature space in which a linear large margin classifier is constructed. Practical expressions are formulated in the dual space in terms of the related kernel function, and the solution follows from a (convex) quadratic programming (QP) problem. In least-squares SVMs (LS-SVMs), the SVM problem formulation is modified by introducing a least-squares cost function and equality instead of inequality constraints, and the solution follows from a linear system in the dual space. Implicitly, the least-squares formulation corresponds to a regression formulation and is also related to kernel Fisher discriminant analysis. The least-squares regression formulation has advantages for deriving analytic expressions in a Bayesian evidence framework, in contrast to the classification formulations used, for example, in Gaussian processes (GPs). The LS-SVM formulation has clear primal-dual interpretations, and without the bias term, one explicitly constructs a model that yields the same expressions as have been obtained with GPs for regression. In this article, the Bayesian evidence framework is combined with the LS-SVM classifier formulation. Starting from the feature space formulation, analytic expressions are obtained in the dual space on the different levels of Bayesian inference, while posterior class probabilities are obtained by marginalizing over the model parameters. Empirical results obtained on 10 public domain data sets show that the LS-SVM classifier designed within the Bayesian evidence framework consistently yields good generalization performances.


IEEE Transactions on Circuits and Systems I: Regular Papers | 2004

True random bit generation from a double-scroll attractor

Mustak E. Yalcin; Johan A. K. Suykens; Joos Vandewalle

In this paper, a novel true random bit generator (TRBG) based on a double-scroll attractor is proposed. The double-scroll attractor is obtained from a simple model which is qualitatively similar to Chua's circuit. To assess its suitability for cryptography, the proposed TRBG is subjected to well-known statistical tests: the Federal Information Processing Standard 140-1 tests and the Diehard test suite. The proposed TRBG successfully passes all these tests and can be implemented in integrated circuits.
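
Purely as a software illustration (the proposed generator is a circuit, and the model constants and sampling choices below are guesses rather than the paper's), one can integrate a jerk-type double-scroll model that is qualitatively Chua-like and threshold one state variable. Real use would additionally require de-skewing and the statistical tests named above.

import numpy as np

def step(s, a=0.8, dt=0.01):
    # one Euler step of a jerk-type double scroll:
    # x''' = -a (x'' + x' + x - sgn(x))
    x, y, z = s
    return np.array([x + dt * y,
                     y + dt * z,
                     z + dt * (-a * (z + y + x - np.sign(x)))])

def random_bits(n, skip=200):
    s = np.array([0.1, 0.0, 0.0])
    out = []
    for _ in range(n):
        for _ in range(skip):  # burn trajectory between samples to decorrelate bits
            s = step(s)
        out.append(1 if s[0] > 0 else 0)
    return out

print("".join(map(str, random_bits(64))))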

Collaboration


Dive into Johan A. K. Suykens's collaborations.

Top Co-Authors

Joos Vandewalle | Katholieke Universiteit Leuven
Bart De Moor | Katholieke Universiteit Leuven
Kristiaan Pelckmans | Katholieke Universiteit Leuven
Rocco Langone | Katholieke Universiteit Leuven
Siamak Mehrkanoon | Katholieke Universiteit Leuven
Carlos Alzate | Katholieke Universiteit Leuven
Sabine Van Huffel | Katholieke Universiteit Leuven
Xiaolin Huang | Shanghai Jiao Tong University
Raghvendra Mall | Katholieke Universiteit Leuven