
Publications


Featured research published by B. De Moor.


IEEE Transactions on Biomedical Engineering | 2000

Fetal electrocardiogram extraction by blind source subspace separation

L. De Lathauwer; B. De Moor; Joos Vandewalle

We propose the emerging technique of independent component analysis, also known as blind source separation, as an interesting tool for the extraction of the antepartum fetal electrocardiogram from multilead cutaneous potential recordings. The technique is illustrated by means of a real-life example.
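As a rough illustration of the blind source separation idea, the sketch below separates a toy two-channel instantaneous mixture with a FastICA-style fixed-point iteration. The sources, mixing matrix, and parameters are invented for illustration; they are not the paper's cutaneous recordings, and the paper's own method is subspace-based rather than this generic ICA sketch.

```python
import numpy as np

# Toy stand-ins for maternal/fetal sources (illustrative assumptions)
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * 1.0 * t)            # slow "maternal" component
s2 = np.sign(np.sin(2 * np.pi * 3.1 * t))   # faster "fetal" component
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # unknown mixing (electrodes)
X = A @ S                                    # observed multilead signals

# Center and whiten the observations
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA-style fixed point (tanh contrast, symmetric decorrelation)
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = (G @ Z.T) / Z.shape[1] - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
    U_, _, Vt_ = np.linalg.svd(W)
    W = U_ @ Vt_                             # re-orthogonalize

Y = W @ Z  # estimated sources, up to permutation, sign, and scale
```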


Neural Networks | 2001

Optimal control by least squares support vector machines

Johan A. K. Suykens; Joos Vandewalle; B. De Moor

Support vector machines have been very successful in pattern recognition and function estimation problems. In this paper we introduce the use of least squares support vector machines (LS-SVMs) for the optimal control of nonlinear systems. Linear and neural full static state feedback controllers are considered. The problem is formulated in such a way that it incorporates the N-stage optimal control problem as well as a least squares support vector machine approach for mapping the state space into the action space. The solution is characterized by a set of nonlinear equations. An alternative formulation as a constrained nonlinear optimization problem in fewer unknowns is given, together with a method for imposing local stability in the LS-SVM control scheme. The results are discussed for support vector machines with radial basis function kernel. Advantages of LS-SVM control are that no number of hidden units has to be determined for the controller and that no centers have to be specified for the Gaussian kernels when applying Mercer's condition. The curse of dimensionality is avoided in comparison with defining a regular grid for the centers in classical radial basis function networks. This is at the expense of taking the trajectory of state variables as additional unknowns in the optimization problem, while classical neural network approaches typically lead to parametric optimization problems. In the SVM methodology the number of unknowns equals the number of training data, while in the primal space the number of unknowns can be infinite dimensional. The method is illustrated both on stabilization and tracking problems including examples on swinging up an inverted pendulum with local stabilization at the endpoint and a tracking problem for a ball and beam system.
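The LS-SVM building block used throughout these papers reduces training to one linear system in the dual variables instead of a QP. A minimal sketch of LS-SVM regression with an RBF kernel follows; the toy data and hyper-parameters are invented for illustration, not the paper's control setting.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    # Gaussian (RBF) kernel matrix between two sample sets
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    # Dual LS-SVM regression: solve the linear KKT system
    # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(Xnew, Xtr, b, alpha, sigma):
    return rbf_kernel(Xnew, Xtr, sigma) @ alpha + b

# Toy regression problem (illustrative assumption)
Xtr = np.linspace(0, 2 * np.pi, 30)[:, None]
ytr = np.sin(Xtr).ravel()
b, alpha = lssvm_fit(Xtr, ytr, gamma=100.0, sigma=0.5)
yhat = lssvm_predict(Xtr, Xtr, b, alpha, sigma=0.5)
```

Note that, as the abstract says, the number of dual unknowns equals the number of training points, and no hidden-unit count or kernel centers need to be chosen.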


IEEE Transactions on Neural Networks | 2001

Financial time series prediction using least squares support vector machines within the evidence framework

T. Van Gestel; Johan A. K. Suykens; Dirk-Emma Baestaens; A. Lambrechts; Gert R. G. Lanckriet; B. Vandaele; B. De Moor; Joos Vandewalle

The Bayesian evidence framework is applied in this paper to least squares support vector machine (LS-SVM) regression in order to infer nonlinear models for predicting a financial time series and the related volatility. On the first level of inference, a statistical framework is related to the LS-SVM formulation which allows one to include the time-varying volatility of the market by an appropriate choice of several hyper-parameters. The hyper-parameters of the model are inferred on the second level of inference. The inferred hyper-parameters, related to the volatility, are used to construct a volatility model within the evidence framework. Model comparison is performed on the third level of inference in order to automatically tune the parameters of the kernel function and to select the relevant inputs. The LS-SVM formulation allows one to derive analytic expressions in the feature space and practical expressions are obtained in the dual space replacing the inner product by the related kernel function using Mercer's theorem. The one-step-ahead prediction performances obtained on the prediction of the weekly 90-day T-bill rate and the daily DAX30 closing prices show that significant out-of-sample sign predictions can be made with respect to the Pesaran-Timmermann test statistic.


Journal of Medical Genetics | 2006

Emerging patterns of cryptic chromosomal imbalance in patients with idiopathic mental retardation and multiple congenital anomalies: a new series of 140 patients and review of published reports

B Menten; Nicole Maas; Bernard Thienpont; Karen Buysse; J Vandesompele; C Melotte; T. de Ravel; S. Van Vooren; Irina Balikova; Liesbeth Backx; Sophie Janssens; A. De Paepe; B. De Moor; Yves Moreau; Peter Marynen; Fryns Jp; Geert Mortier; Koenraad Devriendt; F. Speleman; J.R. Vermeesch

Background: Chromosomal abnormalities are a major cause of mental retardation and multiple congenital anomalies (MCA/MR). Screening for these chromosomal imbalances has mainly been done by standard karyotyping. Previous array CGH studies on selected patients with chromosomal phenotypes and normal karyotypes suggested an incidence of 10–15% of previously unnoticed de novo chromosomal imbalances. Objective: To report array CGH screening of a series of 140 patients (the largest published so far) with idiopathic MCA/MR but normal karyotype. Results: Submicroscopic chromosomal imbalances were detected in 28 of the 140 patients (20%) and included 18 deletions, seven duplications, and three unbalanced translocations. Seventeen of 24 imbalances were confirmed de novo and 19 were assumed to be causal. Excluding subtelomeric imbalances, our study identified 11 clinically relevant interstitial submicroscopic imbalances (8%). Taking this and previously reported studies into consideration, array CGH screening with a resolution of at least 1 Mb has been undertaken on 432 patients with MCA/MR. Most imbalances are non-recurrent and spread across the genome. In at least 8.8% of these patients (38 of 432) de novo intrachromosomal alterations have been identified. Conclusions: Array CGH should be considered an essential aspect of the genetic analysis of patients with MCA/MR. In addition, in the present study three patients were mosaic for a structural chromosome rearrangement. One of these patients had monosomy 7 in as few as 8% of the cells, showing that array CGH allows detection of low-grade mosaicisms.


Neural Computation | 2002

Bayesian framework for least-squares support vector machine classifiers, Gaussian processes, and kernel Fisher discriminant analysis

T. Van Gestel; Johan A. K. Suykens; Gert R. G. Lanckriet; A. Lambrechts; B. De Moor; Joos Vandewalle

The Bayesian evidence framework has been successfully applied to the design of multilayer perceptrons (MLPs) in the work of MacKay. Nevertheless, the training of MLPs suffers from drawbacks like the nonconvex optimization problem and the choice of the number of hidden units. In support vector machines (SVMs) for classification, as introduced by Vapnik, a nonlinear decision boundary is obtained by mapping the input vector first in a nonlinear way to a high-dimensional kernel-induced feature space in which a linear large margin classifier is constructed. Practical expressions are formulated in the dual space in terms of the related kernel function, and the solution follows from a (convex) quadratic programming (QP) problem. In least-squares SVMs (LS-SVMs), the SVM problem formulation is modified by introducing a least-squares cost function and equality instead of inequality constraints, and the solution follows from a linear system in the dual space. Implicitly, the least-squares formulation corresponds to a regression formulation and is also related to kernel Fisher discriminant analysis. The least-squares regression formulation has advantages for deriving analytic expressions in a Bayesian evidence framework, in contrast to the classification formulations used, for example, in Gaussian processes (GPs). The LS-SVM formulation has clear primal-dual interpretations, and without the bias term, one explicitly constructs a model that yields the same expressions as have been obtained with GPs for regression. In this article, the Bayesian evidence framework is combined with the LS-SVM classifier formulation. Starting from the feature space formulation, analytic expressions are obtained in the dual space on the different levels of Bayesian inference, while posterior class probabilities are obtained by marginalizing over the model parameters. Empirical results obtained on 10 public domain data sets show that the LS-SVM classifier designed within the Bayesian evidence framework consistently yields good generalization performances.


Physical Review A | 2002

Four qubits can be entangled in nine different ways

Frank Verstraete; Jeroen Dehaene; B. De Moor; Henri Verschelde

We consider a single copy of a pure four-partite state of qubits and investigate its behavior under the action of stochastic local quantum operations assisted by classical communication (SLOCC). This leads to a complete classification of all different classes of pure states of four qubits. It is shown that there exist nine families of states corresponding to nine different ways of entangling four qubits. The states in the generic family give rise to Greenberger-Horne-Zeilinger-like entanglement. The other ones contain essentially two- or three-qubit entanglement distributed among the four parties. The concept of concurrence and 3-tangle is generalized to the case of mixed states of four qubits, giving rise to a seven-parameter family of entanglement monotones. Finally, the SLOCC operations maximizing all these entanglement monotones are derived, yielding the optimal single-copy distillation protocol.


International Symposium on Circuits and Systems | 1989

The generalized linear complementarity problem applied to the complete analysis of resistive piecewise-linear circuits

Lieven Vandenberghe; B. De Moor; Joos Vandewalle

An important application of complementarity theory consists in solving sets of piecewise-linear equations and hence in the analysis of piecewise-linear resistive circuits. The authors show how a generalized version of the linear complementarity problem can be used to analyze a broad class of piecewise-linear circuits. Nonlinear resistors that are neither voltage nor current controlled can be allowed, and no restrictions on the linear part of the circuit have to be made. As a second contribution, the authors describe an algorithm for the solution of the generalized complementarity problem and show how it can be applied to yield a complete description of the DC solution set as well as of driving-point and transfer characteristics.


IEEE Transactions on Signal Processing | 1993

The singular value decomposition and long and short spaces of noisy matrices

B. De Moor

Using geometrical, algebraic, and statistical arguments, it is clarified why and when the singular value decomposition is successful in so-called subspace methods. First the concepts of long and short spaces are introduced, and a fundamental asymmetry in the consistency properties of the estimates is discussed. The model, which is associated with the short space, can be estimated consistently, but the estimates of the original data, which follow from the long space, are always inconsistent. An expression is found for the asymptotic bias in terms of canonical angles, which can be estimated from the data. This allows all equivalent reconstructions of the original signals to be described as a matrix ball, the center of which is the minimum variance estimate. Remarkably, the canonical angles also appear in the optimal weighting that is used in weighted subspace fitting approaches. The results are illustrated with a numerical simulation. A number of examples are discussed.
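The short-space consistency claim can be checked numerically: the column space of a noisy low-rank matrix is recovered well by the leading left singular vectors, and the canonical angles to the true subspace stay small. A minimal sketch, with matrix sizes and noise level as invented assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Rank-2 signal plus additive noise (sizes/noise level are illustrative)
U = np.linalg.qr(rng.standard_normal((50, 2)))[0]    # true short space
V = np.linalg.qr(rng.standard_normal((500, 2)))[0]
noisy = U @ np.diag([10.0, 8.0]) @ V.T + 0.02 * rng.standard_normal((50, 500))

Us, s, Vts = np.linalg.svd(noisy, full_matrices=False)
# Singular values of U^T * Us[:, :2] are the cosines of the canonical
# angles between the true and estimated 2-D column (short) spaces
cos_angles = np.linalg.svd(U.T @ Us[:, :2], compute_uv=False)
```

The individual entries of the reconstructed matrix, by contrast, carry a bias that does not vanish with more columns, which is the long-space asymmetry the abstract describes.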


Conference on Decision and Control | 1992

Two subspace algorithms for the identification of combined deterministic-stochastic systems

P. Van Overschee; B. De Moor

Two new subspace algorithms for identifying mixed deterministic-stochastic systems are derived. Both algorithms determine state sequences through the projection of input and output data. These state sequences are shown to be outputs of nonsteady-state Kalman filter banks. From these it is easy to determine the state space system matrices. The algorithms are always convergent (noniterative) and numerically stable since they only make use of QR and singular value decompositions. The two algorithms are similar, but the second one trades off accuracy for simplicity. An example involving a glass oven is considered.
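The paper's algorithms project measured input-output data; as a simpler relative built on the same SVD machinery, the classical Ho-Kalman realization below recovers the state-space matrix A (up to similarity) from impulse-response data. The 2-state system is an invented example, and this is an illustration of the SVD factorization step, not the paper's algorithm:

```python
import numpy as np

# Invented 2-state system (not from the paper)
A = np.array([[0.8, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Markov parameters h[k] = C A^k B
p = 5
h = [C @ np.linalg.matrix_power(A, k) @ B for k in range(2 * p)]

# Block Hankel matrices built from the impulse response
H0 = np.block([[h[i + j] for j in range(p)] for i in range(p)])
H1 = np.block([[h[i + j + 1] for j in range(p)] for i in range(p)])

# SVD factors H0 into observability x controllability; then
# A_hat = O^+ H1 Ctrb^+ recovers A up to a similarity transform
U, s, Vt = np.linalg.svd(H0)
n = 2                                # model order, read off the singular value gap
O = U[:, :n] * np.sqrt(s[:n])
Ctrb = np.sqrt(s[:n])[:, None] * Vt[:n]
A_hat = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(Ctrb)
```

Because only QR and SVD factorizations are involved, such schemes are noniterative and numerically stable, which is the point the abstract makes for the data-driven case.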


IEEE Transactions on Neural Networks | 2003

A support vector machine formulation to PCA analysis and its kernel version

Johan A. K. Suykens; T. Van Gestel; Joos Vandewalle; B. De Moor

In this paper, we present a simple and straightforward primal-dual support vector machine formulation to the problem of principal component analysis (PCA) in dual variables. By considering a mapping to a high-dimensional feature space and application of the kernel trick (Mercer's theorem), kernel PCA is obtained as introduced by Scholkopf et al. (2002). While least squares support vector machine classifiers have a natural link with the kernel Fisher discriminant analysis (minimizing the within-class scatter around targets +1 and -1), for PCA analysis one can take the interpretation of a one-class modeling problem with zero target value around which one maximizes the variance. The score variables are interpreted as error variables within the problem formulation. In this way primal-dual constrained optimization problem interpretations to the linear and kernel PCA analysis are obtained in a similar style as for least squares support vector machine classifiers.
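In the dual, the kernel PCA that this formulation recovers amounts to an eigendecomposition of the centered kernel matrix, with the score variables given by scaled eigenvector projections. A minimal sketch; the RBF kernel choice, toy data, and sigma are illustrative assumptions:

```python
import numpy as np

def kernel_pca(X, n_components, sigma):
    # RBF kernel matrix on the data
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # Center in feature space: Kc = (I - 11^T/n) K (I - 11^T/n)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]   # leading eigenpairs
    # Score variables: projections onto the kernel principal axes
    return V[:, idx] * np.sqrt(np.abs(w[idx]))

# Toy ring of points (illustrative assumption)
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])
scores = kernel_pca(X, n_components=2, sigma=1.0)
```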

Collaboration


Top Co-Authors (all at Katholieke Universiteit Leuven):

Johan A. K. Suykens
D. Timmerman
Joos Vandewalle
Anneleen Daemen
T. Van den Bosch
Tom Bourne
A. Installe
Jan C. Willems
P. Van Overschee