Laurent Sorber
Katholieke Universiteit Leuven
Publication
Featured research published by Laurent Sorber.
SIAM Journal on Optimization | 2013
Laurent Sorber; Marc Van Barel; Lieven De Lathauwer
The canonical polyadic and rank-(L_r,L_r,1) block term decomposition (CPD and BTD, respectively) are two closely related tensor decompositions. The CPD and, recently, BTD are important tools in psychometrics, chemometrics, neuroscience, and signal processing. We present a decomposition that generalizes these two and develop algorithms for its computation. Among these algorithms are alternating least squares schemes, several general unconstrained optimization techniques, and matrix-free nonlinear least squares methods. In the latter we exploit the structure of the Jacobian's Gramian to reduce computational and memory cost. Combined with an effective preconditioner, numerical experiments confirm that these methods are among the most efficient and robust currently available for computing the CPD, rank-(L_r,L_r,1) BTD, and their generalized decomposition.
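As a rough illustration of the alternating least squares schemes mentioned in this abstract, and nothing more than that (the helper names cpd_als and khatri_rao are ours, and the sketch omits the Gramian-exploiting nonlinear least squares machinery and preconditioning the paper is actually about), a rank-R CPD of a third-order tensor can be computed with plain NumPy along these lines:

import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker (Khatri-Rao) product of two factor matrices.
    I, R = U.shape
    J, _ = V.shape
    return np.einsum('ir,jr->ijr', U, V).reshape(I * J, R)

def cpd_als(T, R, iters=200, seed=0):
    # Rank-R canonical polyadic decomposition of a third-order tensor by ALS.
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    # Mode-n unfoldings, consistent with NumPy's C-ordered memory layout.
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(iters):
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Sanity check on a synthetic exact-rank-3 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (5, 6, 7))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cpd_als(T, R=3)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T))

Each update solves an exact linear least squares problem for one factor while the other two are held fixed, which is what makes ALS the natural baseline among the algorithm families listed in the abstract.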
IEEE Journal of Selected Topics in Signal Processing | 2015
Laurent Sorber; Marc Van Barel; Lieven De Lathauwer
We present structured data fusion (SDF) as a framework for the rapid prototyping of knowledge discovery in one or more possibly incomplete data sets. In SDF, each data set, stored as a dense, sparse, or incomplete tensor, is factorized with a matrix or tensor decomposition. Factorizations can be coupled, or fused, with each other by indicating which factors should be shared between data sets. At the same time, factors may be imposed to have any type of structure that can be constructed as an explicit function of some underlying variables. With the right choice of decomposition type and factor structure, even well-known matrix factorizations such as the eigenvalue decomposition, singular value decomposition, and QR factorization can be computed with SDF. A domain specific language (DSL) for SDF is implemented as part of the software package Tensorlab, with which we offer a library of tensor decompositions and factor structures to choose from. The versatility of the SDF framework is demonstrated by means of four diverse applications, which are all solved entirely within Tensorlab's DSL.
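Tensorlab and its SDF language are MATLAB software, so the following is only a language-neutral sketch of the coupling idea (the function name coupled_mf and the alternating least squares scheme are ours, not the package's API): two matrices that share their row entities are factorized jointly by sharing the factor A.

import numpy as np

def coupled_mf(X1, X2, R, iters=100, seed=0):
    # Jointly factorize X1 ~ A @ B.T and X2 ~ A @ C.T with a shared factor A.
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((X1.shape[1], R))
    C = rng.standard_normal((X2.shape[1], R))
    for _ in range(iters):
        # Shared factor: one least squares fit to both data sets at once.
        M = np.vstack([B, C])
        A = np.hstack([X1, X2]) @ M @ np.linalg.pinv(M.T @ M)
        # Data-set-specific factors.
        B = X1.T @ A @ np.linalg.pinv(A.T @ A)
        C = X2.T @ A @ np.linalg.pinv(A.T @ A)
    return A, B, C

In SDF proper the data sets may be dense, sparse, or incomplete higher-order tensors rather than matrices, and the factors may carry structure expressed as a function of underlying variables; this sketch only shows the shared-factor part of the idea.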
SIAM Journal on Optimization | 2012
Laurent Sorber; Marc Van Barel; Lieven De Lathauwer
Nonlinear optimization problems in complex variables are frequently encountered in applied mathematics and engineering applications such as control theory, signal processing, and electrical engineering. Optimization of these problems often requires a first- or second-order approximation of the objective function to generate a new step or descent direction. However, such methods cannot be applied to real functions of complex variables because they are necessarily nonanalytic in their argument, i.e., the Taylor series expansion in their argument alone does not exist. To overcome this problem, the objective function is usually redefined as a function of the real and imaginary parts of its complex argument so that standard optimization methods can be applied. However, this approach may needlessly disguise any inherent structure present in the derivatives of such complex problems. Although little known, it is possible to construct an expansion of the objective function in its original complex variables by noti...
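The expansion alluded to at the point where this abstract is cut off is usually phrased in terms of Wirtinger (CR) calculus; as a brief reminder rather than a quote from the paper, with z = x + iy one defines

    ∂f/∂z  = (1/2)(∂f/∂x − i ∂f/∂y),        ∂f/∂z* = (1/2)(∂f/∂x + i ∂f/∂y),

and for a real-valued objective the steepest ascent direction is proportional to ∂f/∂z*, so first- and second-order steps can be written directly in the pair (z, z*) instead of in (x, y).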
IEEE Signal Processing Magazine | 2014
Nico Vervliet; Otto Debals; Laurent Sorber; Lieven De Lathauwer
Higher-order tensors and their decompositions are abundantly present in domains such as signal processing (e.g., higher-order statistics [1] and sensor array processing [2]), scientific computing (e.g., discretized multivariate functions [3]–[6]), and quantum information theory (e.g., representation of quantum many-body states [7]). In many applications, the possibly huge tensors can be approximated well by compact multilinear models or decompositions. Tensor decompositions are more versatile tools than the linear models resulting from traditional matrix approaches. Compared to matrices, tensors have at least one extra dimension. The number of elements in a tensor increases exponentially with the number of dimensions, and so do the computational and memory requirements. The exponential dependency (and the problems that are caused by it) is called the curse of dimensionality. The curse limits the order of the tensors that can be handled. Even for a modest order, tensor problems are often large scale. Large tensors can be handled, and the curse can be alleviated or even removed, by using a decomposition that represents the tensor instead of using the tensor itself. However, most decomposition algorithms require full tensors, which renders these algorithms infeasible for many data sets. If a tensor can be represented by a decomposition, this hypothesized structure can be exploited by using compressed sensing (CS) methods working on incomplete tensors, i.e., tensors with only a few known elements.
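To make the exponential growth concrete (the numbers are ours, purely for illustration): an order-N tensor with I entries per mode holds I^N values, so

    I = 10, N = 15:  10^15 entries, roughly 8 PB in double precision,

while a rank-R CPD of the same tensor is described by only N·I·R coefficients, e.g. 15 · 10 · 10 = 1500 numbers (about 12 kB) for R = 10. This is the gap that decomposition-based methods working on incomplete tensors try to exploit.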
EURASIP Journal on Advances in Signal Processing | 2014
Borbála Hunyadi; Daan Camps; Laurent Sorber; Wim Van Paesschen; Maarten De Vos; Sabine Van Huffel; Lieven De Lathauwer
Recordings of neural activity, such as EEG, are an inherent mixture of different ongoing brain processes as well as artefacts and are typically characterised by low signal-to-noise ratio. Moreover, EEG datasets are often inherently multidimensional, comprising information in time, along different channels, subjects, trials, etc. Additional information may be conveyed by expanding the signal into even more dimensions, e.g. incorporating spectral features by applying a wavelet transform. The underlying sources might show differences in each of these modes. Therefore, tensor-based blind source separation techniques, which can extract the sources of interest from such multiway arrays while simultaneously exploiting the signal characteristics in all dimensions, have gained increasing interest. Canonical polyadic decomposition (CPD) has been successfully used to extract epileptic seizure activity from wavelet-transformed EEG data (Bioinformatics 23(13):i10–i18, 2007; NeuroImage 37:844–854, 2007), where each source is described by a rank-1 tensor, i.e. by the combination of one particular temporal, spectral and spatial signature. However, in certain scenarios, where the seizure pattern is nonstationary, such a trilinear signal model is insufficient. Here, we present the application of a recently introduced technique, called block term decomposition (BTD), to separate EEG tensors into rank-(L_r,L_r,1) terms, allowing more variability in the data to be modelled than would be possible with CPD. In a simulation study, we investigate the robustness of BTD against noise and different choices of model parameters. Furthermore, we show various real EEG recordings where BTD outperforms CPD in capturing complex seizure characteristics.
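For a third-order EEG tensor with modes time × frequency × channel, the two models contrast as follows (standard definitions, not text from the paper):

    CPD:                   T ≈ sum_{r=1..R} a_r ∘ b_r ∘ c_r,
    rank-(L_r,L_r,1) BTD:  T ≈ sum_{r=1..R} (A_r B_r^T) ∘ c_r,

where A_r and B_r have L_r columns, so each source gets a rank-L_r time-frequency signature instead of the single rank-1 signature that CPD allows.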
SIAM Journal on Numerical Analysis | 2014
Laurent Sorber; Marc Van Barel; Lieven De Lathauwer
Finding the real solutions of a bivariate polynomial system is a central problem in robotics, computer modeling and graphics, computational geometry, and numerical optimization. We propose an efficient and numerically robust algorithm for solving bivariate and polyanalytic polynomial systems using a single generalized eigenvalue decomposition. In contrast to existing eigen-based solvers, the proposed algorithm does not depend on Gröbner bases or normal sets, nor does it require computing eigenvectors or solving additional eigenproblems to recover the solution. The method transforms bivariate systems into polyanalytic systems and then uses resultants in a novel way to project the variables onto the real plane associated with the two variables. Solutions are returned counting multiplicity and their accuracy is maximized by means of numerical balancing and Newton-Raphson refinement. Numerical experiments show that the proposed algorithm consistently recovers a higher percentage of solutions and is at the sa...
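The paper's single generalized eigenvalue formulation for full bivariate and polyanalytic systems does not fit in a short snippet, but the underlying roots-as-eigenvalues idea is easy to show in the univariate special case; the sketch below is the standard companion-matrix construction, not the paper's algorithm, and the helper name is ours.

import numpy as np

def roots_via_companion(coeffs):
    # Roots of p(x) = c0 + c1*x + ... + cn*x^n as eigenvalues of its companion matrix.
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[-1]                      # make the polynomial monic
    n = len(c) - 1
    C = np.zeros((n, n), dtype=complex)
    C[1:, :-1] = np.eye(n - 1)         # subdiagonal of ones
    C[:, -1] = -c[:-1]                 # last column holds -c0 .. -c_{n-1}
    return np.linalg.eigvals(C)

# p(x) = (x - 1)(x - 2)(x + 3) = x^3 - 7x + 6, coefficients in ascending order.
print(np.sort_complex(roots_via_companion([6, -7, 0, 1])))

Eigen-based bivariate solvers build on the same principle, with the companion matrix replaced by (generalized) eigenvalue problems derived from resultants.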
Computational Optimization and Applications | 2016
Laurent Sorber; Ignat Domanov; Marc Van Barel; Lieven De Lathauwer
Electronic Transactions on Numerical Analysis | 2013
Marc Van Barel; Matthias Humet; Laurent Sorber
Calcolo | 2013
Thanh Hieu Le; Laurent Sorber; Marc Van Barel
Archive | 2014
Nico Vervliet; Otto Debals; Laurent Sorber; Lieven De Lathauwer