Nico Vervliet
Katholieke Universiteit Leuven
Publication
Featured research published by Nico Vervliet.
IEEE Signal Processing Magazine | 2014
Nico Vervliet; Otto Debals; Laurent Sorber; Lieven De Lathauwer
Higher-order tensors and their decompositions are abundantly present in domains such as signal processing (e.g., higher-order statistics [1] and sensor array processing [2]), scientific computing (e.g., discretized multivariate functions [3]–[6]), and quantum information theory (e.g., representation of quantum many-body states [7]). In many applications, the possibly huge tensors can be approximated well by compact multilinear models or decompositions. Tensor decompositions are more versatile tools than the linear models resulting from traditional matrix approaches. Compared to matrices, tensors have at least one extra dimension. The number of elements in a tensor increases exponentially with the number of dimensions, and so do the computational and memory requirements. The exponential dependency (and the problems that are caused by it) is called the curse of dimensionality. The curse limits the order of the tensors that can be handled. Even for a modest order, tensor problems are often large scale. Large tensors can be handled, and the curse can be alleviated or even removed by using a decomposition that represents the tensor instead of using the tensor itself. However, most decomposition algorithms require full tensors, which renders these algorithms infeasible for many data sets. If a tensor can be represented by a decomposition, this hypothesized structure can be exploited by using compressed sensing (CS) methods working on incomplete tensors, i.e., tensors with only a few known elements.
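The storage argument can be made concrete with a small numpy sketch (the tensor size, order, and rank below are arbitrary assumptions chosen for illustration, not values from the paper): a full order-N tensor needs I^N entries, while a rank-R polyadic representation needs only N*I*R, and individual entries can be evaluated directly from the factors.

    # Minimal sketch of the curse of dimensionality vs. a polyadic
    # representation, assuming a cubic order-N tensor and rank R.
    import numpy as np

    I, N, R = 100, 5, 10              # hypothetical size, order, rank
    full_storage = I ** N             # grows exponentially with N
    cpd_storage = N * I * R           # grows only linearly with N
    print(full_storage, cpd_storage)  # 10**10 vs 5000 entries

    # One entry can be evaluated from the factor matrices without ever
    # forming the full tensor: x[i1,...,iN] = sum_r prod_n U[n][i_n, r].
    U = [np.random.rand(I, R) for _ in range(N)]
    idx = (3, 1, 4, 1, 5)
    entry = np.sum(np.prod([U[n][idx[n]] for n in range(N)], axis=0))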
IEEE Journal of Selected Topics in Signal Processing | 2016
Nico Vervliet; Lieven De Lathauwer
For the analysis of large-scale datasets, one often assumes simple structures. In the case of tensors, a decomposition in a sum of rank-1 terms provides a compact and informative model. Finding this decomposition is intrinsically more difficult than its matrix counterpart. Moreover, for large-scale tensors, computational difficulties arise due to the curse of dimensionality. The randomized block sampling canonical polyadic decomposition method presented here combines increasingly popular ideas from randomization and stochastic optimization to tackle the computational problems. Instead of decomposing the full tensor at once, updates are computed from small random block samples. Using a step size restriction, the decomposition can be found up to near-optimal accuracy, while significantly reducing the computation time and the number of data accesses. The scalability is illustrated by the decomposition of a synthetic 8 TB tensor and a real-life 12.5 GB tensor in a few minutes on a standard laptop.
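The core idea of a block-sampled update can be sketched in a few lines of numpy for a third-order tensor: sample a small random subtensor and update only the corresponding factor rows. This is a deliberately simplified sketch with assumed sizes and a plain least-squares update; the paper's actual algorithm adds the step size restriction and cycles over blocks and modes.

    # One randomized block-sampled ALS-style update (illustrative only).
    import numpy as np

    I, J, K, R, b = 200, 200, 200, 5, 20   # hypothetical dimensions
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    T = np.einsum('ir,jr,kr->ijk', A, B, C)  # stand-in for stored data

    # Sample a random b x b x b block instead of touching the full tensor.
    i, j, k = (rng.choice(n, b, replace=False) for n in (I, J, K))
    block = T[np.ix_(i, j, k)]

    # Update only the sampled rows of A: the mode-1 unfolding of the
    # block satisfies X(1) ~ A[i] @ khatri_rao(B[j], C[k]).T.
    KR = (B[j][:, None, :] * C[k][None, :, :]).reshape(b * b, R)
    A[i] = np.linalg.lstsq(KR, block.reshape(b, -1).T, rcond=None)[0].T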
Asilomar Conference on Signals, Systems and Computers | 2016
Nico Vervliet; Otto Debals; Lieven De Lathauwer
We give an overview of recent developments in numerical optimization-based computation of tensor decompositions that have led to the release of Tensorlab 3.0 in March 2016 (www.tensorlab.net). By careful exploitation of tensor product structure in methods such as quasi-Newton and nonlinear least squares, good convergence is combined with fast computation. A modular approach extends the computation to coupled factorizations and structured factors. Given large datasets, different compact representations (polyadic, Tucker,…) may be obtained by stochastic optimization, randomization, compressed sensing, etc. Exploiting the representation structure allows us to scale the algorithms for constrained/coupled factorizations to large problem sizes.
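One example of the tensor product structure mentioned above: in Gauss-Newton/NLS-type CPD algorithms, the diagonal blocks of the Jacobian's Gramian reduce to Hadamard products of small R x R factor Gramians. The numpy check below illustrates this identity; it is a hedged sketch with assumed sizes, not Tensorlab's implementation.

    # The Gramian of khatri_rao(U2, U3) equals the Hadamard product of
    # the factor Gramians, so mode-1 normal equations need only R x R
    # work instead of (J*K) x (J*K). (Sketch; sizes are assumptions.)
    import numpy as np

    J, K, R = 40, 50, 4
    rng = np.random.default_rng(1)
    U2, U3 = (rng.standard_normal((n, R)) for n in (J, K))

    W = (U2.T @ U2) * (U3.T @ U3)   # cheap R x R Hadamard product

    # Verification: form the Khatri-Rao product explicitly once and
    # compare; in an actual solver this large matrix is never built.
    KR = (U2[:, None, :] * U3[None, :, :]).reshape(J * K, R)
    assert np.allclose(KR.T @ KR, W)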
European Signal Processing Conference | 2017
Michiel Vandecappelle; Nico Vervliet; Lieven De Lathauwer
Current batch tensor methods often struggle to keep up with fast-arriving data. Even storing the full tensors that have to be decomposed can be problematic. To alleviate these limitations, tensor updating methods modify a tensor decomposition using efficient updates instead of recomputing the entire decomposition when new data becomes available. In this paper, the structure of the decomposition is exploited to achieve fast updates for the canonical polyadic decomposition whenever new slices are added to the tensor in a certain mode. A batch NLS algorithm is adapted so that it can be used in an updating context. By storing only the old decomposition and the new slice of the tensor, the algorithm is both time- and memory-efficient. Experimental results show that the proposed method is faster than batch ALS and NLS methods, while maintaining a good accuracy for the decomposition.
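As a simplified illustration of the updating idea (this is only the cheap linear step; the paper adapts a full NLS algorithm and also refines the other factors): when a new mode-3 slice arrives, a first estimate of its coefficient row follows from one linear least-squares solve against the old factors. Sizes and rank below are assumptions.

    # Hedged sketch: appending one mode-3 slice to an existing CPD.
    import numpy as np

    I, J, R = 60, 50, 6
    rng = np.random.default_rng(2)
    A, B = rng.standard_normal((I, R)), rng.standard_normal((J, R))
    c_true = rng.standard_normal(R)
    X_new = np.einsum('ir,jr,r->ij', A, B, c_true)   # incoming slice

    # vec(X_new) = khatri_rao(A, B) @ c, so the new row of C is a linear
    # least-squares solution; only A, B and the new slice are needed.
    KR = (A[:, None, :] * B[None, :, :]).reshape(I * J, R)
    c_new = np.linalg.lstsq(KR, X_new.reshape(-1), rcond=None)[0]
    assert np.allclose(c_new, c_true)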
International Conference on Acoustics, Speech, and Signal Processing | 2016
Xiao-Feng Gong; Qiu-Hua Lin; Otto Debals; Nico Vervliet; Lieven De Lathauwer
Coupled decompositions of multiple tensors are fundamental tools for multi-set data fusion. In this paper, we introduce a coupled version of the rank-(Lm, Ln, ·) block term decomposition (BTD), applicable to joint independent subspace analysis. We propose two algorithms for its computation based on a coupled block simultaneous generalized Schur decomposition scheme. Numerical results are given to show the performance of the proposed algorithms.
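To make the model concrete, the numpy sketch below builds two third-order tensors from R block terms that share their mode-1 subspaces. The shared-factor choice, sizes, and ranks are illustrative assumptions; see the paper for the exact coupled rank-(Lm, Ln, ·) structure and the Schur-based computation.

    # Hedged sketch of a coupled block term decomposition model.
    import numpy as np

    I, J, K, R, Lm, Ln = 20, 25, 30, 3, 4, 5
    rng = np.random.default_rng(3)

    def block_term(A_r, B_r, S_r):
        # One rank-(Lm, Ln, .) term: core S_r (Lm x Ln x K) multiplied
        # by A_r (I x Lm) in mode 1 and B_r (J x Ln) in mode 2.
        return np.einsum('il,jm,lmk->ijk', A_r, B_r, S_r)

    A = [rng.standard_normal((I, Lm)) for _ in range(R)]   # shared
    B1 = [rng.standard_normal((J, Ln)) for _ in range(R)]
    B2 = [rng.standard_normal((J, Ln)) for _ in range(R)]
    S1 = [rng.standard_normal((Lm, Ln, K)) for _ in range(R)]
    S2 = [rng.standard_normal((Lm, Ln, K)) for _ in range(R)]

    # The two tensors are coupled through the shared subspaces A_r.
    T1 = sum(block_term(A[r], B1[r], S1[r]) for r in range(R))
    T2 = sum(block_term(A[r], B2[r], S2[r]) for r in range(R))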
Archive | 2014
Nico Vervliet; Otto Debals; Laurent Sorber; Lieven De Lathauwer
Numerical Linear Algebra With Applications | 2018
Martijn Boussé; Nico Vervliet; Ignat Domanov; Otto Debals; Lieven De Lathauwer
International Conference on Acoustics, Speech, and Signal Processing | 2018
Michiel Vandecappelle; Martijn Boussé; Nico Vervliet; Lieven De Lathauwer
Sport Psychologist | 2018
Frederik Van Eeghem; Otto Debals; Nico Vervliet; Lieven De Lathauwer
International Conference of the IEEE Engineering in Medicine and Biology Society | 2017
Martijn Boussé; Griet Goovaerts; Nico Vervliet; Otto Debals; Sabine Van Huffel; Lieven De Lathauwer