Cesar F. Caiafa
National Scientific and Technical Research Council
Publications
Featured research published by Cesar F. Caiafa.
IEEE Signal Processing Magazine | 2015
Andrzej Cichocki; Danilo P. Mandic; Lieven De Lathauwer; Guoxu Zhou; Qibin Zhao; Cesar F. Caiafa; Huy Anh Phan
The widespread use of multisensor technology and the emergence of big data sets have highlighted the limitations of standard flat-view matrix models and the necessity to move toward more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift toward models that are essentially polynomial, the uniqueness of which, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints which match data properties and extract more general latent components in the data than matrix-based methods.
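As a rough, self-contained illustration of the kind of low-rank tensor model the overview discusses, the following NumPy sketch fits a CP (canonical polyadic) decomposition of a 3-way array by alternating least squares. It is a generic textbook routine, not code from the paper; the factor names, rank, and iteration count are illustrative assumptions.

    import numpy as np

    def khatri_rao(A, B):
        # column-wise Kronecker (Khatri-Rao) product
        return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

    def cp_als(X, rank, n_iter=100, seed=0):
        # alternating least squares for the 3-way CP model X[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r]
        rng = np.random.default_rng(seed)
        I, J, K = X.shape
        A = rng.standard_normal((I, rank))
        B = rng.standard_normal((J, rank))
        C = rng.standard_normal((K, rank))
        X1 = X.reshape(I, -1)                          # mode-1 unfolding (k varies fastest)
        X2 = np.moveaxis(X, 1, 0).reshape(J, -1)       # mode-2 unfolding
        X3 = np.moveaxis(X, 2, 0).reshape(K, -1)       # mode-3 unfolding
        for _ in range(n_iter):
            A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)
            B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
            C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
        return A, B, C

    # toy check on an exactly rank-2 tensor (illustrative sizes)
    rng = np.random.default_rng(1)
    A0, B0, C0 = rng.standard_normal((6, 2)), rng.standard_normal((7, 2)), rng.standard_normal((8, 2))
    X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = cp_als(X, rank=2)
    Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
    print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))   # relative fitting error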
Neural Computation | 2013
Cesar F. Caiafa; Andrzej Cichocki
Recently there has been great interest in sparse representations of signals under the assumption that signals (data sets) can be well approximated by a linear combination of a few elements of a known basis (dictionary). Many algorithms have been developed to find such representations for one-dimensional signals (vectors), which requires finding the sparsest solution of an underdetermined linear system of algebraic equations. In this letter, we generalize the theory of sparse representations of vectors to multiway arrays (tensors), that is, signals with a multidimensional structure, by using the Tucker model. Thus, the problem is reduced to solving a large-scale underdetermined linear system of equations possessing a Kronecker structure, for which we have developed a greedy algorithm, Kronecker-OMP, as a generalization of the classical orthogonal matching pursuit (OMP) algorithm for vectors. We also introduce the concept of multiway block-sparse representation of N-way arrays and develop a new greedy algorithm that exploits not only the Kronecker structure but also block sparsity. This allows us to derive a very fast and memory-efficient algorithm called N-BOMP (N-way block OMP). We theoretically demonstrate that under the block-sparsity assumption, our N-BOMP algorithm not only has considerably lower complexity but is also more precise than the classical OMP algorithm. Moreover, our algorithms can be used for very large-scale problems, which are intractable using standard approaches. We provide several simulations illustrating our results and comparing our algorithms to classical ones such as OMP and BP (basis pursuit). We also apply the N-BOMP algorithm as a fast solution for the compressed sensing (CS) problem with large-scale data sets, in particular for 2D compressive imaging (CI) and 3D hyperspectral CI, and we show examples with real-world multidimensional signals.
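The following minimal NumPy sketch illustrates the core idea behind Kronecker-structured sparse recovery: a 2-D signal Y = D1 S D2^T with a sparse core S is recovered by running classical OMP on the (explicitly formed) Kronecker dictionary. It is a toy demonstration only; the paper's Kronecker-OMP and N-BOMP avoid building the Kronecker matrix and additionally exploit block sparsity. The dictionary sizes and sparsity levels below are arbitrary assumptions.

    import numpy as np

    def omp(A, y, n_nonzero):
        # classical OMP: greedily select atoms of A to approximate y
        residual = y.copy()
        support = []
        coef = np.zeros(A.shape[1])
        for _ in range(n_nonzero):
            idx = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
            support.append(idx)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        coef[support] = x_s
        return coef

    rng = np.random.default_rng(0)
    D1 = rng.standard_normal((16, 32))                     # mode-1 dictionary
    D2 = rng.standard_normal((16, 32))                     # mode-2 dictionary
    S = np.zeros((32, 32)); S[3, 5] = 1.0; S[10, 20] = -2.0   # sparse core
    Y = D1 @ S @ D2.T                                      # observed 2-D signal

    A = np.kron(D2, D1)                                    # explicit Kronecker dictionary (small demo only)
    s_hat = omp(A, Y.flatten(order='F'), n_nonzero=2)      # vec(Y) = (D2 kron D1) vec(S)
    print(np.allclose(s_hat.reshape(32, 32, order='F'), S, atol=1e-8))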
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013
Qibin Zhao; Cesar F. Caiafa; Danilo P. Mandic; Zenas C. Chao; Yasuo Nagasaka; Naotaka Fujii; Liqing Zhang; Andrzej Cichocki
A new generalized multilinear regression model, termed higher-order partial least squares (HOPLS), is introduced with the aim of predicting a tensor (multiway array) Y from a tensor X by projecting the data onto a latent space and performing regression on the corresponding latent variables. HOPLS differs substantially from other regression models in that it explains the data by a sum of orthogonal Tucker tensors, while the number of orthogonal loadings serves as a parameter to control model complexity and prevent overfitting. The low-dimensional latent space is optimized sequentially via a deflation operation, yielding the best joint subspace approximation for both X and Y. Instead of decomposing X and Y individually, a higher-order singular value decomposition of a newly defined generalized cross-covariance tensor is employed to optimize the orthogonal loadings. A systematic comparison on both synthetic data and real-world decoding of 3D movement trajectories from electrocorticogram signals demonstrates the advantages of HOPLS over existing methods in terms of better predictive ability, suitability for small sample sizes, and robustness to noise.
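HOPLS generalizes ordinary partial least squares (PLS) from matrices to tensors. As background, here is a minimal matrix PLS sketch with sequential deflation, where each latent direction comes from the dominant singular pair of the cross-covariance X^T Y; HOPLS replaces these rank-1 terms with orthogonal Tucker blocks. This is a generic sketch under common PLS conventions, not the paper's implementation, and the synthetic data at the end are an arbitrary assumption.

    import numpy as np

    def pls_fit(X, Y, n_components):
        # minimal matrix PLS with deflation; each step extracts one latent score t
        X, Y = X.copy(), Y.copy()
        W, P, Q = [], [], []
        for _ in range(n_components):
            C = X.T @ Y                                 # cross-covariance
            u, _, _ = np.linalg.svd(C, full_matrices=False)
            w = u[:, 0]                                 # dominant weight vector
            t = X @ w
            t /= np.linalg.norm(t)                      # normalized latent score
            p = X.T @ t                                 # X loading
            q = Y.T @ t                                 # Y loading
            X -= np.outer(t, p)                         # deflation of X
            Y -= np.outer(t, q)                         # deflation of Y
            W.append(w); P.append(p); Q.append(q)
        return np.array(W).T, np.array(P).T, np.array(Q).T

    def pls_predict(Xnew, W, P, Q):
        # standard PLS regression coefficients B = W (P^T W)^{-1} Q^T
        B = W @ np.linalg.solve(P.T @ W, Q.T)
        return Xnew @ B

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    Y = X @ rng.standard_normal((10, 3))                # noiseless linear relation
    W, P, Q = pls_fit(X, Y, n_components=5)
    print(np.linalg.norm(Y - pls_predict(X, W, P, Q)) / np.linalg.norm(Y))  # in-sample error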
International Workshop on Machine Learning for Signal Processing | 2008
Andrzej Cichocki; Anh Huy Phan; Cesar F. Caiafa
In this paper we propose a family of new algorithms for nonnegative matrix/tensor factorization (NMF/NTF) and sparse nonnegative coding and representation that has many potential applications in computational neuroscience, multi-sensory and multidimensional data analysis, and text mining. We have developed a class of local algorithms which are extensions of the hierarchical alternating least squares (HALS) algorithms we proposed previously. For these purposes, we have performed simultaneous constrained minimization of a set of robust cost functions called alpha and beta divergences. Our algorithms are locally stable and work well for NMF-based blind source separation (BSS), not only in the over-determined case but also in the under-determined (over-complete) case (i.e., for a system with fewer sensors than sources), provided the data are sufficiently sparse. The NMF learning rules are extended and generalized to N-th-order nonnegative tensor factorization (NTF). Moreover, the new algorithms can be accommodated to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the validity and high performance of the developed algorithms, especially when the multi-layer hierarchical approach is used.
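For orientation, the following NumPy sketch shows the basic local (column-wise/row-wise) HALS updates for plain least-squares NMF, Y ≈ WH with W, H ≥ 0. The paper's algorithms are more general (alpha/beta divergences, tensor and multi-layer extensions); the rank, iteration count, and initialization below are illustrative assumptions.

    import numpy as np

    def hals_nmf(Y, rank, n_iter=200, eps=1e-12):
        # hierarchical ALS: update one column of W (or row of H) at a time,
        # keeping all other components fixed, with nonnegativity projection
        m, n = Y.shape
        rng = np.random.default_rng(0)
        W = rng.random((m, rank))
        H = rng.random((rank, n))
        for _ in range(n_iter):
            for j in range(rank):
                R = Y - W @ H + np.outer(W[:, j], H[j])    # residual without component j
                W[:, j] = np.maximum(R @ H[j] / (H[j] @ H[j] + eps), 0.0)
            for j in range(rank):
                R = Y - W @ H + np.outer(W[:, j], H[j])
                H[j] = np.maximum(W[:, j] @ R / (W[:, j] @ W[:, j] + eps), 0.0)
        return W, H

    rng = np.random.default_rng(1)
    Y = rng.random((30, 4)) @ rng.random((4, 50))          # nonnegative rank-4 data
    W, H = hals_nmf(Y, rank=4)
    print(np.linalg.norm(Y - W @ H) / np.linalg.norm(Y))   # relative factorization error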
Signal Processing | 2008
Cesar F. Caiafa; Emanuele Salerno; Araceli N. Proto; L. Fiumi
We approach the estimation of material percentages per pixel (endmember fractional abundances) in hyperspectral remote-sensed images as a blind source separation problem. This task is commonly known as spectral unmixing. Classical techniques require knowledge of the existing materials and their spectra, which is an unrealistic assumption in most cases. In contrast to recently presented blind techniques based on independent component analysis, we implement here a dependent component analysis strategy, namely the MaxNG (maximum non-Gaussianity) algorithm, which is capable of separating even strongly dependent signals. We prove that, when the abundances satisfy a separability condition, they can be extracted by searching for the local maxima of non-Gaussianity. We also provide theoretical and experimental evidence indicating that this condition holds true for endmember abundances. In addition, we discuss the implementation of MaxNG in a noisy scenario, and we introduce a new technique for removing scale ambiguities from the estimated sources as well as a new fast algorithm for computing a Parzen-window-based NG measure. We compare MaxNG to commonly used independent component analysis algorithms such as FastICA and JADE. We analyze the efficiency of MaxNG in terms of the number of sensor channels, the number of available samples, and other factors, by testing it on synthetically generated as well as real data. Finally, we present some examples of the application of our technique to real images captured by the MIVIS airborne imaging spectrometer. Our results show that MaxNG is a good tool for spectral unmixing in a blind scenario.
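The sketch below conveys the flavor of non-Gaussianity-driven separation: a Parzen-window estimate of a projected signal's density is compared with a Gaussian, and a crude random search looks for the most non-Gaussian projection direction. The bandwidth rule, grid, and search strategy are illustrative assumptions and differ from the optimized MaxNG implementation described in the paper.

    import numpy as np

    def non_gaussianity(s, n_points=200):
        # toy NG measure: squared L2 distance between a Parzen (Gaussian-kernel)
        # density estimate of the standardized signal and the standard Gaussian pdf
        s = (s - s.mean()) / s.std()
        h = 1.06 * len(s) ** (-0.2)                        # Silverman's rule of thumb
        x = np.linspace(-4, 4, n_points)
        kernel = np.exp(-0.5 * ((x[:, None] - s[None, :]) / h) ** 2)
        parzen = kernel.mean(axis=1) / (h * np.sqrt(2 * np.pi))
        gauss = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
        return ((parzen - gauss) ** 2).sum() * (x[1] - x[0])

    def maxng_direction(X, n_candidates=2000, seed=0):
        # crude global search over random unit vectors for the direction whose
        # projection of the mixed data X (channels x samples) is most non-Gaussian
        rng = np.random.default_rng(seed)
        best_w, best_ng = None, -np.inf
        for _ in range(n_candidates):
            w = rng.standard_normal(X.shape[0])
            w /= np.linalg.norm(w)
            ng = non_gaussianity(w @ X)
            if ng > best_ng:
                best_w, best_ng = w, ng
        return best_w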
Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery | 2013
Cesar F. Caiafa; Andrzej Cichocki
Compressed sensing (CS) comprises a set of relatively new techniques that exploit the underlying structure of data sets allowing their reconstruction from compressed versions or incomplete information. CS reconstruction algorithms are essentially nonlinear, demanding heavy computation overhead and large storage memory, especially in the case of multidimensional signals. Excellent review papers discussing CS state-of-the-art theory and algorithms already exist in the literature, which mostly consider data sets in vector forms. In this paper, we give an overview of existing techniques with special focus on the treatment of multidimensional signals (tensors). We discuss recent trends that exploit the natural multidimensional structure of signals (tensors) achieving simple and efficient CS algorithms. The Kronecker structure of dictionaries is emphasized and its equivalence to the Tucker tensor decomposition is exploited allowing us to use tensor tools and models for CS. Several examples based on real world multidimensional signals are presented, illustrating common problems in signal processing such as the recovery of signals from compressed measurements for magnetic resonance imaging (MRI) signals or for hyperspectral imaging, and the tensor completion problem (multidimensional inpainting). WIREs Data Mining Knowl Discov 2013, 3:355–380. doi: 10.1002/widm.1108
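The Kronecker-dictionary/Tucker equivalence mentioned above can be checked numerically in a few lines: vectorizing a Tucker product S ×1 D1 ×2 D2 ×3 D3 (column-major) gives the same result as multiplying vec(S) by D3 ⊗ D2 ⊗ D1. The dimensions below are arbitrary, and the helper mode_n_product is a standard mode-n product, not code from the paper.

    import numpy as np

    def mode_n_product(T, M, mode):
        # mode-n product: multiply tensor T by matrix M along the given mode
        T = np.moveaxis(T, mode, 0)
        shape = T.shape
        out = M @ T.reshape(shape[0], -1)
        return np.moveaxis(out.reshape((M.shape[0],) + shape[1:]), 0, mode)

    rng = np.random.default_rng(0)
    S = rng.standard_normal((3, 4, 5))                     # Tucker core
    D1, D2, D3 = (rng.standard_normal((d, r)) for d, r in [(6, 3), (7, 4), (8, 5)])

    X = mode_n_product(mode_n_product(mode_n_product(S, D1, 0), D2, 1), D3, 2)
    lhs = X.flatten(order='F')                             # vec(X), column-major
    rhs = np.kron(D3, np.kron(D2, D1)) @ S.flatten(order='F')
    print(np.allclose(lhs, rhs))                           # Kronecker/Tucker equivalence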
IEEE Transactions on Signal Processing | 2015
Cesar F. Caiafa; Andrzej Cichocki
In the framework of multidimensional compressed sensing (CS), we introduce an analytical reconstruction formula that allows one to recover an N-th-order data tensor X ∈ ℝ^{I_1 × I_2 × ⋯ × I_N} from a reduced set of multi-way compressive measurements by exploiting its low multilinear-rank structure. Moreover, we show that an interesting property of multi-way measurements allows us to build the reconstruction from compressive linear measurements taken only in two selected modes, independently of the tensor order N. In addition, it is proved that, in the matrix case and in a particular case with 3rd-order tensors where the same 2D sensing operator is applied to all mode-3 slices, the proposed reconstruction X_τ is stable in the sense that the approximation error is comparable to the one provided by the best low-multilinear-rank approximation, where τ is a threshold parameter that controls the approximation error. Through the analysis of the upper bound of the approximation error we show that, in the 2D case, an optimal value of the threshold parameter τ = τ0 > 0 exists, which is confirmed by our simulation results. On the other hand, our experiments on 3D datasets show that very good reconstructions are obtained using τ = 0, which means that this parameter does not need to be tuned. Our extensive simulation results demonstrate the stability and robustness of the method when it is applied to real-world 2D and 3D signals. A comparison with state-of-the-art sparsity-based CS methods specialized for multidimensional signals is also included. A very attractive characteristic of the proposed method is that it provides a direct computation, i.e., it is non-iterative in contrast to all existing sparsity-based CS algorithms, thus providing very fast computations, even for large datasets.
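In the matrix (2D) special case, the flavor of such a direct, non-iterative reconstruction can be demonstrated in a few NumPy lines: a rank-r matrix is recovered exactly from compressive measurements taken in its two modes via a pseudo-inverse formula. This toy check omits noise and the threshold parameter τ, and the sensing sizes are arbitrary assumptions; it is not the paper's general tensor formula.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r = 50, 60, 4
    X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank-r signal

    A = rng.standard_normal((10, m))             # mode-1 sensing matrix (10 >= r)
    B = rng.standard_normal((n, 12))             # mode-2 sensing matrix (12 >= r)
    Y1 = A @ X                                   # compressive measurements of the rows' mode
    Y2 = X @ B                                   # compressive measurements of the columns' mode

    X_hat = Y2 @ np.linalg.pinv(A @ Y2) @ Y1     # direct, non-iterative reconstruction
    print(np.allclose(X_hat, X, atol=1e-8))      # exact recovery in the noiseless rank-r case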
International Conference on Acoustics, Speech, and Signal Processing | 2012
Cesar F. Caiafa; Andrzej Cichocki
In this paper, we consider sparse representations of multidimensional signals (tensors) by generalizing the one-dimensional case (vectors). A new greedy algorithm, namely the Tensor-OMP algorithm, is proposed to compute a block-sparse representation of a tensor with respect to a Kronecker basis, where the nonzero coefficients are restricted to lie within a sub-tensor (block). Simulation examples demonstrate the advantage of exploiting the Kronecker structure together with the block-sparsity property, yielding faster and more precise sparse representations of tensors than those obtained with the classical OMP (orthogonal matching pursuit) algorithm.
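One reason Kronecker-aware greedy algorithms are fast is that the atom-correlation step never needs the full Kronecker dictionary: correlating a residual R with every atom of D2 ⊗ D1 equals D1^T R D2. The short NumPy check below verifies this identity with arbitrary sizes; it illustrates the computational trick, not the Tensor-OMP code itself.

    import numpy as np

    rng = np.random.default_rng(0)
    D1 = rng.standard_normal((20, 40))           # mode-1 dictionary
    D2 = rng.standard_normal((20, 40))           # mode-2 dictionary
    R = rng.standard_normal((20, 20))            # a residual in matrix form

    # correlations with every Kronecker atom, computed two ways
    slow = (np.kron(D2, D1).T @ R.flatten(order='F')).reshape(40, 40, order='F')
    fast = D1.T @ R @ D2                         # never forms the big Kronecker matrix
    print(np.allclose(slow, fast))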
Physica A: Statistical Mechanics and its Applications | 2010
Leonidas Facundo Caram; Cesar F. Caiafa; Araceli N. Proto; Marcel Ausloos
The dynamic behavior of a multiagent system in which the agent size s_i is variable is studied using a Lotka–Volterra approach. Here, the agent size represents the fraction of a given market that an agent is able to capture (its market share). A Lotka–Volterra system of equations for prey–predator problems is considered, with the competition factor related to the difference in size between the agents in a one-on-one competition. This mechanism introduces a natural self-organized dynamic competition among agents. In the competition factor, a parameter σ is introduced to scale the intensity of agent-size similarity, which varies in each iteration cycle. The fixed points of this system are found analytically and their stability is analyzed for small systems (with n = 5 agents). We have found that different scenarios are possible, from chaotic to non-chaotic motion with cluster formation, as a function of the parameter σ and depending on the initial conditions imposed on the system. The aim of the present contribution is to show how a realistic though minimalist nonlinear dynamics model can be used to describe market competition (companies, brokers, decision makers), among other opinion-maker communities.
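A toy simulation in the spirit of the model can be written in a few NumPy lines: logistic growth of each agent's share plus a competition term weighted by a similarity kernel of width σ. The Gaussian kernel, growth term, and parameter values below are illustrative assumptions, not the exact equations of the paper.

    import numpy as np

    def simulate(n=5, sigma=0.3, growth=1.0, steps=2000, dt=0.01, seed=0):
        # toy competitive Lotka-Volterra dynamics in which the competition between
        # agents i and j is stronger the more similar their market shares are
        rng = np.random.default_rng(seed)
        s = rng.random(n) * 0.2                            # initial market shares
        history = np.empty((steps, n))
        for t in range(steps):
            diff = s[:, None] - s[None, :]
            K = np.exp(-(diff ** 2) / (2 * sigma ** 2))    # similarity-based competition
            np.fill_diagonal(K, 0.0)
            ds = s * (growth * (1 - s) - K @ s)            # logistic growth + competition
            s = np.clip(s + dt * ds, 0.0, None)
            history[t] = s
        return history

    shares = simulate(sigma=0.3)
    print(shares[-1])                                      # final market shares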
Neural Computation | 2009
Cesar F. Caiafa; Andrzej Cichocki
In this letter, we propose a new algorithm for estimating sparse nonnegative sources from a set of noisy linear mixtures. In particular, we consider difficult situations with high noise levels and more sources than sensors (the underdetermined case). We show that when sources are very sparse in time and overlap at some locations, they can be recovered even with a very low signal-to-noise ratio and using many fewer sensors than sources. A theoretical analysis based on Bayesian estimation tools is included, showing strong connections with algorithms in related areas of research such as ICA, NMF, FOCUSS, and sparse representation of data with overcomplete dictionaries. Our algorithm uses a Bayesian approach, modeling sparse signals through mixed-state random variables. This new model for priors imposes ℓ0-norm-based sparsity. We start our analysis with the case of nonoverlapped sources (1-sparse), which allows us to simplify the search for the posterior maximum and avoid a combinatorial search. General algorithms for the overlapped cases, such as 2-sparse and k-sparse sources, are derived by applying the algorithm for 1-sparse signals recursively. Additionally, a combination of our MAP algorithm with the NN-KSVD algorithm is proposed for estimating the mixing matrix and the sources simultaneously in a truly blind fashion. A complete set of simulation results is included, showing the performance of our algorithm.
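The 1-sparse starting point of the analysis can be illustrated with a toy MAP-style estimator: for each time sample, every single-active-source hypothesis (and the all-zero one) is scored by residual energy plus an ℓ0-style penalty, and the best one is kept. The penalty form and names below are illustrative assumptions; the paper's mixed-state priors and recursive k-sparse extension are richer.

    import numpy as np

    def map_1sparse(X, A, penalty=1.0):
        # toy MAP estimator for 1-sparse nonnegative sources: for each column of the
        # mixtures X, test every single-source hypothesis and the all-zero hypothesis
        m, T = X.shape
        n = A.shape[1]
        S = np.zeros((n, T))
        col_norms = (A ** 2).sum(axis=0)
        for t in range(T):
            x = X[:, t]
            best_cost, best = x @ x, (None, 0.0)           # all-zero hypothesis
            for j in range(n):
                a = max(A[:, j] @ x / col_norms[j], 0.0)   # nonnegative LS amplitude
                r = x - a * A[:, j]
                cost = r @ r + penalty * (a > 0)           # residual + l0-style penalty
                if cost < best_cost:
                    best_cost, best = cost, (j, a)
            if best[0] is not None:
                S[best[0], t] = best[1]
        return S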