Rafael E. Carrillo
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by Rafael E. Carrillo.
International Conference on Acoustics, Speech, and Signal Processing | 2011
Luisa F. Polania; Rafael E. Carrillo; Manuel Blanco-Velasco; Kenneth E. Barner
Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals that enables sampling rates significantly below the classical Nyquist rate. Based on the fact that electrocardiogram (ECG) signals can be approximated by a linear combination of a few coefficients taken from a wavelet basis, we propose a CS-based approach for ECG signal compression. ECG signals generally show redundancy between adjacent heartbeats due to their quasi-periodic structure. We show that this redundancy implies a high fraction of common support between consecutive heartbeats. The contribution of this paper lies in the use of distributed compressed sensing to exploit the common support between samples of jointly sparse adjacent beats. Simulation results suggest that compressed sensing should be considered as a plausible methodology for ECG compression.
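The common-support observation can be checked directly on data. The sketch below is a minimal illustration, not the paper's experiment: two synthetic QRS-like pulses stand in for adjacent heartbeats, the wavelet ("db4"), decomposition level, and number of retained coefficients are arbitrary choices, and PyWavelets supplies the transform.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)
pulse = np.exp(-((t - 0.5) ** 2) / 0.002)              # crude QRS-like pulse
beat1 = pulse + 0.01 * rng.standard_normal(t.size)     # "adjacent" beats differ slightly
beat2 = np.roll(pulse, 3) + 0.01 * rng.standard_normal(t.size)

def wavelet_support(x, keep=30):
    """Indices of the `keep` largest-magnitude wavelet coefficients of x."""
    coeffs = np.concatenate(pywt.wavedec(x, "db4", level=5))
    return set(np.argsort(np.abs(coeffs))[-keep:])

s1, s2 = wavelet_support(beat1), wavelet_support(beat2)
print("common-support fraction:", len(s1 & s2) / len(s1))
```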
IEEE Journal of Selected Topics in Signal Processing | 2010
Rafael E. Carrillo; Kenneth E. Barner; Tuncer C. Aysal
Recent results in compressed sensing show that a sparse or compressible signal can be reconstructed from a few incoherent measurements. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are typically developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with current reconstruction algorithms, fail to recover a close approximation of the signal. In this paper, we propose robust methods for sampling and reconstructing sparse signals in the presence of impulsive noise. To address impulsive noise embedded in the underlying signal prior to the measurement process, we propose a robust nonlinear measurement operator based on the weighted myriad estimator. In addition, we introduce a geometric optimization problem, based on L1 minimization with a Lorentzian norm constraint on the residual error, to recover sparse signals from noisy measurements. Analysis of the proposed methods shows that, in impulsive environments where the noise possesses infinite variance, the reconstruction error remains finite and the methods successfully reconstruct the desired signal. Simulations demonstrate that the proposed methods significantly outperform commonly employed compressed sensing sampling and reconstruction techniques in impulsive environments, while providing comparable performance in less demanding, light-tailed environments.
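For readers unfamiliar with the two robust ingredients named above, the sketch below gives toy, self-written implementations of a Lorentzian-type norm and a sample myriad (computed by brute-force grid search rather than a fixed-point routine); the scale parameters and the test residual are arbitrary and are not the authors' settings.

```python
import numpy as np

def lorentzian_norm(r, gamma=1.0):
    """||r||_{LL2,gamma} = sum_i log(1 + r_i^2 / gamma^2); grows only logarithmically for outliers."""
    return np.sum(np.log1p((r / gamma) ** 2))

def sample_myriad(x, k=1.0, num=2001):
    """argmin_beta sum_i log(k^2 + (x_i - beta)^2), found here by brute-force grid search."""
    grid = np.linspace(x.min(), x.max(), num)
    costs = [np.sum(np.log(k ** 2 + (x - b) ** 2)) for b in grid]
    return grid[int(np.argmin(costs))]

residual = np.array([0.1, -0.2, 0.05, 50.0])          # one impulsive outlier
print("squared l2 cost :", np.sum(residual ** 2))     # dominated by the outlier
print("Lorentzian cost :", lorentzian_norm(residual)) # outlier contributes ~log(2501)
print("sample myriad   :", sample_myriad(residual))   # stays near the bulk of the data
```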
Monthly Notices of the Royal Astronomical Society | 2012
Rafael E. Carrillo; Jason D. McEwen; Yves Wiaux
We propose a novel algorithm for image reconstruction in radio interferometry. The ill-posed inverse problem associated with the incomplete Fourier sampling identified by the visibility measurements is regularized by the assumption of average signal sparsity over representations in multiple wavelet bases. The algorithm, defined in the versatile framework of convex optimization, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We show through simulations that the proposed approach outperforms state-of-the-art imaging methods in the field, which are based on the assumption of signal sparsity in a single basis only.
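A minimal sketch of the sparsity-averaging ingredient only, assuming a handful of PyWavelets bases stand in for the full SARA dictionary and using the standard delta / (delta + |coefficient|) reweighting rule; it builds the concatenated analysis coefficients and the weights for the next reweighted problem, but it is not the SARA solver itself.

```python
import numpy as np
import pywt

def analysis_coeffs(x, bases=("haar", "db2", "db4", "db8")):
    """Concatenated wavelet analysis coefficients over several bases (normalised)."""
    parts = [np.concatenate(pywt.wavedec(x, b, level=4)) for b in bases]
    return np.concatenate(parts) / np.sqrt(len(bases))

def reweight(coeffs, delta=1e-3):
    """Weights delta / (delta + |alpha|) for the next reweighted-l1 analysis problem."""
    return delta / (delta + np.abs(coeffs))

x = np.zeros(256)
x[40:60] = 1.0                                        # piecewise-constant test signal
alpha = analysis_coeffs(x)
w = reweight(alpha)
print("dictionary coefficients:", alpha.size, "| weight range:", w.min(), "-", w.max())
```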
Monthly Notices of the Royal Astronomical Society | 2014
Rafael E. Carrillo; Jason D. McEwen; Yves Wiaux
In a recent article series, the authors have promoted convex optimization algorithms for radio-interferometric imaging in the framework of compressed sensing, which leverages sparsity regularization priors for the associated inverse problem and defines a minimization problem for image reconstruction. This approach was shown, in theory and through simulations in a simple discrete visibility setting, to have the potential to significantly outperform CLEAN and its evolutions. In this work, we leverage the versatility of convex optimization in solving minimization problems to both handle realistic continuous visibilities and offer a highly parallelizable structure, paving the way to significant acceleration of the reconstruction and to scalability to high-dimensional data. The new algorithmic structure relies on the simultaneous-direction method of multipliers (SDMM) and contrasts with the current major-minor cycle structure of CLEAN and its evolutions, which in particular cannot handle the state-of-the-art minimization problems under consideration, where neither the regularization term nor the data term is a differentiable function. We release a beta version of an SDMM-based imaging software written in C and dubbed PURIFY (http://basp-group.github.io/purify/) that handles various sparsity priors, including our recent average sparsity approach SARA. We evaluate the performance of different priors through simulations in the continuous visibility setting, confirming the superiority of SARA.
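To make the SDMM structure concrete, here is a toy, self-contained iteration for a small l1-regularized least-squares problem split as f1(x) + f2(Ax), with each term handled through its own proximal step; the problem, the parameters lam and gamma, and the iteration count are placeholders, and this is not the PURIFY implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, lam, gamma = 64, 32, 0.05, 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
y = A @ x_true                                        # noiseless toy measurements

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
Q = np.linalg.inv(np.eye(n) + A.T @ A)                # (L1^T L1 + L2^T L2)^{-1} with L1 = I, L2 = A

z1, s1 = np.zeros(n), np.zeros(n)                     # splitting variables for f1 = lam*||.||_1
z2, s2 = np.zeros(m), np.zeros(m)                     # splitting variables for f2 = (1/2)||y - .||_2^2
for _ in range(300):
    x = Q @ ((z1 - s1) + A.T @ (z2 - s2))             # consensus (least-squares) step
    z1 = soft(x + s1, gamma * lam)                    # prox of gamma * lam * ||.||_1
    s1 += x - z1
    Ax = A @ x
    z2 = (Ax + s2 + gamma * y) / (1.0 + gamma)        # prox of gamma * (1/2)||y - .||_2^2
    s2 += Ax - z2
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```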
International Conference on Acoustics, Speech, and Signal Processing | 2010
Rafael E. Carrillo; Luisa F. Polania; Kenneth E. Barner
Recent work on modified compressed sensing (CS) shows that reconstruction of sparse or compressible signals with partially known support yields better results than traditional CS. In this paper, we extend these ideas by modifying three iterative algorithms to incorporate the known support in the recovery process. The performance and the effect of the prior information are studied through simulations. Results show that the modification improves the performance of the iterative algorithms, requiring fewer samples to yield an approximate reconstruction.
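As one possible instance of such a modification (an assumption on our part, not necessarily one of the three algorithms studied), the sketch below adapts iterative hard thresholding so that coefficients on the known part of the support are never discarded; sizes, step size, and iteration count are arbitrary.

```python
import numpy as np

def iht_known_support(y, A, k, T, iters=300):
    """Iterative hard thresholding that always retains the known support T."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2              # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + mu * A.T @ (y - A @ x)                # gradient step on (1/2)||y - Ax||^2
        free = np.setdiff1d(np.arange(g.size), T)
        extra = free[np.argsort(np.abs(g[free]))[-(k - len(T)):]]
        x = np.zeros_like(g)
        x[T] = g[T]                                   # known support is never thresholded away
        x[extra] = g[extra]                           # plus the k - |T| largest remaining entries
    return x

rng = np.random.default_rng(2)
m, n, k = 40, 100, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
supp = rng.choice(n, k, replace=False)
x_true = np.zeros(n)
x_true[supp] = rng.standard_normal(k)
y = A @ x_true
x_hat = iht_known_support(y, A, k, T=supp[:4])        # half the support assumed known
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```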
IEEE Journal of Biomedical and Health Informatics | 2015
Luisa F. Polania; Rafael E. Carrillo; Manuel Blanco-Velasco; Kenneth E. Barner
Recent results in telecardiology show that compressed sensing (CS) is a promising tool to lower energy consumption in wireless body area networks for electrocardiogram (ECG) monitoring. However, the performance of current CS-based algorithms, in terms of compression rate and reconstruction quality of the ECG, still falls short of the performance attained by state-of-the-art wavelet-based algorithms. In this paper, we propose to exploit the structure of the wavelet representation of the ECG signal to boost the performance of CS-based methods for compression and reconstruction of ECG signals. More precisely, we incorporate prior information about the wavelet dependencies across scales into the reconstruction algorithms and exploit the high fraction of common support of the wavelet coefficients of consecutive ECG segments. Experimental results utilizing the MIT-BIH Arrhythmia Database show that significant performance gains, in terms of compression rate and reconstruction quality, can be obtained by the proposed algorithms compared to current CS-based methods.
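The parent-child dependency across wavelet scales can be visualised in a few lines. The sketch below is illustration only: a synthetic spike train stands in for an ECG segment, a Haar decomposition from PyWavelets is used so that the parent of detail coefficient i sits at index i // 2, and the magnitude threshold is an arbitrary choice.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
x = np.zeros(512)
x[::64] = 1.0                                         # crude periodic "beats"
x += 0.01 * rng.standard_normal(x.size)

coeffs = pywt.wavedec(x, "haar", level=4)             # [cA4, cD4, cD3, cD2, cD1]
child, parent = coeffs[-1], coeffs[-2]                # finest and next-finest detail scales
big_child = np.abs(child) > 0.1
big_parent = np.abs(parent) > 0.1
linked = big_parent[np.arange(child.size) // 2]       # parent of child i sits at index i // 2
print("large children with a large parent:",
      (big_child & linked).sum(), "of", big_child.sum())
```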
IEEE Signal Processing Letters | 2013
Rafael E. Carrillo; Jason D. McEwen; Dimitri Van De Ville; Jean-Philippe Thiran; Yves Wiaux
We discuss a novel sparsity prior for compressive imaging in the context of the theory of compressed sensing with coherent redundant dictionaries, based on the observation that natural images exhibit strong average sparsity over multiple coherent frames. We test our prior and the associated algorithm, based on an analysis reweighted formulation, through extensive numerical simulations on natural images for spread spectrum and random Gaussian acquisition schemes. Our results show that average sparsity outperforms state-of-the-art priors that promote sparsity in a single orthonormal basis or redundant frame, or that promote gradient sparsity. Code and test data are available at https://github.com/basp-group/sopt.
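For reference, a spread-spectrum-type acquisition operator of the kind mentioned above can be sketched as random +/-1 modulation followed by an orthonormal FFT and random subsampling; the sizes, modulation sequence, and mask below are arbitrary, and the check only verifies that the forward/adjoint pair is consistent.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 4096, 1024                                     # signal length and number of measurements
chirp = rng.choice([-1.0, 1.0], size=n)               # random +/-1 modulation sequence
mask = rng.choice(n, size=m, replace=False)           # retained Fourier samples

def forward(x):
    """y = M F C x: modulate, orthonormal FFT, subsample."""
    return np.fft.fft(chirp * x, norm="ortho")[mask]

def adjoint(y):
    """x = C* F* M* y: zero-fill, inverse orthonormal FFT, demodulate."""
    z = np.zeros(n, dtype=complex)
    z[mask] = y
    return chirp * np.fft.ifft(z, norm="ortho")

x = rng.standard_normal(n)
lhs = np.vdot(forward(x), forward(x)).real            # <Ax, Ax>
rhs = np.vdot(x, adjoint(forward(x))).real            # <x, A*Ax>
print("adjoint consistency:", np.isclose(lhs, rhs))
```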
Monthly Notices of the Royal Astronomical Society | 2016
Alexandru Onose; Rafael E. Carrillo; Audrey Repetti; Jason D. McEwen; Jean-Philippe Thiran; Jean-Christophe Pesquet; Yves Wiaux
In the context of next-generation radio telescopes, like the Square Kilometre Array, the efficient processing of large-scale datasets is extremely important. Convex optimisation approaches under the compressive sensing framework have recently emerged, providing both enhanced image reconstruction quality and scalability to increasingly large datasets. We focus herein mainly on scalability and propose two new convex optimisation algorithmic structures able to solve the convex optimisation tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularisation function, in particular the well-studied l1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability in terms of memory and computational requirements. One of them also exploits randomisation, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our Matlab code is available online on GitHub.
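A schematic of the block-parallel forward-backward idea, with a random subset of data blocks used per iteration to mimic the randomised variant; the toy problem, step size, regularisation parameter, and block count are placeholders rather than the authors' settings, and each block gradient could in principle be computed on a different node.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, nblocks, lam = 128, 96, 4, 0.01
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = 1.0
y = A @ x_true
A_blocks, y_blocks = np.array_split(A, nblocks), np.array_split(y, nblocks)

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
step = 1.0 / np.linalg.norm(A, 2) ** 2                # 1 / Lipschitz constant of the full gradient

x = np.zeros(n)
for _ in range(500):
    picked = rng.choice(nblocks, size=2, replace=False)       # random block selection
    grad = sum(A_blocks[b].T @ (A_blocks[b] @ x - y_blocks[b]) for b in picked)
    grad *= nblocks / picked.size                             # rescale toward the full gradient
    x = soft(x - step * grad, step * lam)                     # forward (gradient) + backward (prox) step
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```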
Pattern Recognition | 2013
Yin Zhou; Kai Liu; Rafael E. Carrillo; Kenneth E. Barner; Fouad Kiamilev
In this paper, we propose a novel sparse-representation-based framework for classifying complicated human gestures captured as multivariate time series (MTS). The novel feature extraction strategy, CovSVDK, overcomes the problem of inconsistent lengths among MTS data and is robust to the large variability within human gestures. Compared with PCA and LDA, the CovSVDK features are more effective in preserving discriminative information and are more efficient to compute over large-scale MTS datasets. In addition, we propose a new approach to kernelize sparse representation. Through kernelization, dictionary atoms become more separable for sparse coding algorithms, and nonlinear relationships among data are conveniently transformed into linear relationships in the kernel space, which leads to more effective classification. Finally, the superiority of the proposed framework is demonstrated through extensive experiments.
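To illustrate only why covariance-type descriptors sidestep the inconsistent-length problem, the snippet below maps two MTS of different durations to features of identical size; this is a generic covariance descriptor, not the CovSVDK feature of the paper, and the data are random stand-ins for gesture recordings.

```python
import numpy as np

rng = np.random.default_rng(6)
mts_a = rng.standard_normal((3, 140))                 # 3 channels, 140 samples
mts_b = rng.standard_normal((3, 215))                 # same number of channels, longer recording

def cov_descriptor(mts):
    """Fixed-size feature: upper triangle of the channel covariance matrix."""
    C = np.cov(mts)
    return C[np.triu_indices_from(C)]

print(cov_descriptor(mts_a).shape, cov_descriptor(mts_b).shape)   # both have the same length
```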
Conference on Information Sciences and Systems | 2009
Rafael E. Carrillo; Kenneth E. Barner
Finding sparse solutions of underdetermined systems of linear equations is a problem of significant importance in signal processing and statistics. In this paper, we study an iterative reweighted least squares (IRLS) approach that finds sparse solutions of underdetermined systems of equations based on a smooth approximation of the L0 norm, and we extend the method to find sparse solutions from noisy measurements. Analysis of the proposed methods shows that weaker conditions on the sensing matrices are required. Simulation results demonstrate that the proposed method requires fewer samples than existing methods, while maintaining a reconstruction error of the same order at lower computational complexity.
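A generic IRLS sketch in the noiseless setting, using a smoothed-l0-style weighting w_i = (x_i^2 + eps)^(-1) with a gradually decreasing eps; the epsilon schedule, problem sizes, and iteration count are arbitrary, and the noisy extension discussed in the abstract is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, k = 30, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x = A.T @ np.linalg.solve(A @ A.T, y)                 # minimum-norm initialisation
eps = 1.0
for _ in range(50):
    q = x ** 2 + eps                                  # q_i = 1 / w_i for the smoothed-l0 weight
    Aq = A * q                                        # equals A @ diag(q)
    x = q * (A.T @ np.linalg.solve(Aq @ A.T, y))      # weighted least-norm solution of Ax = y
    eps = max(eps * 0.7, 1e-8)                        # gradually sharpen the l0 surrogate
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```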