Publications


Featured research published by Laurent Albera.


IEEE Signal Processing Magazine | 2008

ICA: a potential tool for BCI systems

Amar Kachenoura; Laurent Albera; Lotfi Senhadji; Pierre Comon

Several studies dealing with independent component analysis (ICA)-based brain-computer interface (BCI) systems have been reported. Most of them have explored only a limited number of ICA methods, mainly FastICA and INFOMAX. The aim of this article is to help researchers in the BCI community, especially those who are not familiar with ICA techniques, choose an appropriate ICA method. For this purpose, the concept of ICA is reviewed and different measures of statistical independence are reported. Then, the application of these measures is illustrated through a brief description of algorithms widely used in the ICA community, namely SOBI, COM2, JADE, ICAR, FastICA, and INFOMAX. The implementation of these techniques in the BCI field is also explained. Finally, a comparative study of these algorithms, conducted on simulated electroencephalography (EEG) data, shows that an appropriate selection of an ICA algorithm may significantly improve the capabilities of BCI systems.
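As a concrete illustration of the kind of processing the surveyed algorithms perform, the sketch below unmixes a toy instantaneous mixture with FastICA, one of the algorithms reviewed in the article. It relies on scikit-learn's FastICA implementation (an assumption of this sketch, not a tool prescribed by the article), and the simulated signals and parameters are arbitrary.

```python
import numpy as np
from sklearn.decomposition import FastICA  # assumed available; any ICA implementation would do

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Three toy sources: an alpha-like rhythm, a slow square wave, a heavy-tailed artifact.
sources = np.c_[np.sin(2 * np.pi * 10 * t),
                np.sign(np.sin(2 * np.pi * 3 * t)),
                rng.laplace(size=t.size)]

A = rng.normal(size=(3, 3))            # unknown mixing matrix ("electrode" mixing)
X = sources @ A.T                      # observed mixtures, shape (n_samples, n_channels)

ica = FastICA(n_components=3, whiten="unit-variance", random_state=0)
S_hat = ica.fit_transform(X)           # estimated sources (up to permutation and scale)
A_hat = ica.mixing_                    # estimated mixing matrix
print(S_hat.shape, A_hat.shape)
```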


IEEE Transactions on Signal Processing | 2005

On the virtual array concept for higher order array processing

Pascal Chevalier; Laurent Albera; Anne Ferreol; Pierre Comon

For about two decades, many fourth-order (FO) array processing methods have been developed for both direction finding and blind identification of non-Gaussian signals. One of the main interests in using only FO cumulants instead of second-order (SO) ones in array processing applications lies in the increase of both the effective aperture and the number of sensors of the considered array, which gives rise to the FO Virtual Array concept presented elsewhere and allows, in particular, better resolution and the processing of more sources than sensors. To further increase the resolution and the number of sources that can be processed from a given array of sensors, new families of blind identification, source separation, and direction finding methods, operating at an order m = 2q (q ≥ 2) only, have been developed recently. In this context, the purpose of this paper is to provide some important insights into the mechanisms, and more particularly into both the resolution and the maximal processing capacity, of numerous 2qth-order array processing methods, of which the previous methods are part, by extending the Virtual Array concept to an arbitrary even order for several arrangements of the data statistics and for arrays with space, angular, and/or polarization diversity.
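The following numerical sketch illustrates the virtual array idea in its simplest fourth-order form, assuming a uniform linear array and the common cumulant arrangement in which the virtual steering vector is a(θ) ⊗ a(θ)*; the array geometry and angle are arbitrary choices, and the paper's general 2q-th order construction is not reproduced.

```python
import numpy as np

N = 5                         # physical sensors
d = 0.5                       # spacing in wavelengths
theta = np.deg2rad(20.0)

pos = d * np.arange(N)                          # physical sensor positions
a = np.exp(2j * np.pi * pos * np.sin(theta))    # physical steering vector

a4 = np.kron(a, a.conj())                       # FO virtual steering vector (N^2 entries)

# The virtual sensor positions are the differences p_i - p_j of physical positions.
virtual_pos = (pos[:, None] - pos[None, :]).ravel()
print("physical sensors:", N)
print("virtual components:", a4.size)
print("distinct virtual positions:", np.unique(virtual_pos).size)   # 2N - 1 for a ULA
```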


IEEE Transactions on Signal Processing | 2006

High-Resolution Direction Finding From Higher Order Statistics: The 2q-MUSIC Algorithm

Pascal Chevalier; Anne Ferreol; Laurent Albera

From the beginning of the 1980s, many second-order (SO) high-resolution direction-finding methods, such as the MUSIC method (or 2-MUSIC), have been developed mainly to process multisource environments efficiently. Despite their great interest, these methods suffer from serious drawbacks such as weak robustness to both modeling errors and the presence of a strong colored background noise whose spatial coherence is unknown, poor performance in the presence of several poorly angularly separated sources observed over a limited duration, and a maximum of N-1 sources that can be processed from an array of N sensors. Mainly to overcome these limitations, and in particular to increase both the resolution and the number of sources that can be processed from an array of N sensors, fourth-order (FO) high-resolution direction-finding methods have been developed from the end of the 1980s to process non-Gaussian sources, which are omnipresent in radio communications; among these, the 4-MUSIC method is the most popular. To increase even further the resolution, the robustness to modeling errors, and the number of sources that can be processed from a given array of sensors, and thus to minimize the number of sensors in operational contexts, we propose in this paper an extension of the MUSIC method to an arbitrary even order 2q (q ≥ 1), giving rise to the 2q-MUSIC methods. The performance analysis of these new methods reveals important new results for direction-finding applications, and in particular the superior performance, with respect to 2-MUSIC and 4-MUSIC, of the 2q-MUSIC methods with q > 2, despite their higher variance, when some resolution is required.
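For orientation, here is a minimal classical 2-MUSIC (q = 1) sketch on a simulated uniform linear array; the 2q-MUSIC methods proposed in the paper follow the same subspace logic but replace the sample covariance below with a matrix of 2q-th order cumulants. Array size, SNR, and the search grid are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)
N, T, d = 8, 2000, 0.5                         # sensors, snapshots, spacing (wavelengths)
doas = np.deg2rad([-10.0, 15.0])               # true source directions

pos = d * np.arange(N)
A = np.exp(2j * np.pi * np.outer(pos, np.sin(doas)))            # steering matrix
S = rng.normal(size=(2, T)) + 1j * rng.normal(size=(2, T))      # source signals
X = A @ S + 0.1 * (rng.normal(size=(N, T)) + 1j * rng.normal(size=(N, T)))

R = X @ X.conj().T / T                          # sample covariance (the q = 1 statistic)
_, eigvec = np.linalg.eigh(R)
En = eigvec[:, :N - len(doas)]                  # noise subspace (smallest eigenvalues first)

grid = np.deg2rad(np.linspace(-90, 90, 721))
a_grid = np.exp(2j * np.pi * np.outer(pos, np.sin(grid)))
pseudo = 1.0 / np.linalg.norm(En.conj().T @ a_grid, axis=0) ** 2   # MUSIC pseudospectrum

peaks, _ = find_peaks(pseudo)                   # local maxima of the pseudospectrum
top = peaks[np.argsort(pseudo[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top])))
```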


IEEE Transactions on Signal Processing | 2005

Fourth-order blind identification of underdetermined mixtures of sources (FOBIUM)

Anne Ferreol; Laurent Albera; Pascal Chevalier

For about two decades, numerous methods have been developed to blindly identify overdetermined (P ≤ N) mixtures of P statistically independent narrowband (NB) sources received by an array of N sensors. These methods exploit the information contained in the second-order (SO), the fourth-order (FO), or both the SO and FO statistics of the data. However, in practical situations, the probability of receiving more sources than sensors increases with the reception bandwidth, and the use of blind identification (BI) methods able to process underdetermined mixtures of sources, for which P > N, may be required. Although such methods have been developed over the past few years, they all present serious limitations in practical situations related to the radiocommunications context. For this reason, the purpose of this paper is to propose a new attractive BI method, exploiting the information contained in the FO data statistics only, that is able to process underdetermined mixtures of sources without the main limitations of the existing methods, provided that the sources have different trispectra and nonzero kurtosis with the same sign. A new performance criterion, able to quantify the identification quality of a given source and allowing the quantitative comparison of two BI methods for each source, is also proposed in the paper. Finally, an application of the proposed method is presented through the introduction of a powerful direction-finding method built from the blindly identified mixture matrix.
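The method itself is not reproduced here, but the sketch below shows how the fourth-order statistics it relies on can be estimated from data: a plain sample estimator of one common quadricovariance arrangement, Cum(x_i, x_j*, x_k, x_l*). Both the arrangement and the estimator are textbook choices taken as assumptions, not the paper's exact construction.

```python
import numpy as np

def quadricovariance(X):
    """Sample estimate of the N^2 x N^2 matrix of fourth-order cumulants
    Cum(x_i, x_j*, x_k, x_l*) from complex snapshots X of shape (N, T)."""
    N, T = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    R = Xc @ Xc.conj().T / T                                    # E[x x^H]
    C = Xc @ Xc.T / T                                           # E[x x^T]
    M = np.einsum('it,jt,kt,lt->ijkl', Xc, Xc.conj(), Xc, Xc.conj()) / T
    cum = (M
           - np.einsum('ij,kl->ijkl', R, R)                     # E[x_i x_j*] E[x_k x_l*]
           - np.einsum('ik,jl->ijkl', C, C.conj())              # E[x_i x_k]  E[x_j* x_l*]
           - np.einsum('il,jk->ijkl', R, R.conj()))             # E[x_i x_l*] E[x_j* x_k]
    return cum.reshape(N * N, N * N)

# Toy check: 4 sensors, 3 QPSK-like circular sources, random mixture.
rng = np.random.default_rng(2)
N, P, T = 4, 3, 5000
A = rng.normal(size=(N, P)) + 1j * rng.normal(size=(N, P))
S = np.sign(rng.normal(size=(P, T))) + 1j * np.sign(rng.normal(size=(P, T)))
Q = quadricovariance(A @ S)
print(Q.shape)                                                  # (16, 16)
```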


EURASIP Journal on Advances in Signal Processing | 2012

Removal of muscle artifact from EEG data: comparison between stochastic (ICA and CCA) and deterministic (EMD and wavelet-based) approaches

Doha Safieddine; Amar Kachenoura; Laurent Albera; Gwénaël Birot; Ahmad Karfoul; Anca Pasnicu; Arnaud Biraben; Fabrice Wendling; Lotfi Senhadji; Isabelle Merlet

Electroencephalographic (EEG) recordings are often contaminated with muscle artifacts. This disturbing myogenic activity not only strongly affects the visual analysis of EEG but also most surely impairs the results of EEG signal processing tools such as source localization. This article focuses on the particular context of the contamination of epileptic signals (interictal spikes) by muscle artifact, as EEG is a key diagnostic tool for this pathology. In this context, our aim was to compare the ability of two stochastic approaches of blind source separation, namely independent component analysis (ICA) and canonical correlation analysis (CCA), and of two deterministic approaches, namely empirical mode decomposition (EMD) and the wavelet transform (WT), to remove muscle artifacts from EEG signals. To quantitatively compare the performance of these four algorithms, epileptic spike-like EEG signals were simulated from two different source configurations and artificially contaminated with different levels of real EEG-recorded myogenic activity. The efficiency of CCA, ICA, EMD, and WT in correcting the muscular artifact was evaluated both by calculating the normalized mean-squared error between denoised and original signals and by comparing the results of source localization obtained from artifact-free as well as noisy signals, before and after artifact correction. Tests on real data recorded in an epileptic patient are also presented. The results obtained on simulated and real data show that EMD outperformed the three other algorithms for the denoising of data highly contaminated by muscular activity. For less noisy data, and when spikes arose from a single cortical source, the myogenic artifact was best corrected with CCA and ICA. When spikes originated from two distinct sources, either EMD or ICA offered the most reliable denoising result for highly noisy data, while WT offered the best denoising result for less noisy data. These results suggest that the performance of muscle artifact correction methods strongly depends on the level of data contamination and on the source configuration underlying the EEG signals. Finally, some insights into the numerical complexity of these four algorithms are given.
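As a small illustration of the evaluation criterion used in the study, the sketch below denoises a toy spike-like trace with wavelet soft-thresholding (via the third-party PyWavelets package) and scores it with a normalized mean-squared error. The wavelet, the threshold rule, and the simulated signal are illustrative assumptions, not the article's exact settings.

```python
import numpy as np
import pywt

def nmse(clean, denoised):
    """Normalized mean-squared error between a reference and a denoised signal."""
    return np.sum((clean - denoised) ** 2) / np.sum(clean ** 2)

def wavelet_denoise(x, wavelet="sym4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise level from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(x)))                 # universal threshold (assumed)
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

rng = np.random.default_rng(3)
t = np.linspace(0, 2, 1024)
spike = np.exp(-((t - 1.0) ** 2) / (2 * 0.01 ** 2))           # toy interictal-spike shape
noisy = spike + 0.2 * rng.normal(size=t.size)                 # toy "myogenic" contamination
print("NMSE before/after:", nmse(spike, noisy), nmse(spike, wavelet_denoise(noisy)))
```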


Signal Processing | 2011

Multivariate empirical mode decomposition and application to multichannel filtering

Julien Fleureau; Amar Kachenoura; Laurent Albera; Jean-Claude Nunes; Lotfi Senhadji

Empirical Mode Decomposition (EMD) is an emerging topic in signal processing research, applied in various practical fields due, in particular, to its data-driven filter bank properties. In this paper, a novel EMD approach called X-EMD (eXtended-EMD) is proposed, which allows for a straightforward decomposition of mono- and multivariate signals without any change in the core of the algorithm. Qualitative results illustrate the good behavior of the proposed algorithm whatever the signal dimension. Moreover, a comparative study of X-EMD with classical mono- and multivariate methods is presented and shows its competitiveness. We also show that X-EMD extends the filter bank properties enjoyed by monovariate EMD to the case of multivariate EMD. Finally, a practical application to multichannel sleep recordings is presented.
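A brief illustration of the data-driven filter-bank behaviour mentioned above, using plain monovariate EMD from the third-party PyEMD package (distributed as EMD-signal, an assumption of this sketch); this is only the baseline decomposition that X-EMD extends to the multivariate case, not the authors' algorithm.

```python
import numpy as np
from PyEMD import EMD   # assumed third-party dependency (pip install EMD-signal)

rng = np.random.default_rng(4)
fs = 256.0
x = rng.normal(size=2048)                     # white-noise input

imfs = EMD().emd(x)                           # intrinsic mode functions, coarsest last
for k, imf in enumerate(imfs):
    spec = np.abs(np.fft.rfft(imf))
    f_dom = np.fft.rfftfreq(imf.size, 1 / fs)[np.argmax(spec)]
    print(f"IMF {k + 1}: dominant frequency ≈ {f_dom:.1f} Hz")
# Successive IMFs occupy progressively lower frequency bands, i.e. EMD behaves
# like a data-driven, dyadic-like filter bank.
```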


NeuroImage | 2014

EEG extended source localization: Tensor-based vs. conventional methods

Hanna Becker; Laurent Albera; Pierre Comon; Martin Haardt; Gwénaël Birot; Fabrice Wendling; Martine Gavaret; Christian-George Bénar; Isabelle Merlet

The localization of brain sources based on EEG measurements is a topic that has attracted a lot of attention over the last decades, and many different source localization algorithms have been proposed. However, their performance is limited in the case of several simultaneously active brain regions and low signal-to-noise ratios. To overcome these problems, tensor-based preprocessing can be applied, which consists of constructing a space-time-frequency (STF) or space-time-wave-vector (STWV) tensor and decomposing it using the Canonical Polyadic (CP) decomposition. In this paper, we present a new algorithm for the accurate localization of extended sources based on the results of the tensor decomposition. Furthermore, we conduct a detailed study of the tensor-based preprocessing methods, including an analysis of their theoretical foundation, their computational complexity, and their performance on realistic simulated data in comparison to conventional source localization algorithms such as sLORETA, cortical LORETA (cLORETA), and 4-ExSo-MUSIC. Our objective is, on the one hand, to demonstrate the gain in performance that can be achieved by tensor-based preprocessing and, on the other hand, to point out the limits and drawbacks of this method. Finally, we validate the STF and STWV techniques on real measurements to demonstrate their usefulness for practical applications.
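The sketch below gives a compact picture of the STF preprocessing step: per-channel STFTs are stacked into a space x frequency x time tensor and fitted with a rank-1 Canonical Polyadic model using the third-party tensorly package. Tensor sizes, the rank, and the use of STFT magnitudes are illustrative assumptions; the paper's full pipeline, including the subsequent extended-source localization, is not reproduced.

```python
import numpy as np
from scipy.signal import stft
from tensorly.decomposition import parafac   # assumed third-party dependency

rng = np.random.default_rng(5)
fs, n_ch, n_samp = 256.0, 16, 1024
t = np.arange(n_samp) / fs

# Toy EEG: one oscillatory "source" with a fixed spatial pattern, plus noise.
topography = rng.normal(size=n_ch)
source = np.sin(2 * np.pi * 12 * t) * np.exp(-((t - 2.0) ** 2))
eeg = np.outer(topography, source) + 0.3 * rng.normal(size=(n_ch, n_samp))

# Space x frequency x time tensor from channel-wise STFT magnitudes.
f, frames, Z = stft(eeg, fs=fs, nperseg=128, axis=-1)
stf_tensor = np.abs(Z)                         # shape (n_ch, n_freq, n_frames)

weights, (space, freq, time) = parafac(stf_tensor, rank=1)   # CP decomposition
print(space.shape, freq.shape, time.shape)     # one factor vector per mode
```

In the paper's setting, the spatial factor extracted this way would then feed a dedicated extended-source localization step, which this sketch leaves out.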


IEEE Transactions on Signal Processing | 2005

ICAR: a tool for blind source separation using fourth-order statistics only

Laurent Albera; Anne Ferreol; Pascal Chevalier; Pierre Comon

The problem of blind separation of overdetermined mixtures of sources, that is, with fewer sources than (or as many sources as) sensors, is addressed in this paper. A new method, called Independent Component Analysis using Redundancies in the quadricovariance (ICAR), is proposed in order to process complex data. This method, which requires no whitening operation, exploits only some redundancies of a particular quadricovariance matrix of the data. Computer simulations demonstrate that ICAR generally offers good results and even outperforms classical methods in several situations: ICAR (i) succeeds in separating sources with low signal-to-noise ratios, (ii) does not require sources with different second-order and/or first-order spectral densities, (iii) is asymptotically unaffected by the presence of a Gaussian noise with unknown spatial correlation, and (iv) is not sensitive to an overestimation of the number of sources.
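To make the "redundancies of the quadricovariance" concrete, the short check below builds the model quadricovariance of a noiseless mixture under one common cumulant arrangement and verifies its low-rank Khatri-Rao structure, which is the kind of algebraic redundancy ICAR exploits; the mixture size and kurtosis values are arbitrary, and the method itself is not reimplemented.

```python
import numpy as np

rng = np.random.default_rng(6)
N, P = 4, 3                                          # sensors, sources
A = rng.normal(size=(N, P)) + 1j * rng.normal(size=(N, P))
kappa = np.array([-1.2, -0.8, 2.0])                  # source kurtoses (arbitrary values)

# Column-wise Kronecker (Khatri-Rao) product A ⊙ A*.
KR = np.column_stack([np.kron(A[:, p], A[:, p].conj()) for p in range(P)])

# Noiseless model quadricovariance for the arrangement Cum(x_i, x_j*, x_k*, x_l):
# Q = (A ⊙ A*) diag(kappa) (A ⊙ A*)^H, an N^2 x N^2 matrix of rank P.
Q = KR @ np.diag(kappa) @ KR.conj().T

print("size of Q:", Q.shape)
print("rank of Q:", np.linalg.matrix_rank(Q))        # equals P: only P sources contribute
```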


IEEE Transactions on Signal Processing | 2004

Second-order blind separation of first- and second-order cyclostationary sources: application to AM, FSK, CPFSK, and deterministic sources

Anne Ferreol; Pascal Chevalier; Laurent Albera

Most of the second-order (SO) and higher order (HO) blind source separation (BSS) methods developed over the last decade aim at blindly separating statistically independent sources that are assumed zero-mean, stationary, and ergodic. Nevertheless, in many situations of practical interest, such as radiocommunications contexts, the sources are nonstationary and very often cyclostationary (digital modulations). The behavior of the current SO and fourth-order (FO) cumulant-based BSS methods in the presence of cyclostationary sources has recently been analyzed in a previous paper by Ferréol and Chevalier, assuming zero-mean sources. However, some cyclostationary sources used in practical situations are not zero-mean but have a first-order (FIO) cyclostationarity property, which is in particular the case for some amplitude-modulated (AM) signals and for some nonlinearly modulated digital sources such as frequency shift keying (FSK) or some continuous phase frequency shift keying (CPFSK) sources. For such sources, the results presented in the previous paper by Ferréol and Chevalier no longer hold, and the purpose of this paper is to analyze the behavior of, and to propose adaptations of, the current SO BSS methods for sources that are both FIO and SO cyclostationary and cyclo-ergodic. An extension to deterministic sources is also proposed in the paper.
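The first-order (FIO) cyclostationarity the paper deals with can be seen numerically: for a toy non-continuous-phase 2-FSK signal (the parameters below are arbitrary and not the paper's signal model), the empirical cyclic mean m_x(α) = (1/T) Σ_t x(t) e^{-j2παt} is clearly non-zero at the two tone frequencies and negligible elsewhere, unlike for a zero-mean stationary source.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, sym_rate = 1000.0, 50.0                      # sample rate (Hz), symbols per second
f1, f2 = 100.0, 200.0                            # FSK tone frequencies (Hz)
n_sym = 400
sps = int(fs / sym_rate)                         # samples per symbol

tones = rng.choice([f1, f2], size=n_sym)         # random tone per symbol
f_inst = np.repeat(tones, sps)                   # instantaneous frequency
t = np.arange(f_inst.size) / fs
x = np.exp(2j * np.pi * f_inst * t)              # FIO-cyclostationary FSK-like signal

def cyclic_mean(x, t, alpha):
    """Empirical first-order cyclic statistic at cycle frequency alpha."""
    return np.mean(x * np.exp(-2j * np.pi * alpha * t))

for alpha in (f1, f2, 137.0):                    # 137 Hz: an off-cycle frequency
    print(f"|m_x({alpha:.0f} Hz)| = {abs(cyclic_mean(x, t, alpha)):.3f}")
```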


Journal of Neuroscience Methods | 2013

Automatic detection of fast ripples

Gwénaël Birot; Amar Kachenoura; Laurent Albera; Christian Bénar; Fabrice Wendling

OBJECTIVE: We propose a new method for automatic detection of fast ripples (FRs), which have been identified as a potential biomarker of epileptogenic processes. METHODS: This method is based on a two-stage procedure: (i) global detection of events of interest (EOIs, defined as transient signals accompanied by an energy increase in the frequency band of interest, 250-600 Hz) and (ii) local energy-versus-frequency analysis of detected EOIs for classification as FRs, interictal epileptic spikes, or artifacts. For this second stage, two variants were implemented, based either on the Fourier or on the wavelet transform. The method was evaluated on simulated and real depth-EEG signals (human, animal). The performance criterion was based on receiver operating characteristics. RESULTS: The proposed detector showed high performance in terms of sensitivity and specificity. CONCLUSIONS: As it is designed to specifically detect FRs, the method outperforms any method simply based on the detection of energy changes in high-pass filtered signals and avoids spurious detections caused by sharp transient events often present in raw signals. SIGNIFICANCE: In most epilepsy surgery units, huge data sets are generated during pre-surgical evaluation. We think that the proposed detection method can dramatically decrease the workload involved in assessing the presence of FRs in intracranial EEG.
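A simplified two-stage sketch in the spirit of the detector described above: a band-limited energy detector flags events of interest, and a toy spectral-peak test separates fast-ripple-like bursts from lower-frequency transients. The 250-600 Hz band comes from the abstract; the filter order, window length, thresholds, and Welch-based classification are illustrative assumptions rather than the paper's tuned procedure.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def detect_eois(x, fs, band=(250.0, 600.0), win_s=0.02, k=5.0):
    """Stage 1: flag samples where the energy in the FR band exceeds
    k times its median over the recording."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x)
    win = int(win_s * fs)
    energy = np.convolve(xf ** 2, np.ones(win) / win, mode="same")
    return energy > k * np.median(energy)

def classify_eoi(segment, fs, band=(250.0, 600.0)):
    """Stage 2 (toy version): call a segment a fast ripple if its spectral
    peak falls inside the FR band; spikes/artifacts peak lower."""
    f, pxx = welch(segment, fs=fs, nperseg=min(256, segment.size))
    f_peak = f[np.argmax(pxx)]
    return band[0] <= f_peak <= band[1]

# Toy depth-EEG trace: background noise plus one 400 Hz burst ("fast ripple").
fs = 2000.0
rng = np.random.default_rng(8)
t = np.arange(int(2 * fs)) / fs
x = 0.5 * rng.normal(size=t.size)
x += 3 * (np.abs(t - 1.0) < 0.025) * np.sin(2 * np.pi * 400 * (t - 1.0))

mask = detect_eois(x, fs)
print("candidate samples flagged:", int(mask.sum()))
print("burst classified as FR:", classify_eoi(x[int(0.975 * fs):int(1.025 * fs)], fs))
```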

Collaboration


Dive into Laurent Albera's collaborations.

Top Co-Authors

Pierre Comon
Centre national de la recherche scientifique

Pascal Chevalier
Conservatoire national des arts et métiers

Xavier Luciani
Aix-Marseille University